@begin(comment)
sigmod89 paper
Instructions:
----- install the following as donalp.ref ------
@marker(References,donalphabetic)
@Style(Citations=2,CitationType=Brackets,Stringmax=750)
@Define(L1,LeftMargin 4,Indent -4,Above .3,Break,Group)
@TextForm(TAGGER="[@parm(TEXT)]@ @ ")
@LibraryFile(ISIREF)
@Enter(Text,Spacing .8,Spread 0,Spaces Compact,Justification off)
@Process(Bibliography)
@Leave(Text)
----------------
after compiling and printing
- draw in arrows in 2 figures
- white out footnote numbers (2) for footnote 0
- and white out page numbers
@end(comment)
@device(imagen300)
@comment(imagen.dev)
@specialfont(f1="/usr/local/fonts/raster.8/300/cmsy10.r10")
@specialfont(f2="/usr/local/fonts/raster.8/300/cmasc10.r10")
@Equate(K=Value)
@equate(rem=comment)
@string< Implies="@f1( )",
element="@f1(2)",
Equiv="@f1()",
neq="@f1()",
And="@f2( )",
Or="@f2( )",
Exists="@f1(9)",
All="@f1(8)",
leq="@f1()",
geq="@f1()",
rarrow="@f1(!)">
@commandstring(iwff="@i[w@hsp{-.5mm}f@hsp{-1mm}f]")
@style(fontfamily computermodernroman10, spacing 0.8, spread .2,
linewidth 7 inches, leftmargin .75 inches,
topmargin .7 inches, bottommargin .9 inches, references donalphabetic)
@comment[donalph = isialp except spacing is .8]
@make(article)
@modify(verbatim, above .2 , below .2, spread .2, spacing .8)
@modify(itemize, above .2 , below .2, spread .2, spacing .8)
@modify(fnenv, spread .2, spacing .8)
@modify(notestyle, spacing 0.8)
@modify(bodystyle, spacing 0.8)
@modify(hd2, above .1 inch)
@modify(hd3, above .1 inch)
@modify(hd4, above .1 inch)
@modify(Plus,Script +0.1 lines)
@modify(Minus,Script -0.1 lines)
@commandstring(ap="AP5")
@comment[Efficient database triggering
on complex conditions]
@majorheading[Compiling complex database
transition triggers]
@center[Donald Cohen
Information Sciences Institute
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292
(213) 822-1511
DonC@@vaxa.isi.edu
@blankspace(.2 inches)]
@begin(text, linewidth 3.26 inches, columns 2, columnmargin .33 inches, boxed)
@set(FootnoteCounter= -1)
@foot[This research was supported by the Defense Advanced Research Projects
Agency under contract No. MDA903 81 C 0335. Views and conclusions
contained in this report are the authors' and should not be interpreted
as representing the official opinion or policy of DARPA, the U.S.
Government, or any person or agency connected with them.
@blankspace(1.1 inch)]
@b(Abstract:) This paper presents a language for specifying database updates,
queries and rule triggers, and describes how triggers can be compiled into
an efficient mechanism. The rule language allows specification of
both state and transition constraints as special cases, but is more general
than either. The implementation we describe compiles rules and updates
independently of each other. Thus rules can be added or deleted without
recompiling any update program and vice versa.
@section(Introduction)
People often wish to tell a database to watch out for certain
conditions arising in the data, for instance, "whenever an
account's balance drops below a threshold, notify its owner(s)", or
"do not allow a change that causes more than one employee to occupy the
same office". Most databases provide little or no support for stating
and reacting to such conditions, in large part because they can provide
little or no implementation support. Rather, a programmer has to figure out
what updates could possibly cause a condition of interest to arise and
for each such update supply code to recognize that condition.
This paper presents a temporal extension to the language of first order logic
which enables convenient description of a wide variety of rule triggers by
allowing reference to both the state before and the state after a transition.
State and transition constraints are special cases of these more general
conditions. Although the language is simple, we know of no previous
attempts to express what it expresses, Postgres@cite(postgres) included.
The bulk of the paper shows how such specifications can be efficiently
implemented using a network similar to that of the RETE algorithm@cite[forgy].
Of course, triggering, like querying, is not free, and some triggers, like some
queries, simply cannot be implemented cheaply. However, the cost of triggering
with our method is generally competitive with the result of hand
optimization. The implementation we describe has been in daily use for several
years by about a dozen users with databases containing thousands of rules.
@subsection (Related work)
Previous work in this area mostly concentrates on integrity constraints, e.g.,
@cite[Eswaran], @cite[HammerMcleod], @cite[Stonebraker], @cite[HammerSarin],
@cite[blaustein], @cite(postgres), @cite[Qian], etc.
Our work is most closely related to that of Nicolas@cite[Nicolas]
and Forgy@cite[forgy]. Our implementation preserves the best features of
both of these approaches.
Nicolas showed how to efficiently enforce "state constraints", which prevent
invalid database states such as employees sharing offices. Constraints are
represented as well-formed formulae (@b(wffs)) of first order logic. The
method alters a given program that updates the database to take a given
constraint into consideration. For instance, the program that assigned an
office to a person would be altered to check that the office was not assigned
to any other person. This method assumes that the constraint was satisfied
before the update.
Forgy's RETE algorithm is commonly used for implementing expert systems. It
efficiently maintains a set of "matches" for various "patterns" as a (small)
database changes. These patterns may be described approximately as
conjunctions with free variables, where a "match" is a tuple representing
an assignment of values to the free
variables that satisfies the conjunction.@foot[Actually, the RETE language is
more general in some ways - it allows relations of variable arity, matching on
relations, and segment variables. We will ignore these features.] The RETE
algorithm reacts to a database transition. It detects any tuples that the
transition causes to start or stop matching any pattern of interest. Patterns
are compiled into a directed acyclic graph called a "match network". A node
reads data from incoming edges and writes (different) data on outgoing edges.
The computations at the nodes gradually transform a set of database updates
into a set of assignments that start or stop matching the patterns of interest.
Nicolas' algorithm is more general than Forgy's in that it allows arbitrary
wffs, while Forgy's algorithm is more general in that it finds matches (as
opposed to simply checking that there are none). Another advantage of Forgy's
algorithm is that each pattern is compiled once, whereas Nicolas requires that
every constraint be compiled for every update (so adding a new constraint
requires recompiling many update programs). However, Forgy's algorithm is only
appropriate for very small databases, since it uses at least enough internal
storage to hold every fact in the database (and sometimes much more).
Another problem with RETE is that it prevents "query optimization": it
compiles a given pattern into a fixed algorithm which may be far from
the most efficient one. Nicolas, on the other hand, simply compiles
constraints into queries, which can be optimized like any other queries.
Our method generalizes both of these in that it finds matches for
arbitrary "transition conditions."@foot[Transition conditions are formally
defined later but are more general than either of the other two. The low
balance notification is an example.] Like Forgy, we compile each trigger
once. New triggers and update programs can both be added independently (at
little cost) while the database is in use. Finally, our algorithm is, at
worst, almost as efficient as either of the other two (for the cases they
handle), and sometimes much more so.
@comment[
We will refer to the union of state constraints and transition constraints
as "consistency rules". We will call programs invoked by generalized
triggering "automation rules". We refer to the union of these as "rules".
Our method allows rules to be compiled independent of any programs
that update the database. New rules can be added without thinking
about previously existing updating programs and new updating programs can
be written without thinking about previously declared rules. Of course,
further optimization may be possible if this independence is given up.
However, with minor exceptions, our method achieves the efficiency of
Nicolas even while retaining this independence.
]
@subsection(Outline)
Section 2 describes our query language and the extension with which
rules are specified. Section 3 describes our implementation which
compiles the rule triggers into efficient programs and runs these
programs at appropriate times. Section 4 deals with efficiency
issues. It discusses some limitations of our method, implementation
alternatives, and the considerations that make them more or less desirable.
@section(The language)
Our method is implemented as part of @ap, a database-oriented extension of
Common Lisp@cite[CLtL]. We will insulate the reader from Lisp syntax.
The query language of @ap is an extension to first order logic. It includes
relations of fixed arity, boolean connectives @k(and) (and), @k(or) (or),
@r(~) (not), @k(implies) (implies), etc., and quantifiers
@k(all) (for-all) and @k(exists) (exists).
@ap adds to lisp the following database constructs which we will use freely:
@begin(itemize)
@t[insert] @i(relation) (o@-(1), ..., o@-(i))@*
adds the tuple (o@-(1), ..., o@-(i)) to the (i-ary) relation @i(relation).
It does nothing if the tuple is already present.
@t[delete] @i(relation) (o@-(1), ..., o@-(i))@*
deletes the tuple (o@-(1), ..., o@-(i)) from the (i-ary) relation @i(relation).
It does nothing if the tuple is already absent.
@t[if @iwff ...]@*
is conditional execution, depending on the truth value of @iwff.
@t[iterate over@*
@i(var)@-(1), ..., @i(var)@-(j) suchthat @iwff ...]@*
where @t[@i(var)@-(1), ..., @i(var)@-(j)] are the variables free in @iwff,
iterates over tuples (o@-(1), ..., o@-(j)) such that the result of substituting
o@-(i) for (free) occurrences of @i(var@-(i)) (1@k(leq)i@k(leq)j) in @iwff
is true.
@t{atomic [@i(program@-(1)); ...; @i(program@-(k))]}@*
executes the programs in order, but delays all database updates (insert's
and delete's) until the end. They are then combined into a single
transition. If the updates cannot @i(all) be done for some reason (see
below), then @i(none) of the updates are done.
@end(itemize)
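As a rough illustration of the semantics of these constructs (AP5 itself is a Common Lisp extension; the Python class and all names below are hypothetical), the no-op behavior of insert and delete and the update-delaying behavior of atomic might be sketched as:

```python
class Database:
    """Toy in-memory database: each relation is a set of tuples.
    Illustrative sketch only; not AP5's implementation."""
    def __init__(self):
        self.relations = {}
        self.pending = None   # deferred updates inside an atomic block

    def _rel(self, name):
        return self.relations.setdefault(name, set())

    def insert(self, name, tup):
        # Does nothing if the tuple is already present.
        if self.pending is not None:
            self.pending.append(('insert', name, tup))
        else:
            self._rel(name).add(tup)

    def delete(self, name, tup):
        # Does nothing if the tuple is already absent.
        if self.pending is not None:
            self.pending.append(('delete', name, tup))
        else:
            self._rel(name).discard(tup)

    def atomic(self, *programs):
        # Run the programs in order, delaying all updates;
        # then apply the updates as a single transition.
        self.pending = []
        for prog in programs:
            prog(self)
        updates, self.pending = self.pending, None
        for op, name, tup in updates:
            if op == 'insert':
                self._rel(name).add(tup)
            else:
                self._rel(name).discard(tup)

db = Database()
db.insert('Office', ('Joe', 'Office11'))
# Atomically swap offices in one observable transition:
db.atomic(lambda d: d.insert('Office', ('Joe', 'Office12')),
          lambda d: d.delete('Office', ('Joe', 'Office11')))
```

The sketch omits what the real atomic must also do: check consistency rules against the combined transition and apply none of the updates if any constraint fails.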
In addition to these programming constructs, there are rules which are
declared with syntax described below. Semantically there are several disjoint
classes of rules. When a transition is proposed, "consistency rules"
ensure consistency with any integrity constraints. These rules may cause the
transition to be altered or rejected. This process is described in more
detail below. For now we merely note that all accepted transitions satisfy all
constraints. The rule preventing shared offices is a consistency rule.
The rule triggering mechanism is also used to maintain "materialized views",
but that will not be described in detail here.
When a transition is accepted, it invokes "automation rules", so called because
they are used to automate such activities as sending low balance notifications.
It is not inconsistent to have a low balance, or even to have a low balance and
not yet have been notified. Automation rules may cause further database transitions.
@ap does not specify the
order in which automation rules are executed. It only promises that every
automation invoked by a transition will run before control returns to the
program that requested the transition.
The atomic construct is especially relevant for rule triggering, because
it defines the observable states of the database. Constraints apply only
to these observable states. For instance, if every employee were required
to have a unique office, and the employee Joe had Office11, then@*
@t[insert Office(Joe, Office12)] would not be allowed since that would
cause him to have two offices. Likewise@*
@t[delete Office(Joe, Office11)] would not be allowed since that would
cause him to have no offices. However,@*
@verbatim{atomic [insert Office(Joe, Office12);
delete Office(Joe, Office11)]}
would be allowed, unless it violated other constraints.
Similarly, automation rules react to entire database transitions, not to
"parts" of transitions. In the low balance rule,
suppose each account had its own threshold. Then, reducing the balance from
100 to 10 in the same atomic transition as reducing the threshold from 20
to 5 would not cause a notification.
@subsection(Language extension)
@label(extension)
We now introduce a simple language extension that allows us to specify triggers
such as that of the low balance rule. The extension
allows us to talk about two adjacent database states: the state before the
transition under consideration and the (proposed) state after. In the
context of a transition, queries may use two temporal operators. Either could
be defined in terms of the other, but it is convenient to have both:@*
@t[Previously @iwff ]@*
is true if @iwff was true before the update.@*
@t[Start @iwff ]@*
is true if @iwff was false before the update and is true after.@*
In the context of a (proposed) transition, any wff not in the scope of
a Start or Previously is evaluated in the new state.
Since we only provide access to two states, we do not allow nested
temporal references. This extension does @i(not) allow
general temporal reference; for instance, we cannot notify owners only the
first time in any month that their accounts fall below threshold.@foot[Of course,
this can be "programmed" by introducing additional data. The point is that
the trigger cannot be directly expressed in our language.]
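Since only two states are visible, each operator is a simple function of the old and new states. A minimal sketch (a state represented as a map from relation names to tuple sets; all names hypothetical):

```python
def previously(old_state, new_state, wff):
    """Previously wff: wff held in the state before the transition."""
    return wff(old_state)

def start(old_state, new_state, wff):
    """Start wff: wff was false before the transition and is true after."""
    return (not wff(old_state)) and wff(new_state)

# A wff is modeled as a predicate over a state.
old = {'balance': {('acct1', 100)}}
new = {'balance': {('acct1', 10)}}
low = lambda s: any(bal < 20 for (a, bal) in s['balance'] if a == 'acct1')
```

Here the low-balance condition starts to hold across the transition: it was false in the old state and is true in the new one.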
We also introduce a syntactic construct called a @b[Description] of the
form "@w{@i[variables] @t[suchthat] @iwff}", as illustrated above in the
loop construct. @i[Variables] are the variables free in @iwff.
A tuple @b(satisfies) a description if substitution of
the values in the tuple for (free) occurrences of @i(variables) in @iwff
is true. In the special case where @iwff has no free variables,
the empty tuple satisfies "@w[@t(nil suchthat) @iwff]" if @iwff is true.@*
(Note: "nil" denotes the empty variable list, not a single variable named nil!)
We refer to the set of such tuples as the @b(matches) of the description.
We refer to the process of computing this set as @b(matching) the description.
@paragraph(Declaring automation rules)
We are now in a position to introduce syntax for the rules described above.
Automation rules are declared as follows:
@begin(group)
@verbatim{
AutomationRule notify-low-balance
trigger: person, acct suchthat
owns(person, acct)@k(and)
Start [@k(exists) bal, min
[balance(acct, bal)@k(and)
threshold(acct, min)@k(and)
balance>
Thus if no rules could be triggered by a particular class
of updates, e.g., deletion of a tuple from the Office relation, then no work
is wasted trying. Other forms of discrimination will be evident from the more
formal description of the matcher below.
]
When database transitions are proposed, @ap must find all tuples that satisfy
all rule triggers. This process requires three inputs: the previous database,
P, the set of proposed database updates, U, with no-ops removed, and a set of
relevant descriptions, D, used as rule triggers. The output is, for each
d@k(element)D, the tuples satisfying d. Of course, by the time a transition
is proposed, D has been compiled into a network of @b"matchnodes", N. Each
matchnode, n,
has an associated description, d@-(n), and a program, p@-(n), and is
"connected" to other nodes, its predecessors and successors. D is a subset of
@w[{d@-(n) | n@k(element)N}]. (This set also contains some "intermediate
results" which are not in D.)
Tuples satisfying d@-(n) are computed at n and are used as the inputs to the
successors of n. If n has predecessors, p@-(n) is run on each tuple produced
as output of any predecessor to compute a set of tuples that satisfy d@-(n).
The set of all tuples satisfying d@-(n) (the output of n) is simply the union
of all sets computed by p@-(n). The only nodes without predecessors are those
that correspond to database updates, e.g.,@*
@t[x, y suchthat Start office(x, y)].@*
Their outputs are computed directly from U.
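The propagation scheme just described might be sketched as follows (a hypothetical Python rendering; real matchnode programs are compiled, and they also consult the previous database P):

```python
class MatchNode:
    """Sketch of a matchnode: runs its program on each tuple received
    from a predecessor and feeds the union of the results to its
    successors. Input nodes have no program; their tuples come from U."""
    def __init__(self, program=None):
        self.program = program    # tuple -> iterable of output tuples
        self.successors = []
        self.output = set()       # all tuples satisfying this node's description

    def feed(self, tuples):
        produced = set()
        for tup in tuples:
            produced.update(self.program(tup) if self.program else [tup])
        self.output |= produced
        for succ in self.successors:
            succ.feed(produced)

# Input node for "x, y suchthat Start office(x, y)"; a successor node
# (hypothetical) keeps only tuples whose office is Office11.
inp = MatchNode()
flt = MatchNode(lambda t: [t] if t[1] == 'Office11' else [])
inp.successors.append(flt)
inp.feed({('Joe', 'Office11'), ('Ann', 'Office12')})
```

Input nodes pass through the tuples computed from U; each successor's program transforms them one step closer to tuples satisfying its own description.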
@comment[The programs at matchnodes, unlike RETE matchnodes, access P as well as U.]
@end(text)
@newpage()
@begin(figure)
@begin(verbatim)
@hpos(2.5 inches) @b(node 1)
@hpos(2.5 inches)@r(output description:)
@hpos(2.5 inches) emp, off suchthat
@hpos(2.5 inches) start Office(emp, off)
@b(node 2)@hpos(4.4 inches) @b(node 3)
@r(program:)@hpos(4.4 inches)@r(program:)
accept emp1, off (from node 1)@hpos(4.4 inches) accept emp2, off (from node 1)
iterate over emp2 suchthat@hpos(4.4 inches) iterate over emp1 suchthat
Office(emp2, off)@k(and)emp1@k(neq)emp2@hpos(4.4 inches) Office(emp1, off)@k(and)emp1@k(neq)emp2
produce emp1, emp2, off@hpos(4.4 inches) produce emp1, emp2, off
@r(output description:)@hpos(4.4 inches)@r(output description:)
emp1, emp2, off suchthat@hpos(4.4 inches) emp1, emp2, off suchthat
[start Office(emp1, off)]@k(and)@hpos(4.4 inches) [start Office(emp2, off)]@k(and)
Office(emp2, off)@k(and)emp1@k(neq)emp2@hpos(4.4 inches) Office(emp1, off)@k(and)emp1@k(neq)emp2
@hpos(2.5 inches) @b(node 4)
@hpos(2.5 inches)@r(program:)
@hpos(2.5 inches) produce union of outputs
@hpos(2.5 inches) of nodes 2 and 3
@hpos(2.5 inches)@r(output description:)
@hpos(2.5 inches) emp1, emp2, off suchthat
@hpos(2.5 inches) start [Office(emp1, off)@k(and)
@hpos(2.5 inches) Office(emp2, off)@k(and)
@hpos(2.5 inches) emp1@k(neq)emp2]
@end(verbatim)
@caption(match network for office constraint)
@end(figure)
@begin(text, linewidth 3.33 inches, columns 2, columnmargin .33 inches, boxed)
@subsection(Example)
@begin(group)
To enforce the constraint:
@verbatim{
@k(all) emp1, emp2, off
[[office(emp1, off)@k(and)
office(emp2, off)]
@hsp(4 points)@k(implies)emp1=emp2]}
@group{(in the style of the last consistency rule, with the repair)
@ap builds a node for:
@verbatim"emp1, emp2, off suchthat
Start* @r(~)[[office(emp1, off)@k(and)
office(emp2, off)]
@hsp(4 points)@k(implies)emp1=emp2]"
This simplifies to:
@verbatim"emp1, emp2, off suchthat
Start* [office(emp1, off)@k(and)
office(emp2, off)@k(and)
emp1@k(neq)emp2]"}
The @k(neq) relation is known to be @b"static" (never updated), so we don't need
to worry about it @i(becoming) true. This description
compiles into four nodes as shown in the figure above.
@end(group)
@comment[This is compiled (and optimized) by the @ap compiler. Notice this program
accesses the "new" state, a combination of P and U.]
Note that the programs access the "new" state of the database, so that
atomically replacing one occupant with another will not cause a match.
The implementation of this is discussed later.
@subsection(Compiling the match network)
The "matchnode compiler", a program for constructing nodes, takes as
input a variable list and a simplified wff.@foot[The simplifier reduces
all connectives to {and, or, not}, pushes negations all the way inward
and does "obvious" simplifications, e.g., @r(~)@r(~)P = P.]
We will assume that all Previously's have been translated into Start's:@*
Previously P @k(equiv) (Start @r(~)P@k(or)(P@k(and)@r(~)Start P))
The result is either a node that computes tuples satisfying the description, an
indication that the description is unsatisfiable, e.g., @*@t[x, y suchthat
Start x=y], or an indication that the description cannot be compiled, e.g., it is
"irrelevant". Another reason a description cannot be compiled is that its
program cannot be compiled (see @cite[compiler]). @foot[In @ap,
"optimizations" such as Start* and static relations sometimes influence
whether a description can be compiled. For instance, they may "optimize out"
something that is uncompilable.]
The matchnode compiler builds matchnodes that can be understood
in four levels, described below. The matchnodes in any level have
predecessors only in earlier levels.
@para[Input nodes]
Input nodes are the simplest. Their descriptions have two forms:
@verbatim[x@-(1),...,x@-(n) suchthat Start R(x@-(1),...,x@-(n))
x@-(1),...,x@-(n) suchthat Start @r(~)R(x@-(1),...,x@-(n))]
where R is an n-ary relation. A node with the first description computes
the set of tuples added to R, while a node with the second description
computes the set of tuples deleted from R.
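Assuming U is represented as a set of (operation, relation, tuple) triples with no-ops already removed (a hypothetical representation), the outputs of both kinds of input node fall directly out of U:

```python
def input_node_outputs(U, relation):
    """Given updates U as (op, relation, tuple) triples with no-ops
    removed, return the tuples added to and deleted from relation,
    i.e., the outputs of the two input-node forms for that relation."""
    added = {tup for op, rel, tup in U if op == 'insert' and rel == relation}
    deleted = {tup for op, rel, tup in U if op == 'delete' and rel == relation}
    return added, deleted

U = [('insert', 'office', ('Joe', 'Office12')),
     ('delete', 'office', ('Joe', 'Office11')),
     ('insert', 'owns', ('Joe', 'acct1'))]
```

Because no-ops have been removed, every insert triple really starts R holding of its tuple, and every delete triple really starts ~R holding.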
@para[Starts of other literals]
@label(otherlits)
More general literals (non-compound wffs and their negations) differ from those
above only in that their argument lists may contain constants and duplicated
variables, and that the variables in the arguments and in the variable list
may appear in different orders.
Starts of such literals are handled by successors of input nodes, as illustrated
by the nodes described below.
These examples all take input from the node for
@verbatim[x, y suchthat Start office(x, y)]
@begin(itemize)
@i(Constants:) The node for @*
@t[y suchthat Start office(John, y)]@*
has this program:@*
@t[if x=John produce y]
@i(Repeated variables:) The node for @*
@t[x suchthat Start office(x, x)] @*
has this program:@*
@t[if x=y produce x]
@i(Permutations:) The node for @*
@t[y, x suchthat Start office(x, y)]@*
has this program:@*
@t[produce y, x]
@end(Itemize)
As an example that combines all these cases, suppose we have an input node,
@verbatim[x, y, z, w suchthat Start R(x, y, z, w)]
and we want a node for
@verbatim[x, y suchthat Start R(2, y, x, x)]
For purposes of exposition we rename the variables to match the input node:
@verbatim[z, y suchthat Start R(2, y, z, z)]
To obtain such a node we start with the constant:
@verbatim[y, z, w suchthat Start R(2, y, z, w)
program: if x=2 produce y, z, w]
Next the repeated variable:
@verbatim[y, z suchthat Start R(2, y, z, z)
program: if x=2 if z=w produce y, z]
Finally the permutation gives:
@verbatim[z, y suchthat Start R(2, y, z, z)
program: if x=2 if z=w produce z, y]
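The three steps compose into a single filter-and-project program. A sketch for the final node above, taking tuples (x, y, z, w) added to R (Python stands in for the compiled program):

```python
def literal_node_program(x, y, z, w):
    """Node program for "z, y suchthat Start R(2, y, z, z)", given an
    input tuple (x, y, z, w) added to R: test the constant (x=2) and
    the repeated variable (z=w), then produce the permutation (z, y)."""
    if x == 2 and z == w:
        yield (z, y)

out = set()
for tup in [(2, 'a', 'b', 'b'),   # passes both tests
            (3, 'a', 'b', 'b'),   # fails the constant test
            (2, 'a', 'b', 'c')]:  # fails the repeated-variable test
    out.update(literal_node_program(*tup))
```

Only the first tuple survives, and it is produced in the node's own variable order (z, y).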
@para[Conjunctions containing a Start of a literal]
@label(conjuncts)@label(existentials)
An example is
@verbatim{emp1, emp2, off suchthat
[[Start office(emp1, off)]@k(and)
office(emp2, off)@k(and)
emp1@k(neq)emp2]}
These nodes have one predecessor which computes tuples satisfying the Start conjunct:
@verbatim[emp1, off suchthat
Start office(emp1, off)]
(This is really the same input node that we saw before. We rename the variables
for purposes of exposition.)
The program is constructed from these two descriptions. It accepts as
inputs the variable list of the predecessor node's description, @i[inputs],
and does the following:
@verbatim[iterate over @i[outputs - inputs] suchthat
@i[remainder-of-conjunction]
produce @i[outputs]]
where @i[outputs] is the variable list of this node's description,
@i[outputs - inputs] is the set difference (order is immaterial) and
@i[remainder-of-conjunction] is the conjunction to be matched with the
Start conjunct removed. In this case the program is:
@verbatim{iterate over emp2 suchthat
[office(emp2, off)@k(and)emp1@k(neq)emp2]
produce emp1,emp2,off}
When @i[outputs - inputs] is empty, the loop
simplifies to a test:@*
@t"if @i[remainder-of-conjunction] produce @i[outputs]"@*
If several conjuncts are Start's, a choice can be made on grounds
of efficiency.
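For the office example, the constructed program can be sketched as follows, with the new database state standing in for the combination of P and U (representation hypothetical):

```python
def office_conjunction_program(db, emp1, off):
    """Node program for
         emp1, emp2, off suchthat
           [Start office(emp1, off)] and office(emp2, off) and emp1 /= emp2
    given an input (emp1, off) from the Start-office input node:
    iterate over emp2 satisfying the remainder of the conjunction,
    producing (emp1, emp2, off)."""
    for (emp2, o) in db['office']:
        if o == off and emp1 != emp2:
            yield (emp1, emp2, off)

# New state after inserting office(Joe, Office11), where Sam already
# occupies Office11 and Ann occupies Office12:
db = {'office': {('Joe', 'Office11'), ('Sam', 'Office11'), ('Ann', 'Office12')}}
matches = set(office_conjunction_program(db, 'Joe', 'Office11'))
```

The remainder of the conjunction is evaluated as an ordinary query over the new state, so it could equally be handed to a query optimizer.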
This level also handles outer existential quantifiers, e.g.,
@verbatim{
emp1 suchthat
@k(Exists) emp2, off
[[Start office(emp1, off)]@k(and)
office(emp2, off)@k(and)
emp1@k(neq)emp2]}
The program is almost the same as if the existential variables were in
@i[outputs], except that the existential variables are no longer produced;
they remain quantified unless they appear in @i(inputs):
@verbatim{if @k(Exists) emp2
[office(emp2, off)@k(and)emp1@k(neq)emp2]
produce emp1}
@comment[This has the same program, except that the existential variables
are no longer produced. Recall that duplicate produced tuples are removed,
so if instead of producing the two tuples (a,b,c) and (a,c,b) we produce the
two tuples a and a, the second will be discarded.]
@comment[Up to here it should be obvious that the programs produce the
"right" set of tuples.]
@para[Other compound wffs]
@label(disjuncts)
An example is
@verbatim{emp1, emp2, off suchthat
Start* [office(emp1,off)@k(and)
office(emp2, off)@k(and)
emp1@k(neq)emp2]}
Such descriptions, "@t[@i(vars) suchthat @iWff]",
are translated into equivalent descriptions in disjunctive normal form (@b[DNF]),@*
"@w{@t[@i(vars) suchthat @i[W@-(1)]@k(or)...@k(or)@i[W@-(n)]]}"
where each @i[W@-(i)] takes the form described in @ref(conjuncts).
Intuitively, @i[W@-(i)] corresponds to one occurrence of a non-static
literal, L, in @iWff, and has the meaning that @iWff becomes true, at least
in part, due to L becoming true. @i(W@-(i)) contains @t[Start @r"L"] as one conjunct.
Informally, we call "the rest of" @i[W@-(i)]
the @b(residue) of @iWff with respect to L.
The matchnode for "@t[@i(vars) suchthat @iWff]" simply computes the
union of the matches of its predecessors, which match the descriptions
"@t[@i(vars) suchthat @i[W@-(i)]]". Unless all disjuncts use all of @i(vars)
there will be infinitely many matches, which will prevent compilation.
The desired form is achieved by applying distributive laws such as:
@verbatim{@k(Exists)x[P@k(or)Q] = @k(exists)xP@k(or)@k(exists)xQ}
In general this should be done from the outside in: wffs of the wrong
form may appear as subwffs of acceptable wffs, in which case they need
not be rewritten. For instance,
@verbatim{y suchthat
[Start R(y)]@k(and)@k(Exists)x[P(x,y)@k(or)Q(x,y)]}
matches the pattern of @ref(conjuncts).
When a conjunction must be rewritten it
must be distributed over @i(some) conjunct, C, with the property that
@t[@i(variables) suchthat @r(C)]
is relevant, where @i(variables) is a list of variables free in C.
@verbatim[P@k(and)(Q@k(or)R) = (P@k(and)Q)@k(or)(P@k(and)R)
P@k(and)@k(exists)xQ = @k(exists)x(P@k(and)Q)]
Usually there is no choice: only one conjunct contains a Start.
If no such conjunct leads to a compilable program, the description cannot be matched.
Start's are pushed inward (only as needed) by using the following identities:
@begin[group]
@begin(Verbatim)
[Start [P@k(and)Q]] =
[[Start P]@k(and)Q]@k(or) [[Start Q]@k(and)P]
[Start [P@k(or)Q]] =
[[Start P]@k(and)@r(~)[Previously Q]]@k(or)
[[Start Q]@k(and)@r(~)[Previously P]]
[Start @k(all)xP] = @k(exists)x[Start P]@k(and)@k(all)xP
[Start @k(exists)xP] =
@k(exists)x[Start P]@k(and)@r(~)[Previously @k(exists)xP]
[Start* [P@k(and)Q]] =
[[Start* P]@k(and)Q]@k(or)[[Start* Q]@k(and)P]
[Start* [P@k(or)Q]] =
[Start* P]@k(or)[Start* Q]
[Start* @k(all)xP] = @k(exists)x[Start* P]@k(and)@k(all)xP
[Start* @k(exists)xP] = @k(exists)x[Start* P]@r{@foot"This definition
of Start* has the properties that [Start P] @k(implies) [Start* P] and that
[Start* P] @k(implies) P. It thus meets the requirements of Start*,
since @w([Previously @r(~)P]) @k(implies) [[Start P] @k(equiv) P]."}
@end(verbatim)
@end(group)
Note that, for literals, Start* is the same as Start.
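The rewriting step can be sketched as a recursive function over wffs represented as nested tuples (representation hypothetical); shown here are the two identities for Start over a conjunction and a disjunction:

```python
def push_start(wff):
    """Push a Start inward one step, per the identities for AND and OR.
    Wffs are nested tuples; 'prev' abbreviates Previously."""
    assert wff[0] == 'start'
    body = wff[1]
    if body[0] == 'and':
        p, q = body[1], body[2]
        # [Start [P and Q]] = [[Start P] and Q] or [[Start Q] and P]
        return ('or', ('and', ('start', p), q),
                      ('and', ('start', q), p))
    if body[0] == 'or':
        p, q = body[1], body[2]
        # [Start [P or Q]] = [[Start P] and ~[Previously Q]] or
        #                    [[Start Q] and ~[Previously P]]
        return ('or', ('and', ('start', p), ('not', ('prev', q))),
                      ('and', ('start', q), ('not', ('prev', p))))
    return wff  # Start of a literal: nothing to push

rewritten = push_start(('start', ('and', 'P', 'Q')))
```

Applied recursively, and only as needed, this yields the DNF whose disjuncts each contain one Start of a literal, as required by the previous level.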
Notice that outer @k(all)'s cannot be matched.
Such descriptions look like
@verbatim[x suchthat @k(All)y Start office(x, y)]
@ap assumes a world with infinitely many objects.@comment[, e.g.,
all numbers are objects.] Therefore this describes
a transition with infinitely many updates.@foot{The
example can be altered to be satisfied by finite transitions, e.g.,
@t{nil suchthat @k(All)y [Q(y)@k(implies)Start P(y)]}
but this is "irrelevant" if Q is ever empty.
For quantifying over fixed finite sets, @k(and) seems
more appropriate than @k(All). In any case, outer @k(All)'s have not arisen in
practice.}
Note the result contains at most one disjunct per literal occurrence
in the original wff.
@subsection(Detailed example)
@begin(group)
We now illustrate by carrying out the procedure for the automation trigger:
@verbatim{person, acct suchthat
owns(person, acct)@k(and)
Start @k(exists) bal, min
[balance(acct, bal)@k(and)
threshold(acct, min)@k(and)
bal