From Action Systems to Modular Systems




R. J. R. Back1 and K. Sere2

1 Åbo Akademi University, Department of Computer Science, FIN-20520 Turku, Finland
e-mail: [email protected]

2 University of Kuopio, Department of Computer Science and Applied Mathematics
P.O. Box 1627, FIN-70211 Kuopio, Finland, e-mail: [email protected]

Abstract. Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.

1 Introduction

Refining a large program as a single unit is seldom practical. Instead, one usually identifies a subcomponent of the program, and constructs a refinement of the subcomponent. Ideally, replacing the subcomponent with its refinement should then result in a refinement of the whole program. If the composition methods used in the program are all monotonic with respect to the refinement relation, this is indeed the case. Even if this is not the case, the composition method may be almost monotonic, so that the replacement does give a refinement of the whole program, provided certain additional conditions are satisfied.

An important issue in programming language design is to identify monotonic or almost monotonic methods for program composition, which permit refinement of one component with little or no regard for the rest of the program. A program that is structured in a way that permits this is loosely said to be modular, and the composition method is called a modularization mechanism. There are different kinds of modularization mechanisms which permit refinement of components. We

will here study three important modularization mechanisms: procedures, parallel composition and data encapsulation. These are all common modularization mechanisms that are found in many programming languages, and have proved to be indispensable in mastering the complexity of constructing large programs.

The logical basis for program refinement that we will use here is the refinement calculus, originally described by Back [1, 2] as a formal framework for stepwise refinement of sequential programs. This calculus extends Dijkstra’s weakest precondition semantics [13] for total correctness of programs with a relation of refinement between program statements. The refinement relation is defined in terms of the weakest preconditions of statements, and expresses the requirement that a refinement must preserve total correctness of the statement being refined. A good overview of how to apply the refinement calculus in practical program derivations is given by Morgan [19].

The modularization facilities that we will put forward here are based on the action system approach, introduced by Back and Kurki-Suonio [5, 6]. An action system describes the behavior of a parallel system in terms of the atomic actions that can take place during the execution of the system. Action systems provide a general description of reactive systems, capable of modeling systems that may or may not terminate and where atomic actions need not terminate themselves. Arbitrary sequential program statements can be used to describe an atomic action. Because action systems can be seen as special kinds of sequential systems, the refinement calculus framework carries over to the refinement of parallel systems in [8]. Reactive system refinement is handled by existing techniques for data refinement of sequential programs within the refinement calculus [3]. Related formalisms, like the UNITY approach [12] and the IP-language [14], lack most of the modularization facilities described here.

The focus in this paper is on describing the way in which modular constructs can be modeled in an action system framework, when program refinement is assumed to be carried out within the refinement calculus. We will not go any deeper into the methods by which modularity is used in program refinement, nor will we give specific refinement rules for modular components. These issues have been treated in other papers, and the interested reader can consult them for more details [9, 11, 7]. The features of our modular programming language have been chosen so that refinement of program modules is simple and straightforward. We want to show how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.

Overview We show in this paper how to extend a simple basic language with modularization mechanisms, to support the stepwise refinement of large programs. We take the guarded commands language of Dijkstra [13] as our basis. We describe the usual refinement calculus extensions to this language, and extend it with a standard block construct, to permit the use of local variables in program statements. This is our base language.

Our first extension of the base language is to permit local constants in a block, in addition to local variables. This allows us to model parameterless procedures. We explain the meaning of constants by syntactic reduction, i.e., by showing how a statement with constants can be reduced to an equivalent statement in the base language (which has no constants).

Procedures with parameters require an extension of statements and, in particular, of the block construct. We extend blocks with formal parameter declarations, and define an adaption operation, which takes a block with formal parameters and a list of actual parameters, and constructs a standard block without formal parameters that serves as an ordinary program statement. Adaption models parameter passing, and is again defined by syntactic reduction.

Our next extension is to add parallelism to our language, in the form of action systems. An action system is basically a block, with some local variables and with a single iteration that can be executed in a parallel fashion under appropriate atomicity constraints. However, any parallel execution of the iteration statement is equivalent in effect to some sequential nondeterministic execution. Hence, from a logical point of view, it is sufficient to treat action systems as ordinary sequential statements.

We define parallel composition of action systems in order to model reactive systems. The definition is again by syntactic reduction, showing how any parallel composition of action systems can be reduced to a single action system. To get more flexibility in forming parallel compositions, we generalize the block construct by permitting local variables to be exported (or revealed), so that they can be accessed from other action systems in a parallel composition. We also make the distinction between local variables that are created at block entry and destroyed at block exit, and local variables that exist before block entry and continue to exist after the action system has terminated. The latter models persistent data structures.

With these extensions, we have in fact a module description language, where, in addition to parallel composition of modules, we also define sequential composition of modules and nested modules with hiding. The modules that we define in this way provide both data encapsulation and parallel processing.

Having defined the module facility, we finally show how these modules communicate with each other. Communication between modules does not need any further extensions of the language; the communication mechanisms are consequences of features already defined. Basically, we show that different ways of partitioning an action system into parallel components correspond to different kinds of communication mechanisms. We exemplify three such mechanisms: shared variable communication, shared action communication (of which CSP/Occam communication is a special case [16, 17]) and remote procedure calls (of which, e.g., the Ada rendez-vous is a special case [22]). A fourth communication mechanism is provided by persistent variables in sequential composition of action systems.

We end this treatment with a short discussion on how to extend an existing modular programming language to provide the features that we have defined here. We choose the Oberon language [23] for this purpose, as it is a very simple language, has certain basic design decisions that are very close to what we need, and does not have any explicit facilities for parallelism.

We describe the modularization facilities as a sequence of syntactic and semantic extensions of a simple base language, because this is a simple way of presenting the underlying ideas. This should not be treated as a concrete language definition, because we have simplified a number of issues and omitted a number of restrictions that would be needed in order to turn the proposal into a rigorous language definition.

2 Guarded commands

Statements S in the guarded command language of Dijkstra [13] are defined by the following syntax:

S ::= abort          {abortion, nontermination}
  | skip             {empty statement}
  | w := e           {(multiple) assignment}
  | S1; S2           {sequential composition}
  | if A fi          {conditional composition}
  | do A od          {iterative composition}

A ::= g → S          {guarded command, action}
  | A1 [] A2         {nondeterministic choice}

Here w = w1, . . . , wn is a list of program variables and e = e1, . . . , en a corresponding list of expressions, n > 0. Also, g is a boolean expression. Each variable wi is associated with a type that restricts the values that the variable can take. We will not define the syntactic classes of variables, types and expressions here; any standard definition of their syntax may be assumed.

Sequential composition of statements, as well as nondeterministic choice, is associative, so we may write S1; S2; . . . ; Sn and A1 [] A2 [] . . . [] An without explicit bracketing. We will write gA for the guard g and sA for the body S of an action A = g → S .

A statement operates on a set of program variables. It describes how a program state is changed, where the program state is determined by the values of the variables that are manipulated by the statements. We write S : v when we want to explicitly indicate that v are the program variables in the state that S operates on.

The abort statement will be considered equivalent to a nonterminating loop.

The skip statement has no effect on the state. The assignment statement is well defined if wi is a variable and ei and wi have the same type, i = 1, . . . , n. The effect is to first compute the values ei , i = 1, . . . , n, and then assign wi the value ei , for i = 1, . . . , n. Sequential composition S1; S2; . . . ; Sn is executed by first executing S1 and, if S1 terminates, continuing by executing S2, and so on, executing Sn last.

An action Ai = gi → Si in the conditional statement if A1 [] . . . [] An fi is said to be enabled in a given state if the guard gi is true in that state. The conditional statement is executed by choosing some enabled action Ai and executing its body Si . If no action is enabled in the state, then the conditional statement behaves as an abort statement. If more than one action is enabled, then one of these is chosen nondeterministically. The nondeterminism is demonic, in the sense that there is no way of influencing which action is chosen.

The iteration statement do A1 [] . . . [] An od is executed by first choosing an enabled action Ai = gi → Si , executing its body Si , and then repeating this until a state is reached where no action is enabled anymore and the iteration stops. If there is more than one action for which the guard is satisfied, then the choice is nondeterministic, in the same way as for the conditional statement.

The iteration statement terminates only if a state is eventually reached where no action is enabled. Otherwise, the execution goes on forever. If no action is enabled initially, the iteration terminates immediately, and the iteration statement behaves as a skip statement.

Example The following statement is intended to sort the values of the variables x = x1, x2, . . . , x5 into nondecreasing order according to the variable index:

SORT0 ≡ do x1 > x2 → x1, x2 := x2, x1 {EX 1}

[] x2 > x3 → x2, x3 := x3, x2 {EX 2}

[] x3 > x4 → x3, x4 := x4, x3 {EX 3}

[] x4 > x5 → x4, x5 := x5, x4 {EX 4}

od

The only actions that can take place are exchanges of two neighboring values when they are in the wrong order. Termination is guaranteed, because each executed action decreases the number of pairs that are in the wrong order. Upon termination, the values of the variables have been sorted. Execution is nondeterministic, because two actions may be enabled at the same time. This nondeterminism is, however, harmless: it does not affect the outcome of the execution.
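The iteration semantics above can be made concrete with a small interpreter. The following Python sketch is our own illustration, not notation from the paper: actions are modeled as (guard, body) pairs over a state dictionary, and the demonic choice among enabled actions is approximated by random selection.

```python
import random

def run_do_od(actions, state):
    """Execute do A1 [] ... [] An od: repeatedly pick an enabled action
    and run its body; terminate when no guard holds anymore."""
    while True:
        enabled = [body for guard, body in actions if guard(state)]
        if not enabled:
            return state               # no action enabled: iteration stops
        random.choice(enabled)(state)  # demonic choice modeled by random pick

def exchange(i, j):
    """Action EX: guard x_i > x_j, body x_i, x_j := x_j, x_i."""
    def guard(s): return s[i] > s[j]
    def body(s): s[i], s[j] = s[j], s[i]
    return guard, body

# SORT0: exchange out-of-order neighbouring values among x1..x5 (keys 1..5)
SORT0 = [exchange(k, k + 1) for k in range(1, 5)]

state = {1: 5, 2: 3, 3: 4, 4: 1, 5: 2}
run_do_od(SORT0, state)
print([state[k] for k in range(1, 6)])   # → [1, 2, 3, 4, 5]
```

Termination of this run mirrors the argument in the text: every executed exchange decreases the number of out-of-order pairs, so the loop cannot run forever.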

3 Refinement calculus extensions

The work on refinement calculus has led to a number of extensions to the guarded command language, which make the language considerably more powerful as a specification language. The main extensions are the nondeterministic assignment [1] (called specification statements in [19]), miraculous statements [4, 18, 20, 21] and angelic nondeterminism [10]. We can include the first two extensions in the language by the following extensions to the syntax (we do not add angelic nondeterminism here, because its interplay with the demonic nondeterminism requires an extended execution model):

S ::= . . .
  | w := w ′.Q       {nondeterministic assignment}
  | A                {action}

An example of a nondeterministic assignment statement is x , y := x ′, y ′.(x ′ = x + 1 ∧ y < y ′ ≤ 20). This statement assigns to the variables x and y new values x ′, y ′ that satisfy the condition x ′ = x + 1 ∧ y < y ′ ≤ 20. The statement would abort if no such values x ′, y ′ exist.
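Over a finite search space, the effect of such an assignment can be sketched as follows. This is our own Python illustration; the bound 30 on the candidate range is an arbitrary assumption to keep the search finite.

```python
import random

def nondet_assign(q, candidates):
    """x, y := x', y'.Q : choose any candidate pair satisfying Q;
    behave as abort when no such values exist."""
    satisfying = [pair for pair in candidates if q(*pair)]
    if not satisfying:
        raise RuntimeError("abort: no values satisfy the condition")
    return random.choice(satisfying)   # demonic choice among valid pairs

pairs = [(xp, yp) for xp in range(30) for yp in range(30)]

x, y = 3, 18
# Q(x', y') = (x' = x + 1 and y < y' <= 20)
x, y = nondet_assign(lambda xp, yp: xp == x + 1 and y < yp <= 20, pairs)
print(x, y)   # x is now 4; y is nondeterministically 19 or 20
```

With y initially 20 or more, the satisfying set would be empty and the sketch raises an exception, matching the abort behavior described in the text.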

Permitting an action as a statement means that a statement about to be executed in a sequential composition need not be enabled, e.g., as in T = x := x + 1; (x ≠ 1 → x := y). If the action is reached in a state where the guard does not hold, then the statement is considered to terminate miraculously, in the sense that it will establish any postcondition that we want. A statement is in general enabled when it can avoid miraculous termination. In the example above, the whole statement T is enabled if x ≠ 0 (because then x ≠ 1 when the second part is about to be executed). Thus, T is in fact equivalent to T ′ = (x ≠ 0 → x := x + 1; x := y), which is an action of the form permitted by the ordinary guarded commands language.

Neither a nondeterministic assignment statement nor a naked guarded command is in general executable on a real computer. However, they are executable in an idealized sense, and do not present any difficulties in a logical analysis of program correctness. Hence, they are included in the specification language as abstractions that are convenient during program specification and development, but which need to be removed during the refinement process in order to get an executable program.

4 Blocks with local variables

The block construct allows local variables to be introduced in guarded commands. The syntax of guarded commands is extended as follows:

S ::= . . .
  | B                {block}

D ::= < empty >      {empty declaration}
  | var w := e       {local variable declaration}
  | D1; D2           {declaration composition}

B ::= [[D • S ]]     {block}.

The types of local variables in a block construct have to be indicated explicitly if they cannot be inferred from the expression e. Composition of declarations is associative.

The block [[D • S ]] is executed by adding the new local variables declared in D to the state, initializing their values to the given expressions respectively, then executing S and, upon termination of S , deleting the local variables. We assume that the local variables in a declaration are all distinct from each other. They also have to be different from the global variables of a block, i.e., redeclaration of variables is not allowed. The initialization expressions may not refer to any local variables introduced in the block. The local variables declared in a block are only accessible to statements inside the block.
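The entry/exit behavior of a block can be pictured with a small state-dictionary sketch. This encoding (`run_block`, the dictionary of initializers) is our own illustration device, not the paper's notation.

```python
def run_block(decls, body, state):
    """[[ var w := e • S ]]: add the locals, initialize them,
    run S, then delete the locals again on block exit."""
    for name, init in decls.items():
        assert name not in state, "redeclaration of variables is not allowed"
        state[name] = init(state)   # initializers read global variables only
    body(state)
    for name in decls:
        del state[name]             # locals are destroyed at block exit
    return state

state = {"x": 10}
# [[ var t := x + 1 • x := t * 2 ]]
run_block({"t": lambda s: s["x"] + 1},
          lambda s: s.update(x=s["t"] * 2),
          state)
print(state)   # → {'x': 22}  (the local t has vanished)
```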

5 Constants

We will extend the block construct by also permitting local constants to be declared in blocks, in addition to local variables. We extend the syntax as follows:

S ::= . . . | c {constant}

D ::= . . . | con c = C {local constant declaration}

Here c is an identifier that stands for the value C . The constant c can be used anywhere in the program where C can be used. The constant may denote an expression, statement, action or block. Above we extended statements by also permitting constants as statements. The syntax of expressions also needs to be extended in a similar way.

We treat constants as mere syntactic sugar, which can be removed by simple rewriting (macro expansion):

[[D ; con c = C ; D ′ • S ]]

=

[[D ; D ′ • S [C/c] ]].

The reduction substitutes C for each occurrence of the identifier c in S . The semantics of programs with constants is thus explained by showing how to reduce such a program to another, equivalent program that does not have any constants. We refer to a definition of this form as a syntactic reduction.
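As a toy illustration of the substitution S [C/c] (ours, not the paper's; a real reduction works on syntax trees, but whole-word string substitution conveys the idea):

```python
import re

def expand_constant(stmt, name, value):
    """Compute S[C/c]: substitute the definition C for each whole-word
    occurrence of the constant identifier c in the statement text S."""
    return re.sub(rf"\b{re.escape(name)}\b", value, stmt)

# [[ con n = 5 • x := x + n; y := n ]] reduces to x := x + 5; y := 5
print(expand_constant("x := x + n; y := n", "n", "5"))   # → x := x + 5; y := 5
```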

Procedures A (parameterless) procedure is a special case of a constant. The procedure declaration con p = S is usually written as proc p = S . A procedure call p is handled by substituting the body S for each call p. We assume that recursive calls are disallowed, so the substitution eliminates the procedure calls. Recursive procedures do not present any real difficulty, but require a fixpoint operator on statements [2], which we choose not to introduce here for simplicity.

Because redeclaration of global variables in a block is not permitted, dynamic and static scoping will be the same. Hence, substitution does give the correct binding for global variables.

6 Parameters and adaption

Procedures with parameters are also a special case of constant declarations, but we need to introduce (formal) parameter declarations and a new statement construct called adaption to handle parameter passing. The syntax of our language is extended as follows:

S ::= . . . |B(e, x , y) {adaption}

D ::= . . .
  | val w            {value parameter}
  | upd w            {update parameter}
  | res w := w0      {result parameter}

The values w0 are default values for the result parameters. The formal parameters are considered as local declarations of a block. These need to be instantiated to actual parameters e, x , y in an adaption statement, before the block can be used as an ordinary statement.

To conform with standard practice, we will usually write a procedure declaration

con p = [[ val v ; upd w ; res r := r0; D • T ]],

where D contains no parameter declarations, in the form

proc p(val v ; upd w ; res r := r0) = [[D • T ]].

We define adaption of a block B with (formal) parameters to an actual parameter list (e, x , y) by syntactic reduction, as follows:

[[ val v ; upd w ; res r := r0; D • T ]](e, x , y)

= [[ var v , w , r := e, x , r0; D • T ; x , y := w , r ]].

All parameters become local variables after the reduction. The value parameters are assigned the actual parameter values on block entry, and the update parameters are assigned the values of the variables that they are bound to. The actual result parameter is assigned its value at block exit. Note that we have a default value for the result parameter on block entry. This means that the result parameter will have a well-defined value also in the case that it is never assigned to.

A procedure declaration is eliminated in the same way as before: each procedure identifier is replaced by the block that it denotes. If the procedure has parameters, then adaption is subsequently required for parameter passing.

Example We could write the sort program SORT0 using a local procedure, as follows:

SORT0′ ≡ [[ con swap = [[ upd l , r • l , r := r , l ]] •
  do x1 > x2 → swap(x1, x2)
  [] x2 > x3 → swap(x2, x3)
  [] x3 > x4 → swap(x3, x4)
  [] x4 > x5 → swap(x4, x5)
  od ]].

Syntactic substitution expands the call swap(x1, x2) as follows:

swap(x1, x2)

= {substitute definition of swap for the identifier}

[[upd l , r • l , r := r , l ]](x1, x2)

= {adaption}

[[ var l , r := x1, x2 • l , r := r , l ; x1, x2 := l , r ]]

= {simplification}

x1, x2 := x2, x1.


Removing the procedure declaration will thus reduce SORT0′ to the equivalent program SORT0.
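The adaption rule amounts to copy-in/copy-out parameter passing. A small Python sketch of the upd case (our own encoding; variables live in an environment dictionary, and `call_with_upd` is not the paper's notation):

```python
def call_with_upd(body, env, formals, actuals):
    """B(x1, x2) for a block with upd parameters: copy the actuals into
    fresh local variables, run the body T, then copy the locals back."""
    local = {f: env[a] for f, a in zip(formals, actuals)}   # var l, r := x1, x2
    body(local)                                             # T
    for f, a in zip(formals, actuals):                      # x1, x2 := l, r
        env[a] = local[f]

def swap_body(loc):   # the body l, r := r, l of swap
    loc["l"], loc["r"] = loc["r"], loc["l"]

env = {"x1": 9, "x2": 4}
call_with_upd(swap_body, env, ["l", "r"], ["x1", "x2"])
print(env)   # → {'x1': 4, 'x2': 9}
```

This is exactly the chain of the worked example: adaption introduces the locals, the body swaps them, and the copy-back step simplifies to x1, x2 := x2, x1.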

7 Action system

We will model parallel and reactive systems as guarded commands of a special form, which we refer to as action systems A:

A ::= [[D • do A1 [] . . . [] Am od ]] {action system}

The local variables declared in D and the global variables together form the state variables of the action system. The set of state variables accessed in action A is denoted vA and the set of variables declared in D is denoted vD . The action system may also declare constants, such as procedures. For simplicity, we assume here that there are no parameter declarations in an action system.

In the absence of local declarations, the action system reduces to a simple iteration statement. The sorting program SORT0 of the previous section is an example of such an action system.

An action system provides a global description of the system behavior. The state variables determine the state space of the system. The actions determine what can happen during an execution. The execution terminates when no action is enabled anymore.

Access relation Let us denote by rA the variables that are read by action A (the read access of A), and by wA the variables that are written by A (the write access of A). Thus vA = rA ∪ wA.

Figure 1 shows the access graph of the system SORT0. The actions and variables of the system form the nodes in this graph. An edge connects action A to variable x , if x ∈ vA. A read access is denoted by an arrow from x to A and a write access by an arrow from A to x . A read-write access is denoted by an undirected edge.

Fig. 1. Access relation for SORT0

Sequential and parallel execution The semantics given to guarded commands above prescribes a sequential execution for action systems. This gives us the interleaving semantics, where only one action is executed at a time, in a nondeterministic order constrained only by the enabledness of actions.

However, action systems are intended to describe parallel systems. Hence, we permit the execution of actions to proceed in parallel also, i.e., we permit overlapping action executions.

Consider again our simple sorting program. If both actions

x2 > x3 → x2, x3 := x3, x2 and x4 > x5 → x4, x5 := x5, x4

are enabled, then either one could be executed next. Because the two actions do not share any variables, the effect is the same if we execute the actions one after the other, in either order, or if we execute them in parallel, so that the executions are in fact overlapping.

On the other hand, even if the actions

x1 > x2 → x1, x2 := x2, x1 and x2 > x3 → x2, x3 := x3, x2

are both enabled, we do not want to execute them in parallel, because their simultaneous executions could interfere with each other, as they both access variable x2.

Atomicity constraint Two actions are said to be in conflict with each other, if they both refer to a common variable, and one of them may update this variable. More precisely, actions A and B are in conflict with each other, if

vA ∩ wB ≠ ∅ or wA ∩ vB ≠ ∅.

There is a write-write conflict between A and B if wA ∩ wB ≠ ∅, and there is a read-write conflict between these actions, if rA ∩ wB ≠ ∅ or wA ∩ rB ≠ ∅. Actions that are not in conflict with each other are said to be independent.

We will permit parallel execution of independent actions. We assume that the actions are started one by one, but in such a way that an enabled action is only started if it does not conflict with any action that is already being executed. We will say that a parallel execution of an action system respects atomicity, if it satisfies this constraint.

Correctness of parallel execution When atomicity is respected, an interleaving semantics is also an appropriate abstraction for parallel execution. Parallel execution of actions gives the same result as a nondeterministic sequential execution. A parallel execution is guaranteed to terminate if and only if the sequential execution is guaranteed to terminate. It will be guaranteed to establish a certain postcondition if and only if the purely sequential execution is guaranteed to establish the same postcondition. We can therefore reason about a parallel execution of a guarded command as if it was a purely sequential execution. As long as the parallel execution of the guarded command respects atomicity, any total correctness results that hold for the purely sequential program will also hold for any parallel execution of it.

Note that even under such a parallel execution regime, there may still be nondeterministic choices that have to be made. This occurs when there are two actions that both become enabled as the result of some other action terminating, but which are in conflict with each other. In this case, we choose nondeterministically either one for execution. Once this action is being executed, it then prevents the other action from being started.
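The conflict test and the start rule can be written down directly, assuming each action carries explicit read and write sets (a sketch of ours; `in_conflict` and `may_start` are not the paper's notation):

```python
def in_conflict(a, b):
    """A and B conflict if one may write a variable the other accesses:
    vA ∩ wB ≠ ∅ or wA ∩ vB ≠ ∅, where vA = rA ∪ wA."""
    va, vb = a["r"] | a["w"], b["r"] | b["w"]
    return bool(va & b["w"]) or bool(a["w"] & vb)

def may_start(action, running):
    """Atomicity constraint: an enabled action may be started only if it
    is independent of every action already being executed."""
    return not any(in_conflict(action, r) for r in running)

# Read/write sets for the exchange actions of SORT0
EX1 = {"r": {"x1", "x2"}, "w": {"x1", "x2"}}
EX2 = {"r": {"x2", "x3"}, "w": {"x2", "x3"}}
EX3 = {"r": {"x3", "x4"}, "w": {"x3", "x4"}}

print(may_start(EX3, running=[EX1]))   # True: no shared variables
print(may_start(EX2, running=[EX1]))   # False: both access x2
```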

8 Parallel composition of action systems

The action systems provide us with a model for parallel program execution. In the same way as procedures provide a mechanism for modularizing sequential programs, we want a mechanism for modularizing parallel programs or action systems. The standard way of achieving this modularization is by parallel composition of processes. We follow this same paradigm, considering action systems as processes and defining parallel composition of action systems.

We extend the language of action systems by a parallel composition operation,

A ::= . . . | A1 || A2. {parallel composition}

We define the meaning of parallel composition by syntactic reduction:

[[D • do A od ]] || [[D ′ • do A′ od ]]

=

[[D ; D ′ • do A [] A′ od ]].

The parallel composition is defined when the local constant, variable and procedure names in D and D ′ are all disjoint (so that D ; D ′ is a proper declaration). This can always be achieved by renaming inside the block, prior to forming the composition. Parallel composition thus merges the local variables and constants, as well as the actions, of the two systems.

Any parallel construct may be reduced away by syntactic reduction, resulting in a program without parallel composition. Combining this with the definition of procedures, we can thus reduce any program with parallel composition and procedures to a simple guarded command in our base language.

Parallel composition is associative and commutative, because the ordering of the variable declarations and actions does not affect the meaning of the action system. Hence, parallel composition generalizes directly to composition of more than two action systems.

Example We can describe the sorting program as a parallel composition of two smaller action systems:

SORT0′′ = LOW 0 ||HIGH 0

LOW 0 = do x1 > x2 → x1, x2 := x2, x1 {EX 1}

[] x2 > x3 → x2, x3 := x3, x2 {EX 2}

od

HIGH 0 = do x3 > x4 → x3, x4 := x4, x3 {EX 3}

[] x4 > x5 → x4, x5 := x5, x4 {EX 4}

od.

Carrying out the (trivial) syntactic reduction shows that SORT0′′ = SORT0.
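The reduction itself is just a merge of declarations and action lists. The following dictionary encoding of action systems is our own sketch; the disjointness check mirrors the side condition stated above.

```python
def parallel(sys1, sys2):
    """[[D • do A od]] || [[D' • do A' od]] = [[D; D' • do A [] A' od]]."""
    clash = sys1["decls"].keys() & sys2["decls"].keys()
    assert not clash, f"rename local names before composing: {clash}"
    return {"decls": {**sys1["decls"], **sys2["decls"]},
            "actions": sys1["actions"] + sys2["actions"]}

# SORT0'' = LOW0 || HIGH0, with actions named as in the example
LOW0  = {"decls": {}, "actions": ["EX1", "EX2"]}
HIGH0 = {"decls": {}, "actions": ["EX3", "EX4"]}

SORT0 = parallel(LOW0, HIGH0)
print(SORT0["actions"])   # → ['EX1', 'EX2', 'EX3', 'EX4']
```

Associativity and commutativity of `parallel` correspond to the observation that the ordering of declarations and actions does not affect the meaning of the merged system.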

Arbitrary partitioning The parallel composition of a collection of action systems gives a new action system, which is the union of the original systems. Thus, parallel composition can be seen as a partitioning of the whole system into a number of parts (action systems), assigning each local variable and procedure to some part. Graphically, parallel composition corresponds to a partitioning of the access graph of an action system into a collection of subgraphs, where each subgraph forms an action system of its own.

An interesting question is now whether every partitioning of an action system (or, equivalently, of the access graph) can be described as a parallel composition of some action systems. In fact, it is easy to see that this is not the case. If we have an action that accesses a given local variable, then we cannot put that action in one part and the local variable in another, because the scope rules for blocks will then prevent the action from accessing the local variable.

In order to permit arbitrary parallel decomposition of action systems, we need to make the scope rules of blocks more permissive. We show in the next section how to do this, in a way that will permit any partitioning of an action system to be described as a parallel composition of component action systems.

9 Variables and scopes

We relax the rigid scope rules of blocks by permitting identifiers to be exported from and imported to blocks, thus making these more like modules in the Modula-2 or Oberon sense. Besides partitioning local variables among processes, we also partition global variables and constants (such as procedures) among these. The first allows us to model persistence of data, while the second allows modeling encapsulation of data and remote procedure calls between processes.

The block notation [[ var w := w0 • S ]] wraps a number of different things into a single construct:

(i) It associates the variables w with the statement S ,

(ii) it creates a new variable w on entry to the block,

(iii) it initializes the variable w to w0,

(iv) it hides the variable w from the environment, and finally,

(v) it destroys the variable w on exit from the block.

To gain more flexibility in parallel composition, and in order to be able to model distributed systems in a realistic fashion, we need to take these different aspects apart. We do this by generalizing the block construct, as described below.

Hiding and revealing For parallel composition, it is useful to permit an identifier declared inside one action system to be visible to other action systems in a parallel composition. The other action systems may be given read/write/execute access to these identifiers.

An identifier x is made visible to the outside by marking: x∗ indicates that the identifier can be accessed by the environment. (We could indicate more restricted accesses if needed, but we consider only general access here for simplicity.) We extend the syntax of declarations as follows:

D ::= . . . |D∗ {revealing a declaration}

(If more than one identifier is declared in D, then each identifier is understood to be marked.)

Created and existing variables Another distinction that we also want to make is that between a variable that is created by an action system, and a variable that already exists when the action system starts up. A created variable is preceded by the keyword var, while an existing variable does not have this keyword. In the same way as each created variable is associated with a specific action system, we may also associate existing variables with an action system. This permits us to model the way in which variables that exist at startup of an action system are distributed among the parallel components of the action system. We extend the syntax as follows:

D ::= . . . |w {existing variable}

where w is a list of variables.

Accessed variables We may also need to keep track of the identifiers that are used inside an action system but which are not associated with the action system (i.e., the identifiers that have to be imported from other action systems in a parallel composition). We write

A : v

when we want to indicate explicitly the global identifiers v that are used in A but not associated with A. The list v may contain both variable and constant identifiers.

Generalized blocks With these extensions, the block construct itself only means that certain variables are associated with certain statements. Whether these variables are new or existing ones, or whether they are visible or hidden from the environment, is indicated separately. For simplicity, we will in the sequel assume that the variables that are created at block entry are destroyed at block exit. We could have a separate notation for creating variables that are not destroyed at block exit (or, dually, for indicating that some global variables should be destroyed at block exit).

10 Program modules

The above extensions to the block construct give us a module description language, where the action systems are the modules and parallel composition is the composition operator for modules. Any partitioning of an action system can be described as a parallel composition of action systems, because identifiers that are defined in one module and needed in another module can be made accessible by exporting.

We complete the picture by extending two other useful composition operators, sequential composition and hiding, to action systems:

A ::= . . . | A1; A2 {sequential composition}
          | [[D • A1 ]] {hiding}

Thus, action systems can be built out of primitive action systems by using parallel composition, sequential composition and hiding. We again define the meaning of action systems described in this way by syntactic reduction.

Sequential composition Sequential composition of action systems corresponds to executing these in phases, where each action system describes one phase, to be started only when the previous phase has been completed. We define the meaning of sequential composition by the following reduction rule:

[[D1∗; E1 • do A1 od ]]; [[D1∗; E2 • do A2 od ]]
=
[[D1∗; E1; E2; var a := 1 •
    do a = 1 → A1 [] a = 1 ∧ ¬gA1 → a := 2 [] a = 2 → A2 od ]].

We require that the revealed declarations are the same in both components, whereas the hidden components can be different. The reduction introduces a new local variable a that keeps track of which phase is being executed in the reduced system (note that g → (h → S) = g ∧ h → S).
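The phase-variable encoding of this reduction rule can be sketched in Python. This is our own illustrative encoding, with actions as (guard, body) pairs over a state dictionary; the names `sequential` and `run` and the tiny two-phase example are assumptions for illustration:

```python
import random

def sequential(phase1, phase2):
    """Reduce A1; A2: a phase variable 'a' in the state selects which
    component may fire; a = 1 is given up only when no action of the
    first phase is enabled (the a = 1 ∧ ¬gA1 → a := 2 action)."""
    step = lambda s: s.__setitem__('a', 2)
    return (
        [(lambda s, g=g: s['a'] == 1 and g(s), b) for g, b in phase1]
        + [(lambda s: s['a'] == 1 and not any(g(s) for g, _ in phase1), step)]
        + [(lambda s, g=g: s['a'] == 2 and g(s), b) for g, b in phase2])

def run(state, actions, rng=random.Random(1)):
    while True:
        enabled = [b for g, b in actions if g(state)]
        if not enabled:
            return state
        rng.choice(enabled)(state)

# Phase 1 counts x up to 3; phase 2 then doubles it until it reaches 24.
inc = (lambda s: s['x'] < 3,  lambda s: s.__setitem__('x', s['x'] + 1))
dbl = (lambda s: s['x'] < 24, lambda s: s.__setitem__('x', 2 * s['x']))
state = {'a': 1, 'x': 0}        # 'a' plays the role of var a := 1
run(state, sequential([inc], [dbl]))
print(state['x'])               # 24: phase 2 started only after phase 1 ended
```

The doubling action is enabled from the start in isolation, but the phase guard keeps it disabled until the first phase has terminated, exactly as the reduction rule prescribes.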

Hiding Hiding corresponds to nested partitioning of the access graph. We define the meaning of hiding by the following reduction rule:

[[D1∗; E∗; F1 • [[D2∗; E∗; F2 • do A od ]] ]]
=
[[D1∗; E∗; F1; D2; F2 • do A od ]].

A nested action system is thus defined to be equivalent to the flattened system. The variables that are revealed in the inner block are not revealed by the outer block, unless explicitly required so. To re-export a variable to an outer block, we need to repeat the declaration in the outer block. Repeated declarations (E∗ above) are merged into a single declaration in the syntactic reduction.

Hiding and revealing Hiding/revealing a variable or procedure only makes sense for parallel composition of action systems, and provides an abstraction mechanism for modules. Hiding vs. revealing for parallel composition corresponds to creating a new variable vs. using an existing variable for sequential composition: creating a variable on block entry and destroying it on exit means that the variable is not accessible to preceding or succeeding action systems. A variable that is to be accessible to successive action systems must therefore exist before and after the execution of the action system; it cannot be created inside the action system.

Open and closed systems The scoping rules allow us to distinguish between open and closed action systems. A closed action system does not reveal any identifiers, nor does it import any, while an open action system either reveals or imports some identifiers.

The sorting program SORT0 described previously is an open system, where the variables to be sorted are assumed to be associated with the environment (i.e., another action system that is composed in parallel with the sorting action system). It would now be described with explicit mention of the imported variables:

SORT0 ≡ [[ do EX 1 [] . . . [] EX 4 od ]] : x1, x2, x3, x4, x5 : IN .

The following is also an open version of this system, but now the variables to be sorted are considered part of the action system, rather than part of the environment.

SORT0′ = [[ x1∗, x2∗, x3∗, x4∗, x5∗ : IN • do EX 1 [] . . . [] EX 4 od ]].

Here the variables x1, . . . , x5 could also be accessed by other action systems in a parallel composition.

The sorting program as a closed system hides the variables to be sorted inside the system:

SORT1 = [[ x1, x2, x3, x4, x5 : IN • do EX 1 [] . . . [] EX 4 od ]].

Note that even if the variables x1, . . . , x5 are hidden, they are assumed to have well-defined values before the action system starts execution, and these variables will remain in the state space after the action system has terminated. Hence, we can talk about these variables in pre- and postcondition specifications, e.g., to state that the system works correctly with respect to a given specification.

Nesting of parallel and sequential components A closed system cannot communicate with its environment during execution, but it can change the state of the system. Hence, a closed system behaves as an ordinary sequential statement. We can therefore permit a closed action system to be used as a building block in constructing sequential statements. In particular, a closed action system can be used to construct an action that is part of an action system at a higher level, which in turn may be part of a parallel composition. This gives us nesting of parallel and sequential program constructs to any depth. The nesting is permitted by the following extension of the language:

S ::= . . . | A {action system}

where A is a closed action system.

11 Shared variable communication

We have shown above how to decompose action systems into parallel components, but we have not described how these components communicate with each other. In the following three sections, we will show that communication is already built into the approach, and is a consequence of the way the action system is partitioned. No new facilities need to be added for communication between processes.

Let us first partition action system SORT1 into parallel components by starting from the actions. We let process LOW 2 contain actions EX 1, EX 2, and process HIGH 2 contain the actions EX 3, EX 4. We also partition the global variables such that variables x1, x2 belong to LOW 2, because they are only accessed by actions in this process, and variables x4, x5 belong to HIGH 2 for similar reasons. The variable x3 is shared between the two processes, and forms a partition of its own that we call VAR2.

The partitioned action system SORT2 is then expressed as

SORT2 = [[LOW 2 ||VAR2 ||HIGH 2 ]],

where

LOW 2 = [[ x1, x2 • do EX 1 [] EX 2 od ]] : x3

VAR2 = [[ x3∗ ]]

HIGH 2 = [[ x4, x5 • do EX 3 [] EX 4 od ]] : x3.

Merging these three action systems together again gives us the original action system SORT1. Note that we omit the empty loop in VAR2, for brevity. The partitioning of SORT2 is illustrated in Figure 2.

This partitioning of the actions gives us a shared variable model for concurrency. The action systems LOW 2 and HIGH 2, considered as processes, can execute in parallel. They communicate by the shared variable x3. The atomicity restriction is enforced if we require that variable x3 is accessed under mutual exclusion by the two processes. (A correct implementation must in general also ensure that no deadlock occurs because of the way in which shared variables are reserved.)
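A minimal Python sketch of this shared-variable reading, using two threads for LOW 2 and HIGH 2 and a lock for the mutual exclusion on x3, can look as follows. This is our own encoding, not from the paper; in particular, the global `any_enabled` termination test is a simplification that a distributed implementation would replace with a termination-detection protocol:

```python
import threading

# LOW2 || HIGH2 communicating through the shared variable x3: each action
# body runs while holding a lock, enforcing the atomicity restriction.
x = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
lock = threading.Lock()

def exchange(i, j):
    """Atomically execute x_i > x_j → x_i, x_j := x_j, x_i."""
    with lock:
        if x[i] > x[j]:
            x[i], x[j] = x[j], x[i]

def any_enabled():
    # Global termination test (a centralized simplification).
    with lock:
        return any(x[k] > x[k + 1] for k in range(1, 5))

def process(pairs):
    # Busy-wait sketch: keep trying the process's own actions while any
    # action anywhere in the system is still enabled.
    while any_enabled():
        for i, j in pairs:
            exchange(i, j)

low  = threading.Thread(target=process, args=([(1, 2), (2, 3)],))   # EX1, EX2
high = threading.Thread(target=process, args=([(3, 4), (4, 5)],))   # EX3, EX4
low.start(); high.start(); low.join(); high.join()
print([x[k] for k in range(1, 6)])   # [1, 2, 3, 4, 5]
```

Both processes read and write x3, but only under the lock, so every interleaving of the two threads corresponds to some sequential execution of the original action system.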

12 Shared action communication

Fig. 2. Partitioning of SORT2

Starting from the variables rather than from the actions gives us another model of communication. We place variables x1, x2, x3 in action system LOW 3 and variables x4, x5 in action system HIGH 3. The actions EX 1, EX 2 only access variables in LOW 3 and are hence put into this part. Action EX 4 only accesses variables in HIGH 3, and is therefore put into this part. Action EX 3 is shared between the two processes, and forms a part of its own.

This gives us the following sorting program:

SORT3 = [[LOW 3 ||ACT3 ||HIGH 3 ]],

where

LOW 3 = [[ x1, x2, x3∗ • do EX 1 [] EX 2 od ]]

ACT3 = [[ do EX 3 od ]] : x3, x4

HIGH 3 = [[ x4∗, x5 • do EX 4 od ]].

The partitioning of SORT3 is illustrated in Figure 3.

Fig. 3. Partitioning of SORT3

By expanding the parallel composition, we can easily see that this gives us the same action system as before, i.e., SORT3 = SORT1.

The action EX 3 is shared in this case, because it involves both processes. All other actions are private to one of the processes. In this partition, the processes thus have disjoint sets of variables, and they communicate by carrying out shared actions. The shared action is only executed if each process is ready for it. A shared action can update variables in one or more processes in a way that may depend on the values of variables in other processes. In this way, information is communicated between processes. Note that there is nothing to prevent more than two processes from participating in a shared action, thus providing in general for synchronized n-way communication [5, 14, 15].

The notion of a shared action is a considerable abstraction here, because it is obvious that neither process can determine by itself, by only looking at its local variables, whether the shared action is enabled or not. Hence, some kind of prior communication or centralized scheduling would be needed in order to implement the shared action. Additional restrictions on the action system can be enforced, to make the communication mechanism efficiently implementable in a distributed environment [5]. The communication mechanisms of CSP and Occam are special cases of the shared action mechanism described here.
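The centralized-scheduling reading can be sketched in Python as follows. This is our own illustrative encoding (the dictionaries, the `private` helper, and the scheduler loop are assumptions): each process owns its variables, and the guard of the shared action EX 3 reads variables of both LOW 3 and HIGH 3, so only a scheduler that sees both can decide whether it is enabled.

```python
import random

low3  = {'x1': 3, 'x2': 2, 'x3': 5}     # variables owned by LOW3
high3 = {'x4': 1, 'x5': 4}              # variables owned by HIGH3

def private(owner, i, j):
    """A private exchange action, confined to one process's variables."""
    return (lambda: owner[i] > owner[j],
            lambda: owner.update({i: owner[j], j: owner[i]}))

def ex3_guard():                        # guard spans both processes
    return low3['x3'] > high3['x4']

def ex3_body():                         # body updates both processes
    low3['x3'], high3['x4'] = high3['x4'], low3['x3']

actions = [private(low3, 'x1', 'x2'), private(low3, 'x2', 'x3'),  # EX1, EX2
           (ex3_guard, ex3_body),                                 # shared EX3
           private(high3, 'x4', 'x5')]                            # EX4

rng = random.Random(2)
while True:                             # the central scheduler
    enabled = [body for guard, body in actions if guard()]
    if not enabled:
        break
    rng.choice(enabled)()
print(low3, high3)   # {'x1': 1, 'x2': 2, 'x3': 3} {'x4': 4, 'x5': 5}
```

Neither `low3` nor `high3` alone suffices to evaluate `ex3_guard`, which is precisely why a shared action needs prior communication or a central scheduler to implement.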

13 Remote procedure calls

The shared action model achieves synchronous communication in a parallel composition. However, the variables associated with a certain process are accessed by shared actions, which means that more information about the process is revealed to the environment than we might want. We present here a third model for communication, which also gives us synchronous communication, but has better locality properties. This method is based on the use of procedures.

An action system A communicates with another action system B by calling a procedure exported by B. Information is passed between the processes via the parameters: value parameters send information from A to B, while result parameters return information in the other direction. The procedure mechanism thus corresponds to remote procedure call communication.

The atomicity of an action also includes all procedure invocations that may occur during the action execution. This guarantees that the proof rules for simple iteration statements are still valid when reasoning about the total correctness of action systems with procedure calls.

The body of a procedure is a statement which is always executed whenever the procedure is invoked. Hence, the remote procedure mechanism is rather weak, as there is no way in which an action system can refuse to accept a procedure call from another action system. There is, however, a rather simple fix to this: use actions as statements, as is permitted by the refinement calculus extension of guarded commands.

The procedure then has an enabledness condition, just as an action in a loop or conditional statement has. This enabledness condition must be satisfied when the procedure is called. If it is not satisfied, then the action from which the procedure has been called is considered not to have been enabled in the first place. A correct implementation of this mechanism has to guarantee that in such a situation the effect of the action is unmade and some other action is executed instead, if possible. Alternatively, we could have a mechanism which can in advance determine whether the action will be enabled or not, and prevent execution of an action that would turn out not to be enabled.

Hence, an action with procedure calls (within a single action system or between action systems) is either executed to completion without any interference from other actions, or it is not executed at all. A parallel implementation has to respect this atomicity in order to be considered correct.

Example We illustrate this communication mechanism again with the sorting example. We make the following partitioning of the action system:

SORT4 = [[LOW 4 ||HIGH 4 ]],

where

LOW 4 = [[ x1, x2, x3;

proc swap∗(upd a) = (x3 > a → a, x3 := x3, a) •

do EX 1 [] EX 2 od ]]

HIGH 4 = [[ x4, x5 •

do true → swap(x4) {EX3’}

[] EX 4 od ]] : swap.

Thus, the action system LOW 4 reveals a procedure swap that the action system HIGH 4 calls, with the parameter x4. The procedure is enabled if x3 > x4, in which case the two values are swapped. Note that the enabledness of the action that calls swap is only determined at the called end (which has the required information). The variables x1, . . . , x5 are now all hidden. The only way of accessing x3 is by using the swap procedure. The outermost brackets hide the swap procedure from the environment.

The access relation and partitioning of SORT4 is described in Figure 4.

Fig. 4. Partitioning of SORT4
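The remote-procedure-call reading of SORT4 can be sketched in Python. This is our own encoding (the `NotEnabled` exception, the dictionaries, and the scheduler loop are assumptions, not the paper's notation): a call that finds the procedure's guard false makes the whole calling action count as disabled, and the scheduler then tries some other action.

```python
import random

class NotEnabled(Exception):
    pass

low  = {'x1': 4, 'x2': 5, 'x3': 3}      # variables hidden in LOW4
high = {'x4': 2, 'x5': 1}               # variables hidden in HIGH4

def swap(a):                            # proc swap(upd a) = (x3 > a → a, x3 := x3, a)
    if not low['x3'] > a:
        raise NotEnabled                # caller's action was never enabled
    low['x3'], a = a, low['x3']
    return a                            # resulting value of the upd parameter

def ex3prime():                         # HIGH4's action: true → swap(x4)
    high['x4'] = swap(high['x4'])

def exchange(d, i, j):
    return (lambda: d[i] > d[j],
            lambda: d.update({i: d[j], j: d[i]}))

actions = [exchange(low, 'x1', 'x2'), exchange(low, 'x2', 'x3'),  # EX1, EX2
           (lambda: True, ex3prime),                              # EX3'
           exchange(high, 'x4', 'x5')]                            # EX4

rng = random.Random(3)
while True:
    enabled = [body for guard, body in actions if guard()]
    rng.shuffle(enabled)
    for body in enabled:
        try:
            body()
            break                       # an action fired; re-evaluate guards
        except NotEnabled:
            continue                    # treated as disabled; try another
    else:
        break                           # nothing could fire: terminated
print(low, high)   # {'x1': 1, 'x2': 2, 'x3': 3} {'x4': 4, 'x5': 5}
```

Here `swap` raises before making any update, so no rollback is needed; the general case, where an action has already changed the state before the call fails, is the implementation problem discussed in Section 14.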

Expanding parallel composition We illustrate the definitions given above by simplifying SORT4. We have that

SORT4

= {reducing parallel composition}

[[ [[ x1, x2, x3, x4, x5;

proc swap∗(upd a) = (x3 > a → a, x3 := x3, a) •

do EX 1 [] EX 2 [] true → swap(x4) [] EX 4 od ]] ]]

= {expanding procedure call, performing adaptation}

[[ [[ x1, x2, x3, x4, x5;

proc swap∗(upd a) = (x3 > a → a, x3 := x3, a) •

do EX 1 [] EX 2

[] true → [[ var a := x4 • x3 > a → a, x3 := x3, a; x4 := a ]]

[] EX 4

od ]] ]]

= {simplifying expanded call to EX 3}

[[ [[ x1, x2, x3, x4, x5;

proc swap∗(upd a) = (x3 > a → a, x3 := x3, a) •

do EX 1 [] EX 2 [] EX 3 [] EX 4

od ]] ]]

= {flattening nested action system}

[[ x1, x2, x3, x4, x5;

proc swap(upd a) = (x3 > a → a, x3 := x3, a) •

do EX 1 [] EX 2 [] EX 3 [] EX 4

od ]]

= {omitting redundant procedure declaration}

[[ x1, x2, x3, x4, x5 •

do EX 1 [] EX 2 [] EX 3 [] EX 4

od ]]

= {definition}

SORT1.

The simplification step is based on

true → [[ var a := x4 • x3 > a → a, x3 := x3, a; x4 := a ]]
= true → (x3 > x4 → x3, x4 := x4, x3)
= true ∧ x3 > x4 → x3, x4 := x4, x3
= x3 > x4 → x3, x4 := x4, x3
= EX 3.

14 Module description

We will show how an ordinary modular programming language could be adapted to the action system framework, as we have described it above. We choose the programming language Oberon (or Oberon-2) as our base language, because it is simple and does not have any features for parallel programming. However, other languages could do as well, such as Modula-2 or Modula-3, C++, Object Pascal, Ada and so on.

We are interested in an actual language implementation of action systems, because we then gain a uniform language in which both specifications and implementations of reactive systems can be expressed. Action systems are mainly used for specification of a parallel system. However, having an implementation of action systems provides us with a tool for observing the behavior of the specified system at an early stage, to find out inefficiencies in a real parallel implementation and observe other features of interest.

Implementing action systems in Oberon means that a subclass of action systems is efficiently executable in the Oberon system using the Oberon compiler. We want to extend the Oberon language so that also more general action systems can be executed. It is then not important that the more general mechanisms are efficiently implemented. We can be satisfied with a considerably less efficient implementation, because we are executing just a specification, which anyway has to be refined to a more efficient program before final delivery.

Action execution Let us first look at simple action systems without parallel or sequential composition, hiding or procedures. The action system

A = [[ var w := w0; do A1 [] . . . [] Am od ]] : v . (1)

cannot be directly described as an Oberon module, because the iteration statement should be executed nondeterministically, but Oberon only has deterministic execution. We could implement the iteration as a deterministic iteration statement, but then this would not be a good model of parallel execution.

The Oberon system does, in fact, already have a mechanism that is very similar to an action system: the Oberon loop. The Oberon loop is a circular list of tasks, which are parameterless procedures. Two special tasks are standard: listening to mouse and keyboard input, and performing garbage collection. The user can also insert his own tasks into the loop. The system runs by cycling through the Oberon loop forever. This basic event loop controls the interaction with the user.

We can use this mechanism to simulate action execution. We consider an action to be just a parameterless procedure that is inserted into the Oberon loop. As the Oberon system is single-threaded, this means that an action will be executed to completion without any interference from any other action. Hence, this mechanism guarantees atomicity of action execution.

However, there are two problems with this implementation. First, the Oberon tasks are always executed, i.e., there is no enabling condition. An action, on the other hand, need not be enabled. We can solve this problem by extending the notion of a procedure so that it can have an enabling condition. Thus, we get the following syntax for actions in an Oberon extension:

ACTION EX1;
WHEN x1 > x2
BEGIN x1, x2 := x2, x1 END EX1.

The scheduler needs the explicit guard in order to determine when an action system has terminated. We introduce a new keyword for actions (rather than calling these procedures), because actions behave differently from procedures: an action is executed autonomously by the system, whereas a procedure is only executed if it is explicitly called.

Secondly, the scheduling in the Oberon loop is deterministic, whereas we need to have nondeterministic scheduling in order to model parallelism. The scheduler for the Oberon loop can, however, be changed so that it cycles in a nondeterministic fashion through the actions in the loop.
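A Python sketch of such an extended Oberon loop, assuming our own encoding of an installed action as a (WHEN guard, body) pair of parameterless procedures:

```python
import random

# The (extended) Oberon loop: each installed action is a parameterless
# procedure guarded by a WHEN condition; the scheduler cycles through the
# loop in nondeterministic order and stops once no guard holds -- the
# explicit guards are what let it detect termination.
x = {1: 9, 2: 7, 3: 8, 4: 6, 5: 5}
loop = []                                # the task list of the Oberon loop

def install(when, body):
    loop.append((when, body))

for k in range(1, 5):                    # ACTION EXk; WHEN xk > xk+1 ...
    install(lambda k=k: x[k] > x[k + 1],
            lambda k=k: x.update({k: x[k + 1], k + 1: x[k]}))

rng = random.Random(4)
while True:
    rng.shuffle(loop)                    # nondeterministic cycling order
    fired = False
    for when, body in loop:
        if when():                       # guard checked before each run
            body()                       # single-threaded: runs atomically
            fired = True
    if not fired:
        break                            # no action enabled: terminated
print([x[k] for k in range(1, 6)])       # [5, 6, 7, 8, 9]
```

Because the loop is single-threaded, each body runs to completion before the next guard is evaluated, which is exactly the atomicity guarantee described above.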

Initialization There are no local variables in our example, so the variable declaration and initialization part of the module are both empty. An implementation of the initialization part of the action systems in Oberon would have to, in addition to assigning initial values to the variables, also install the actions of the module in the Oberon loop (if this is not done directly by the compiler). We may, in fact, permit the initializations in the modules to be arbitrary statements, for added flexibility.

Parallel composition An Oberon program is just a collection of modules that are initialized one by one. After the initializations, these modules continue to be available, so that they can be manipulated by the tasks in the Oberon loop, and the actions in the modules are executed in parallel, autonomously. Thus, parallel composition of an action system is implicit in the collection of modules. The Oberon loop will take care of the execution of the actions of the modules, in a nondeterministic fashion which simulates atomicity-respecting parallel execution.

Variables and hiding The Oberon language already knows about importing and exporting variables, so these aspects do not provide any difficulties. What the language does not have is a facility to distribute existing variables among modules. This means that existing variables have to be assumed to be globally visible, and have to be explicitly imported into a module in order to be used.

A facility for taking existing variables into use would correspond to having persistent data in the system. Standard Oberon lacks this feature, but it is present in some extensions of the Oberon system. A restricted form of persistence is available in most programming languages, as a facility for binding internal file variables to external files in a file system.

The Oberon language does not permit nested module declarations, but other similar languages do, e.g., Modula-2.

Procedures Procedures as we need them are already present in the Oberon language, except that the procedures are always enabled. We can solve this problem syntactically by adding an enabling condition to procedures also, in the same way as we have enabling conditions for actions.

This, however, gives us an implementation problem instead. What should we do when we are about to execute a procedure that turns out not to be enabled? The semantics requires us to backtrack to the action from which the procedure originally was called, and choose some other action instead. Restoring the system state to what it was before the action execution can be a very costly operation. We may, however, use this as the general strategy, and then assume that the compiler is smart enough to detect the situations where more efficient implementations can be done.
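The naive backtracking strategy can be sketched as follows in Python (our own illustrative encoding; `attempt`, `NotEnabled` and the example state are assumptions): the state is snapshotted before an action runs, and restored if a called procedure turns out not to be enabled.

```python
import copy

state = {'x3': 7, 'x4': 2, 'steps': 0}

class NotEnabled(Exception):
    pass

def attempt(action):
    """Run the action atomically; on NotEnabled, undo all its effects.
    A smarter compiler would avoid the deepcopy when it can."""
    snapshot = copy.deepcopy(state)
    try:
        action()
        return True
    except NotEnabled:
        state.clear()
        state.update(snapshot)          # the action never happened
        return False

def swap(a):                            # guarded procedure: x3 > a → swap values
    if not state['x3'] > a:
        raise NotEnabled
    state['x3'], a = a, state['x3']
    return a

def ex3():                              # action: true → x4 := swap(x4)
    state['steps'] += 1                 # a partial effect before the call
    state['x4'] = swap(state['x4'])

attempt(ex3)                            # 7 > 2: the call succeeds, values swap
attempt(ex3)                            # 2 > 7 fails: rolled back, steps undone
print(state)                            # {'x3': 2, 'x4': 7, 'steps': 1}
```

The second attempt shows the costly part: the `steps` update made before the failed call must be undone so that the action appears never to have been enabled.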

Example The following is the Oberon equivalent of action system SORT4 (the USE clause models the placement of existing variables in a module):

MODULE LOW4;
USE x1, x2, x3 : INTEGER;
PROC swap∗(VAR a : INTEGER);
WHEN x3 > a BEGIN a, x3 := x3, a END swap;
ACTION EX1;
WHEN x1 > x2 BEGIN x1, x2 := x2, x1 END EX1;
ACTION EX2;
WHEN x2 > x3 BEGIN x2, x3 := x3, x2 END EX2;
BEGIN END LOW4.

MODULE HIGH4;
IMPORT LOW4;
USE x4, x5 : INTEGER;
ACTION EX3;
BEGIN LOW4.swap(x4) END EX3;
ACTION EX4;
WHEN x4 > x5 BEGIN x4, x5 := x5, x4 END EX4;
BEGIN END HIGH4.

15 Conclusions

We have above shown how to extend the simple guarded command language with modularization features that are considered useful and necessary in the construction of large programs. The meaning of these modularization constructs is defined by syntactic reduction, showing how each construct can be removed by a syntactic transformation that reduces a program containing the construct to one that does not contain it. Ultimately, we can reduce a large modularized program to a simple guarded command. This is not, however, the way that these reduction rules are intended to be used. Instead, they are intended to provide justification for module refinement rules. For each kind of module that we have defined (procedure, process, data module) we can give conditions under which the replacement of this module with another module is correct, in the sense that it preserves the correctness of the whole program. Justifying these refinement rules requires the reduction rules.

Another purpose of the treatment here has been to show that the conceptual basis required for the main modularization facilities is actually rather small. Most properties needed are already present in the basic language of guarded commands, when the extensions to this language that the work on refinement calculus has brought forward are included. The modularization facilities introduced here are not restricted to programming languages, but work equally well for specification languages. In fact, many of the constructs that we have defined are such that they cannot be efficiently implemented in their full generality. Hence, what we really have is a specification language with modularization facilities. A programming language should be seen as a restricted subset of the specification language, where the restrictions are such that they facilitate efficient execution of programs written in the language.

Acknowledgments The work reported here was supported by the Academy of Finland. We want to thank Michael Butler, Eric Hedman, Xu Qiwen, Marina Walden and Joakim von Wright, for comments and discussions on the modularization issues described here.

References

1. R. Back. Correctness Preserving Program Refinements: Proof Theory and Applications, volume 131 of Mathematical Center Tracts. Mathematical Centre, Amsterdam, 1980.

2. R. Back. A calculus of refinements for program derivations. Acta Informatica, 25:593–624, 1988.

3. R. Back. Refinement calculus, part II: Parallel and reactive programs. In REX Workshop for Refinement of Distributed Systems, volume 430 of Lecture Notes in Computer Science, Nijmegen, The Netherlands, 1989. Springer–Verlag.

4. R. Back. Refining atomicity in parallel algorithms. In PARLE Conference on Parallel Architectures and Languages Europe, Eindhoven, the Netherlands, June 1989. Springer–Verlag.

5. R. Back and R. Kurki-Suonio. Decentralization of process nets with centralized control. In 2nd ACM SIGACT-SIGOPS Symp. on Principles of Distributed Computing, pages 131–142. ACM, 1983.

6. R. Back and R. Kurki-Suonio. Distributed co-operation with action systems. ACM Transactions on Programming Languages and Systems, 10:513–554, October 1988.

7. R. J. R. Back, A. J. Martin and K. Sere. Specification of a Microprocessor. Department of Computer Science, Åbo Akademi University, Turku, Finland, 1993.

8. R. Back and K. Sere. Stepwise refinement of action systems. Structured Programming, 12:17–30, 1991.

9. R. J. R. Back and K. Sere. Action systems with synchronous communication. To appear in E.-R. Olderog, editor, IFIP TC 2 Working Conference on Programming Concepts, Methods and Calculi (PROCOMET '94), pages 107–126. Elsevier, 1994.

10. R. Back and J. von Wright. Refinement calculus, part I: Sequential programs. In REX Workshop for Refinement of Distributed Systems, volume 430 of Lecture Notes in Computer Science, Nijmegen, The Netherlands, 1989. Springer–Verlag.

11. R. Back and J. von Wright. Trace refinement of action systems. To appear in Proc. of CONCUR '94, Uppsala, Sweden, August 1994.

12. K. Chandy and J. Misra. Parallel Program Design: A Foundation. Addison–Wesley, 1988.

13. E. Dijkstra. A Discipline of Programming. Prentice–Hall International, 1976.

14. N. Francez. Cooperating proofs for distributed programs with multiparty interaction. Information Processing Letters, 32:235–242, September 1989.

15. N. Francez, B. T. Hailpern and G. Taubenfeld. SCRIPT – a communication abstraction mechanism and its verification. Science of Computer Programming, 6:35–88, 1986.

16. C. A. R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666–677, August 1978.

17. INMOS Ltd. occam Programming Manual. Prentice–Hall International, 1984.

18. C. C. Morgan. Data refinement by miracles. Information Processing Letters, 26:243–246, January 1988.

19. C. Morgan. Programming from Specifications. Prentice–Hall, 1990.

20. J. M. Morris. A theoretical basis for stepwise refinement and the programming calculus. Science of Computer Programming, 9:287–306, 1987.

21. G. Nelson. A generalization of Dijkstra's calculus. ACM Transactions on Programming Languages and Systems, 11(4):517–562, October 1989.

22. United States Department of Defense. Reference Manual for the Ada Programming Language. ANSI/MIL-STD-1815A-1983, American National Standards Institute, New York, 1983.

23. N. Wirth. The programming language Oberon. Software – Practice and Experience, 18(7), 1988.
