
ALGEBRAIC APPROACHES TO GRAPH TRANSFORMATION
PART II: SINGLE PUSHOUT APPROACH AND COMPARISON WITH DOUBLE PUSHOUT APPROACH

H. EHRIG, R. HECKEL, M. KORFF, M. LÖWE, L. RIBEIRO, A. WAGNER
Technische Universität Berlin, Fachbereich 13 Informatik, Franklinstraße 28/29, D-10587 Berlin, Germany

A. CORRADINI
Dipartimento di Informatica, Corso Italia 40, I-56125 Pisa, Italy

Abstract

The algebraic approaches to graph transformation are based on the concept of gluing of graphs, corresponding to pushouts in suitable categories of graphs and graph morphisms. This allows one not only to give an explicit algebraic or set-theoretical description of the constructions, but also to use concepts and results from category theory in order to build up a rich theory and to give elegant proofs even in complex situations. In the previous chapter we presented an overview of the basic notions and problems common to the two algebraic approaches, the double-pushout (DPO) approach and the single-pushout (SPO) approach, and their solutions in the DPO-approach. In this chapter we introduce the SPO-approach to graph transformation and some of its main results. We study application conditions for graph productions and the transformation of more general structures than graphs in the SPO-approach, where similar generalizations have been or could be studied also in the DPO-approach. Finally, we present a detailed comparison of the DPO- and the SPO-approach, especially concerning the solutions to the problems discussed for both approaches in the previous chapter.

1 Introduction

The algebraic approach to graph grammars was invented at TU Berlin in the early 1970s by H. Ehrig, M. Pfender and H.J. Schneider in order to generalize Chomsky grammars from strings to graphs [1].
Today this algebraic approach is called the double-pushout (DPO) approach, because it is based on two pushout constructions, in contrast to the single-pushout (SPO) approach, where direct derivations are defined by a single pushout in the category of graphs and partial graph morphisms. A general introduction with main application areas and an overview of both algebraic approaches is presented in [2] in this handbook.

The main aim of this chapter is to provide an introduction to the basic notions of the SPO-approach, to present some of its main results and relevant extensions and generalizations, and to compare the notions and results of the SPO-approach with the corresponding ones presented in chapter [2] (in this handbook) for the DPO-approach. After this introduction, we suggest reading first Section I.2, where many relevant concepts and results common to both algebraic approaches are introduced informally, in terms of problems.

The following section gives an introduction to the basic notions of the SPO-approach, like production, match, and (direct) derivation, and describes the historical roots of the SPO-approach. In Section 3 we give answers to the problems of Section I.2 concerning independence and parallelism, embedding of derivations, and amalgamation and distribution of derivations. In several graph grammar approaches it is possible to formulate application conditions for individual productions, but in very few cases can the theoretical results be extended to include these conditions. In Section 4 the SPO-approach and the results concerning independence and parallelism are extended to allow for user-defined application conditions, as recently done in [3]. In Section 5 we show how the SPO-approach can be applied to more general kinds of graphs and structures, like attributed graphs [4], graph structures [5] (including, for example, hypergraphs), generalized graph structures [6], and even more general structures in high-level replacement systems [7].
In Section 6 we compare the main concepts and results of the two approaches, presented in [2] and this chapter, w.r.t. the problems stated in the overview of both approaches in Section I.2. Finally, in the conclusion we point out that both approaches are suitable for different application areas and discuss some main ideas for further research.

2 Graph Transformation Based on the SPO Construction

In this section we introduce the basic notions of the single-pushout approach to graph transformation. In order to be able to follow the formal definitions of this section the reader should be familiar with the notions introduced in Section I.3 for the double-pushout approach, although the informal motivation in the following can be understood without any preliminary knowledge.

2.1 Graph Grammars and Derivations in the SPO Approach

As an introductory example we use the model of a (simple version of a) Pacman game. (Throughout this chapter, references to [2] are preceded by "I." (for Part I).)

[Figure 1: Pacman Game, showing the productions kill, moveP, and eat, and the Pacman graph PG]

Example 1 (Pacman game). The game starts with a number of figures (Pacman, ghosts, and apples) placed on a board. Pacman and each of the ghosts may autonomously move from field to field (up, down, left, right). Apples do not move. Pacman wins if he manages to eat all the apples before he is killed by one of the ghosts.

Each of the states of the Pacman game can easily be modeled as a graph (see Figure 1). Vertices are used to represent Pacman, ghosts and apples as well as the fields of the board. These different kinds of vertices are distinguished using different graphical layouts (the obvious ones for Pacman, ghosts and apples, and black dots for the fields). Edges pointing from a vertex representing a figure to a field vertex model the current position of the figure on the board. Neighborhood relations between fields are explicitly represented using edges, too.

Each possible activity in the Pacman game causes a change of the current state and hence is modeled by a transformation of the graph representing this state. To describe which activities, i.e., which graph transformations, are possible, we use productions. The production moveP in Figure 1 describes that Pacman moves by one field. All objects are preserved, but the edge pointing from the Pacman vertex to the field vertex is deleted and a new one is inserted. The production can be applied to a graph if Pacman is on a field which has a neighbor. As the effect of the application of this production, Pacman moves to a new position while everything else is left unchanged. Such an application of a production to a current graph is called a derivation or transformation. To move the ghosts we have a production (moveG) which is not drawn but which follows the same scheme. The production eat can be applied when Pacman and an apple are on the same field. The result is that the apple and its outgoing edge are deleted.
Analogously, if Pacman and a ghost are on the same field, the production kill can be applied and Pacman is deleted. □

Graphs and (total) graph morphisms have been formally introduced in Definition I.6. In contrast to the spans of total morphisms used in the DPO-approach, we use partial morphisms in order to describe, for example, the relationship between the given and the derived graph in a derivation. The aim is to model a direct derivation by a single pushout instead of two.

Definition 1 (partial graph morphism). Let G = (G_V, G_E, s_G, t_G, lv_G, le_G) be a graph. Recall that G_V, G_E denote the sets of vertices and edges of G, s_G, t_G its source and target mappings, and lv_G, le_G the label assignments for vertices and edges, respectively. A subgraph S of G, written S ⊆ G or S ↪ G, is a graph with S_V ⊆ G_V, S_E ⊆ G_E, s_S = s_G|S_E, t_S = t_G|S_E, lv_S = lv_G|S_V, and le_S = le_G|S_E. A (partial) graph morphism g from G to H is a total graph morphism from some subgraph dom(g) of G to H, and dom(g) is called the domain of g. □

By defining composition of these morphisms by composition of the components, and identities as pairs of component identities (note that a partial morphism can also be considered as a pair of partial functions), the graphs over a fixed labeling alphabet and the partial morphisms among them form a category, denoted by GraphP.

Usually, a production L -p-> R consists of two graphs L and R, called the left- and the right-hand side, respectively, and a partial morphism p between them. The idea is to describe in the left-hand side which objects a graph must contain for the production to be applicable, i.e., all objects (vertices and edges) which shall be deleted, together with the application context which shall be preserved. The right-hand side describes how this part of the graph shall look after the transformation, i.e., it consists of all the objects which are added, together with the application context. The morphism p makes clear which objects in the left-hand side correspond to which ones in the right-hand side. Objects on which the morphism is undefined are deleted. All objects which are preserved by the morphism form the application context. If an object in the right-hand side of the production has no preimage under the morphism, it is added. Note that the role of the application context coincides with that of the interface graph K in DPO productions (see Definition I.7). In the formal definition below the concept of production is enriched with a name, which is used to describe the internal structure of composed productions such as, for example, the parallel production (see also the corresponding Definition I.7 for the DPO-approach).

Definition 2 (production, graph grammar). A production p : (L -r-> R) consists of a production name p and an injective partial graph morphism r, called the production morphism. The graphs L and R are called the left- and the right-hand side of p, respectively. If no confusion is possible, we will sometimes refer to a production p : (L -r-> R) simply as p, or also as L -r-> R.

A graph grammar G is a pair G = ⟨(p : r)_{p∈P}, G0⟩, where (p : r)_{p∈P} is a family of production morphisms indexed by production names, and G0 is the start graph of the grammar. □

Since production names are used to identify productions, for example in direct derivations, they should be unique, i.e., there must not be two different productions with the same name. This is ensured by the above definition of graph grammar. Later on, production names will be used to store the structure of productions which are composed from elementary productions. In such cases the production name is usually a diagram, consisting of the elementary productions and their embeddings into the composed production. For elementary productions, production names are sometimes disregarded. Then we just write p : L → R in order to denote the production p : (L -p-> R), where the production morphism p is also used as the name of the production. In this section all productions are elementary and we use the notation just introduced.

Example 2 (production). The figure below shows a production r which inserts a loop and deletes two vertices (•2 and •3, respectively). We consider the labeling alphabet to contain only one element •, i.e., the graph can be seen as unlabeled. The application context consists of the vertex •1. As indicated by the dashed line, r is defined for vertex •1, mapping it to the only vertex of the right-hand side. r is undefined for the other two vertices. The numbers are used to illustrate the morphism, so that we can omit the dashed lines later on when graphs are more complex. Hence graph L can, for example, be formally denoted by L = ({•1, •2, •3}, ∅, s_L, t_L, lv_L, le_L), where s_L, t_L and le_L are empty functions and lv_L is defined by lv_L(•i) = • for i = 1..3.

[Diagram: production r : L → R, with L consisting of the vertices 1, 2, 3 and R of the single vertex 1 carrying a loop]

□

Using partial instead of total morphisms, the categorical notion of pushout captures not only the intuitive idea of gluing but also that of gluing and deletion. Therefore the construction is much more complicated. In order to make it easier to understand, we have divided it into three steps. The first two steps are gluings as known from Section I.3. In the third step the deletion is performed. Intuitively, deletion is not seen here as the inverse of addition, but as the equalizing of morphisms having different domains of definition. Two morphisms with the same source and target objects are reduced to their equal kernel by removing from their range all those items which have different preimages under the morphisms. Formally this is captured by the categorical notion of a co-equalizer.

Definition 3 (co-equalizer). Given a category C and two arrows a : A → B and b : A → B of C, a tuple ⟨C, c : B → C⟩ is called a co-equalizer of ⟨a, b⟩ if c ∘ a = c ∘ b and if for all objects D and arrows d : B → D with d ∘ a = d ∘ b there exists a unique arrow u : C → D such that u ∘ c = d. □

[Diagram: a, b : A → B with co-equalizer c : B → C]
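To make this concrete, the following is a small executable sketch of the specific co-equalizer that Construction 4 below builds in GraphP, including the dangling-edge deletion illustrated in Example 3. The encoding (a graph as a pair of a vertex set and an edge dict, a partial morphism as a dict on item identities) is chosen here for illustration and is not from the chapter.

```python
def specific_coequalizer(a, b, B):
    """Co-equalizer object C of a, b : A -> B in GraphP, assuming a and b
    agree wherever both are defined (the premise of Construction 4).
    An item of B survives iff it is hit by both a and b, or by neither;
    edges that lose an endpoint (dangling edges) are removed as well,
    so that C is again a graph."""
    BV, BE = B                      # vertex set, {edge_id: (src, tgt)}
    img_a, img_b = set(a.values()), set(b.values())
    keep = lambda x: (x in img_a) == (x in img_b)
    CV = {v for v in BV if keep(v)}
    CE = {e: (s, t) for e, (s, t) in BE.items()
          if keep(e) and s in CV and t in CV}
    # the co-equalizer morphism c : B -> C is the partial identity
    # with dom(c) = C
    return CV, CE
```

Replaying Example 3: with a the empty morphism and b a total morphism hitting vertex 2 of B, vertex 2 is removed (preimage under b but not under a) and the edge incident to it is removed as a dangling edge, leaving only vertex 1.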


For our pushout construction of partial morphisms a very specific co-equalizer is sufficient, which is constructed in the following.

Construction 4 (specific co-equalizer in GraphP). Let a, b : A → B be two (partial) morphisms such that for each x ∈ A, if both a and b are defined on x, then a(x) = b(x). Then the co-equalizer of a and b in GraphP is given by ⟨C, c⟩, where

- C ⊆ B is the largest subgraph of [a(A) ∩ b(A)] ∪ [B − (a(A) ∪ b(A))], i.e., of those items of B which are in the image of both a and b or in the image of neither, and
- dom(c) = C and c is the identity morphism on its domain.

Proof. It is obvious that c ∘ a = c ∘ b. The universal co-equalizer property follows from the fact that for each other morphism d : B → D with d ∘ a = d ∘ b we have dom(d) ⊆ dom(c). □

Example 3 (deletion as a co-equalizer). Figure 2 shows the co-equalizer of the empty morphism a and a total morphism b in GraphP. According to Construction 4, C is the maximal subgraph of B such that for all x ∈ C, either x ∈ a(A) ∩ b(A), or x ∉ a(A) and x ∉ b(A). Thus, vertex 2 is deleted because it has a preimage under b but not under a. In order to obtain a graph, the dangling edge is deleted, too. □

With these preliminaries we are able to construct the pushout of two partial morphisms in GraphP.

Proposition 5 (pushout in GraphP). The pushout of two morphisms b : A → B and c : A → C in GraphP always exists and can be computed in three steps, as shown in Figure 3:

[Figure 2: Deletion as a co-equalizer]

[Figure 3: Construction of the pushout in GraphP]

[gluing 1] Construct the pushout ⟨C', A → C', C → C'⟩ of the total morphisms dom(c) → C and dom(c) → A in Graph (cf. Example I.3).

[gluing 2] Construct the pushout ⟨D, B → D, C' → D⟩ of the total morphisms dom(b) → A → C' and dom(b) → B in Graph.

[deletion] Construct the co-equalizer ⟨E, D → E⟩ of the partial morphisms A → B → D and A → C → C' → D in GraphP.

Then ⟨E, C → C' → D → E, B → D → E⟩ is the pushout of b and c in GraphP.

Proof. Commutativity holds by construction. Assume that there is ⟨F, C → F, B → F⟩ with A → B → F = A → C → F. One proves that dom(b) → B → F = dom(b) → A → C' → F. Using the fact that pushouts in Graph are pushouts in GraphP [5], one constructs a universal morphism D → F. With the universal co-equalizer property we obtain a unique morphism E → F with B → D → E → F = B → F and C → C' → D → E → F = C → F. □

To apply a production to a graph G, one has to make sure that G contains at least all objects necessary for performing the desired operations, as well as the

application context. These are exactly the objects of the production's left-hand side L. Hence we introduce the notion of a match as a total morphism matching the left-hand side of the production with (a part of) G. The match must be total, because otherwise not all needed objects would have a corresponding image in the graph G. We allow different items from L to be mapped onto the same item in G; this avoids additional productions in some cases. The actual transformation of G is performed by removing the part matched by the production's left-hand side and adding the right-hand side. The result is the derived graph H.

[Figure 4: Direct derivation as pushout in GraphP, i.e., the square L → R, L → G, G → H, R → H marked (PO)]

Definition 6 (match, derivation). A match for r : L → R in some graph G is a total morphism m : L → G. Given a production r and a match m for r in a graph G, the direct derivation from G with r at m, written G ⇒r,m H, is the pushout of r and m in GraphP, as shown in Figure 4. A sequence of direct derivations of the form ρ = (G0 ⇒r1,m1 ... ⇒rk,mk Gk) constitutes a derivation from G0 to Gk by r1, ..., rk, briefly denoted by G0 ⇒* Gk. The graph language generated by a graph grammar G is the set of all graphs Gk such that there is a derivation G0 ⇒* Gk using productions of G. □

To construct the direct derivation from a graph G with r at m we use Proposition 5. Since the match m is a total morphism, step one of the construction is trivial and can be omitted.

Example 4 (direct derivation as pushout construction). The production moveP from Example 1 can be applied to the graph PG (Figure 1) at a match m, leading to the direct derivation shown in Figure 5. Remember that, due to the totality of the match, the first step of the construction given in Proposition 5 is trivial. Hence we start with the second step: an edge between Pacman and the vertex modeling the field he moves to is added by performing the pushout of the total morphisms dom(moveP) → LmoveP → PG and dom(moveP) → RmoveP.
Afterwards, the co-equalizer of LmoveP → RmoveP → PG' and LmoveP → PG → PG' leads to the derived graph PH, which is the subgraph of PG' in which the edge between Pacman and the vertex representing his old position is deleted. □
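For a total, conflict-free match the construction of Proposition 5 collapses to one gluing followed by one deletion, which can be sketched in code as follows. The encoding (graphs as pairs of a vertex set and an edge dict, morphisms as dicts on item identities) and all names are illustrative choices of this sketch, not the chapter's.

```python
def direct_derivation(L, R, r, m, G):
    """Sketch of a direct derivation G =r,m=> H for a total, conflict-free
    match m: glue in the items created by r, then delete the match images
    of the items on which r is undefined; edges losing an endpoint
    (dangling edges) are deleted implicitly."""
    LV, LE = L
    RV, RE = R
    GV, GE = G
    r_image = set(r.values())
    # representative in H of each R-item: the match image of its r-preimage
    # if it is preserved, otherwise a fresh copy tagged 'new'
    rep = {r[l]: m[l] for l in r}
    for x in RV | set(RE):
        rep.setdefault(x, ('new', x))
    # deletion step: match images of the L-items on which r is undefined
    deleted = {m[l] for l in LV | set(LE) if l not in r}
    HV = (GV - deleted) | {rep[v] for v in RV if v not in r_image}
    HE = {e: (s, t) for e, (s, t) in GE.items()
          if e not in deleted and s not in deleted and t not in deleted}
    for e, (s, t) in RE.items():
        if e not in r_image:              # edge created by the production
            HE[rep[e]] = (rep[s], rep[t])
    return HV, HE
```

Applied to a moveP-like production (position edge outside dom(r), new position edge only in R), this deletes the old position edge of the host graph and attaches a fresh edge to the preserved Pacman and field vertices, mirroring the derivation of Example 4.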

[Figure 5: Direct derivation (example)]

In the example derivation above, deletion is rather intuitive. In contrast, Figure 6 depicts a situation where, on the left, a vertex is meant to be preserved as well as deleted. The result of this derivation is the empty graph, i.e., deletion has priority. This property of the approach is caused by the order of the construction steps: first the gluing of conflicting items, and then their deletion. The problem of the dangling edge on the right of Figure 6 is solved similarly: it is deleted, too. This is illustrated in Example 3.

In the following we characterize simple deletion by properties of the match. If a match is d(elete)-injective, then conflict situations are avoided. If it is d-complete, deletion does not cause the implicit deletion of dangling edges.

Definition 7 (special matches). Let m : L → G be a match for a production r : L → R. Then m is called conflict-free if m(x) = m(y) implies x, y ∈ dom(r) or x, y ∉ dom(r). It is called d-injective if m(x) = m(y) implies x = y or x, y ∈ dom(r). Finally, m is d-complete if for each edge e ∈ G_E with s_G(e) ∈ m_V(L_V − dom(r)_V) or t_G(e) ∈ m_V(L_V − dom(r)_V) we have e ∈ m_E(L_E − dom(r)_E). □

Lemma 8 (pushout properties). If (H, r* : G → H, m* : R → H) is the pushout of r : L → R and m : L → G in GraphP, then the following properties are fulfilled:

1. Pushouts preserve surjectivity, i.e., r surjective implies r* surjective.
2. Pushouts preserve injectivity, i.e., r injective implies r* injective.
3. r* and m* are jointly surjective.
4. m conflict-free implies m* total.

Proof. See [8]. □
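The three match properties of Definition 7 are directly executable. The following predicates spell them out under an illustrative encoding of my own (matches as dicts on item identities, dom_r the set of L-items on which r is defined, graphs as pairs of a vertex set and an edge dict); they are a sketch, not the chapter's formalism.

```python
def conflict_free(m, dom_r):
    """m(x) = m(y) only if x and y are both in dom(r) or both outside it."""
    return all((x in dom_r) == (y in dom_r)
               for x in m for y in m if m[x] == m[y])

def d_injective(m, dom_r):
    """m identifies two distinct items only if both are preserved."""
    return all(x == y or (x in dom_r and y in dom_r)
               for x in m for y in m if m[x] == m[y])

def d_complete(m, dom_r, L, G):
    """Every G-edge touching a vertex deleted via m is itself the match
    image of a deleted L-edge (no implicitly deleted dangling edges)."""
    LV, LE = L
    GV, GE = G
    deleted_v = {m[v] for v in LV if v not in dom_r}
    deleted_e = {m[e] for e in LE if e not in dom_r}
    return all(e in deleted_e for e, (s, t) in GE.items()
               if s in deleted_v or t in deleted_v)
```

For instance, a match sending a deleted and a preserved L-vertex to the same G-vertex (the conflict of Figure 6, left) fails conflict_free, and a match deleting a vertex with an unmatched incident edge (Figure 6, right) fails d_complete.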

[Figure 6: Deletion with conflicts]

2.2 Historical Roots of the SPO-Approach

Single-pushout transformations in a setting of some sort of partial morphisms have already been investigated by Raoult [9] and Kennaway [10]. The following historical remarks are taken from [5].

Raoult [9] introduces two conceptually very different approaches. The first one is described in the category of sets and partial mappings. A rule is a partial morphism r : L → R, i.e., a partial map which respects the graph structure on all objects of L on which it is defined. A match m : L → G in some graph G is a total morphism of this type. The result of applying r at m is constructed in two steps. First, the pushout (H, r* : G → H, m* : R → H) of r and m in the category of sets and partial maps is built. In the second step, a graph structure is established on H such that the pushout mappings r* and m* become morphisms. He characterizes the situations in which this graph structure exists uniquely; double-pushout transformations with gluing conditions are special cases of these situations. The second model of graph transformation in [9] uses another kind of partiality for the morphisms: a rule is a total map r : L → R which is only partially compatible with the graph structure. Let rewrite(r) denote the set of objects which are not homomorphically mapped by r. A match m : L → G is total, which now means rewrite(m) = ∅. Application of r at m is again defined in two steps. First construct the pushout (H, r* : G → H, m* : R → H) of r and m in the category of sets and total mappings, and then impose a graph structure on H such that the pushout mappings become as compatible as possible, i.e., such that rewrite(r*) = m(rewrite(r)) and rewrite(m*) = r(rewrite(m)). Raoult [9] gives sufficient conditions for the unique existence of this structure. This approach has the major disadvantage that objects cannot be deleted at all.

Kennaway [10] provides a categorical description of the second approach of [9]. Graphs are represented in the same way.
Morphisms f : A → B are pairs (f, hom). The first component is a total mapping from A to B. The second component provides a subset of A on which f respects the graph structure. A

rule r : L → R is any morphism in this sense, and a match m : L → G is a total morphism, which now means hom = L. He shows that under certain conditions the two-step construction of [9] coincides with the pushout construction in the category of graphs and the so-defined morphisms. Unfortunately, only sufficient conditions for the existence of pushouts are given. Besides that, object deletion remains impossible. The concept of [10] has been further developed in [11], which introduces "generalized graph rewriting" using the same kind of graph morphism. The corresponding transformation concept involves not only a pushout construction but also a co-equalizer. Since both constructions are carried out in different categories (of total resp. partial morphisms), theoretical results are difficult to obtain.

The SPO-approach of [5], as discussed above, is closely related to the first approach in [9]. Raoult's concept of partial mappings which are compatible with the graph structure on their domain can be generalized to a concept of partial homomorphisms on special categories of algebras, such that the pushout construction in these categories is always possible. Hence we get rid of any application conditions. If, however, the necessary and sufficient conditions of [9] are satisfied, the construction of pushout objects coincides with his two-step construction.

Recently, Kennaway [12] independently started to study graph transformation in categories of partial morphisms of this type. His work is based on the categorical formulation of a partial morphism provided by [13]. While [5] considers concrete algebraic categories, [12] stays in a purely categorical framework. Future research has to show how both approaches can benefit from each other. Van den Broek [14] introduces another kind of single-pushout transformation based on "partial" morphisms. Partiality in this framework is described by total morphisms which map objects "outside their domain" to marked objects in their codomain.
In this chapter we follow the SPO-approach as introduced in [8,5] and extended in several subsequent papers mentioned below.

3 Main Results in the SPO-Approach

In this section we present some of the main results of the SPO-approach. They are concerned with the conceptual notions of parallelism, context embedding, synchronization and distribution, interpreted in the world of SPO derivations. The main reason for this choice is the fact that a most natural and very basic view of a graph transformation system is to see a graph as a system state and a production as a syntactical description of corresponding state transformations, obtained by direct derivations. By further (informal) arguments certain derivations can be classified as being concurrent, embedded into each other, or (somehow) synchronized or distributed.

Complementarily, on the syntactical level, different ways of composing productions can be considered, generating an integrated description, i.e., the composed production. As for derivations, we consider parallel, derived, and synchronized/amalgamated productions. An elementary production essentially provides a description of the direct derivations it may perform. A composed production additionally contains all the information about its construction from elementary productions, which is stored in the production name. If, e.g., p : r is a parallel production, the production name p contains the elementary productions and their embeddings into the resulting production morphism r. The (complete or partial) correspondence between composed derivations and derivations using (correspondingly) composed productions provides the essential results of this section, which may be seen as answers to the problems stated in Section I.2. Related results are mentioned briefly.

3.1 Parallelism

There are essentially two different ways to model concurrent computations, the interleaving model and the truly concurrent model.
This subsection provides the basic concepts of these models in the world of SPO graph rewriting, including the solutions to the Local Church-Rosser Problem I.1 and the Parallelism Problem I.2.

Interleaving

In an interleaving model two actions are considered to be concurrent (i.e., potentially in parallel) if they may occur in any order with the same result. Modeling single actions by direct derivations, we introduce two notions of independence, formalizing this concept from two different points of view. The condition of parallel independence shall ensure that two alternative direct derivations are not mutually exclusive. We may safely expect this if these direct derivations act on totally disjoint parts of the given graph. More precisely, a direct derivation d1 = (G ⇒p1,m1 H1) does not affect a second one d2 = (G ⇒p2,m2 H2) if d1 does not delete elements of G which are accessed by d2. In other words, the overlap of the left-hand sides of p1 and p2 in G must not contain elements which are deleted by p1. The vertices deleted from G by d1 are those in m1(L1 − dom(p1)). An edge is deleted from G if it is in m1(L1 − dom(p1)) or, as an additional feature of the SPO-approach, if one of its incident vertices is deleted. Formally, we obtain the following definition of weakly parallel independent derivations.

Definition 9 (parallel independence). Let d1 = (G ⇒p1,m1 H1) and d2 = (G ⇒p2,m2 H2) be two alternative direct derivations. Then we say that d2 is weakly parallel independent of d1 iff m2(L2) ∩ m1(L1 − dom(p1)) = ∅. We call the derivations d1 and d2 parallel independent if they are mutually weakly parallel independent. □

[Diagram: the two pushout squares for d1 and d2 sharing G, with productions p1, p2, matches m1, m2, co-productions p1*, p2* and co-matches m1*, m2*]

Weak parallel independence can be characterized in more abstract terms on a categorical level.

Characterization 10 (parallel independence). Let d1 = (G ⇒p1,m1 H1) and d2 = (G ⇒p2,m2 H2) be two direct derivations. Then d2 is weakly parallel independent of d1 if and only if p1* ∘ m2 : L2 → H1 is a match for p2.

Proof. Let there be a match m2' = p1* ∘ m2 as above. Then m2(L2) ⊆ dom(p1*) by the definition of composition. The commutativity of pushouts provides m1(L1 − dom(p1)) ∩ dom(p1*) = ∅, which is the desired weak parallel independence. Vice versa, we must show that m2' = p1* ∘ m2 is total. According to the construction of a derivation, each vertex which is not explicitly deleted by p1 is preserved, i.e., v ∈ G − m1(L1 − dom(p1)) implies v ∈ dom(p1*). This implies that each preimage of v under m2 also has an image in H1. Each edge which is not explicitly deleted by p1 is either (i) preserved or (ii) implicitly deleted by the deletion of an incident vertex, i.e., for each edge e ∈ G − m1(L1 − dom(p1)) we either have (i) e ∈ dom(p1*), which inherits the argument for vertices; otherwise, in case (ii), s_G(e) ∈ m1(L1 − dom(p1)) or t_G(e) ∈ m1(L1 − dom(p1)). By the definition of parallel independence this implies s_G(e) ∉ m2(L2) or t_G(e) ∉ m2(L2), which, by the definition of graph morphisms, excludes e ∈ m2(L2). In other words, e ∈ m2(L2) implies e ∈ dom(p1*). □

The condition of sequential independence shall ensure that two consecutive direct derivations are not causally dependent. A direct derivation is weakly sequentially independent of a preceding one if it could already have been performed before that one.
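Before turning to the sequential case, note that the condition of Definition 9 reduces to a finite check on match images. The following sketch (matches as dicts on item identities, dom_p1 and dom_p2 the item sets on which the productions are defined; an encoding of my own, not the chapter's) checks only the explicit deletion condition of the definition; the implicit deletion of edges incident to deleted vertices, discussed above, would additionally require the incident-edge analysis from Characterization 10.

```python
def weakly_parallel_independent(m2, m1, dom_p1):
    """d2 is weakly parallel independent of d1 iff
    m2(L2) does not meet m1(L1 - dom(p1)), the items deleted by d1."""
    deleted_by_d1 = {m1[x] for x in m1 if x not in dom_p1}
    return all(m2[y] not in deleted_by_d1 for y in m2)

def parallel_independent(m1, dom_p1, m2, dom_p2):
    """Mutual weak parallel independence (Definition 9)."""
    return (weakly_parallel_independent(m2, m1, dom_p1) and
            weakly_parallel_independent(m1, m2, dom_p2))
```

For example, if d1 deletes the image of an L1-item that d2's match also touches, the first predicate fails, reflecting that the two derivations are mutually exclusive.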
Analogously to the parallel case above, weak sequential independence requires that the overlap of the right-hand side of the first production and the left-hand side of the second must not contain elements which were generated by the first. The idea of the stronger notion of sequential independence is that, additionally, the second derivation does not delete anything which was needed by the first.

Definition 11 (sequential independence). Let d1 = (G ⇒p1,m1 H1) and d2' = (H1 ⇒p2,m2' X) be two consecutive direct derivations. Then we say that d2' is weakly sequentially independent of d1 if m2'(L2) ∩ m1*(R1 − p1(L1)) = ∅. If additionally m2'(L2 − dom(p2)) ∩ m1*(R1) = ∅, we say that d2' is sequentially independent of d1, and the derivation (G ⇒p1,m1 H1 ⇒p2,m2' X) is called sequentially independent. □

[Diagram: the composed derivation squares G ⇒ H1 ⇒ X with productions p1, p2, matches m1, m2', co-productions p1*, p2* and co-matches m1*, m2'*]

Again we provide a categorical characterization. The formulation of this statement is analogous to the case of weak parallel independence; the proof has therefore been omitted.

Characterization 12 (sequential independence). Assume two direct derivations d1 = (G ⇒p1,m1 H1) and d2' = (H1 ⇒p2,m2' X) to be given. Then d2' is weakly sequentially independent of d1 iff there is a match m2 : L2 → G for p2 such that m2' = p1* ∘ m2. The derivation d2' is sequentially independent of d1 iff d2' is weakly sequentially independent of d1 and d1 is weakly parallel independent of the correspondingly existing derivation d2 = (G ⇒p2,m2 H2). □

By definition, the weak parallel independence of two direct derivations implies the existence of consecutive direct derivations. The definition of weakly sequentially independent derivations contains a symmetric implication. The argumentation can be summarized in the following way: weak parallel independence allows one to delay a derivation; complementarily, weak sequential independence allows one to anticipate a derivation. Formally this is captured by the following lemma.

Lemma 13 (weak independence).
Given a direct derivation d1 = (G ⇒p1,m1 H1), the following statements are equivalent:

1. There is a direct derivation d2 = (G ⇒p2,m2 H2) which is weakly parallel independent of d1.
2. There is a direct derivation d2' = (H1 ⇒p2,m2' X) which is weakly sequentially independent of d1.

Up to isomorphism, a bijective correspondence between 1 and 2 above is given by m2' = p1* ∘ m2 and m2 = (p1*)^{-1} ∘ m2'.

[Figure 7: Local Church-Rosser, showing the derivation squares (1) and (2) and the pushout (3) of p1* and p2*]

Proof. This is a direct consequence of Characterizations 10 and 12 together with the fact that, by Lemma 8, the injectivity of productions implies the injectivity of p1*, and thus m2 = (p1*)^{-1} ∘ p1* ∘ m2 as well as m2' = p1* ∘ (p1*)^{-1} ∘ m2'. □

According to our interpretation, the conditions of parallel and sequential independence formalize the concept of concurrent computation steps. In a more abstract sense, two such steps are concurrent if they may be performed in any order with the same result. The following Local Church-Rosser Theorem shows the correctness of the characterization of concurrent direct derivations by their parallel and sequential independence. It provides the solution to the Local Church-Rosser Problem I.1.

Theorem 14 (local Church-Rosser). Let d1 = (G ⇒p1,m1 H1) and d2 = (G ⇒p2,m2 H2) be two direct derivations. Then the following statements are equivalent:

1. The direct derivations d1 and d2 are parallel independent.
2. There is a graph X and direct derivations H1 ⇒p2,m2' X and H2 ⇒p1,m1' X such that G ⇒p1,m1 H1 ⇒p2,m2' X and G ⇒p2,m2 H2 ⇒p1,m1' X are sequentially independent derivations.

Up to isomorphism, a bijective correspondence between 1 and 2 above is given by m2' = p1* ∘ m2 and m1' = p2* ∘ m1.

Proof. Consider the diagram in Figure 7. Subdiagrams (1) and (2) depict the derivations d1 and d2, respectively. Subdiagram (3) represents the pushout of p1* and p2*. The composition of the pushout diagrams (2) and (3) yields a pushout diagram (2)+(3) which is a derivation H1 ⇒p2,m2' X, provided that m2' = p1* ∘ m2 is a match, i.e., total. But this is ensured since d1 and d2 have been required to be parallel independent. Analogously we obtain a derivation H2 ⇒p1,m1' X by composing the pushout diagrams (1) and (3). The stated sequential independence and the bijective correspondence follow from Characterization 12 and Lemma 13.
□

Explicit Parallelism

In contrast to the interleaving model above, a truly parallel computation step has to abstract from any possible interleaving order, i.e., it must not generate any intermediate state. Therefore, a parallel direct derivation is a simultaneous application of productions, which are combined into a single parallel production. Constructing this production as the disjoint union of the given elementary productions reflects the view that the overall effect of a parallel production application can be described by a 'most independent' combination of both component descriptions. The formal definition below uses the fact that the disjoint union of graphs (obtained from the disjoint union of carrier sets) can be characterized categorically as a coproduct. The construction of the parallel production as a coproduct of elementary productions is recorded in the production name p1 + p2, which is given by the coproduct diagram below. This notation is well-defined, i.e., the diagram below is uniquely determined by the term p1 + p2, if we fix a certain coproduct construction (like, for example, the disjoint union).

[Coproduct diagram: p1: L1 -> R1 and p2: L2 -> R2, with injections in1L, in2L into L1 + L2 and in1R, in2R into R1 + R2, and the induced morphism p: L1 + L2 -> R1 + R2.]

Definition 15 (parallel productions and derivations). Given two productions p1: L1 -> R1 and p2: L2 -> R2, the parallel production p1 + p2: L1 + L2 -> R1 + R2 is composed of the production name p1 + p2, i.e., the diagram above, and the associated partial morphism p. The graphs L1 + L2 and R1 + R2 (together with the corresponding injections in1L, in2L, in1R, and in2R) are the coproducts of L1, L2 and R1, R2, respectively. The partial morphism p: L1 + L2 -> R1 + R2 is induced uniquely by the universal property of the coproduct L1 + L2 such that p ∘ in1L = in1R ∘ p1 and p ∘ in2L = in2R ∘ p2. The application of a parallel production p1 + p2 at a match m constitutes a direct parallel derivation, denoted by G =p1+p2,m=> X. By referring to the morphisms m1 = m ∘ in1L and m2 = m ∘ in2L we also write G =p1+p2,m1+m2=> X.
□

Direct parallel derivations provide us with an explicit notion of parallelism, which shall now be related to the interleaving model formulated before. The Parallelism Problem I.2 asked for conditions allowing the sequentialization of a parallel derivation, and the parallelization of a sequential one. The following Weak Parallelism Theorem answers these questions in the SPO-approach.
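The coproduct construction behind the parallel production of Definition 15 can be sketched on bare node sets (a minimal sketch, not the chapter's formal construction: graphs are reduced to node sets, productions to partial maps given as Python dicts where an absent key means deletion, and the helper names are invented for this example):

```python
def disjoint_union(a, b):
    """Coproduct of two node sets: tag elements so the two copies cannot collide."""
    return {(1, x) for x in a} | {(2, y) for y in b}

def parallel_production(p1, p2):
    """The induced partial morphism p: L1 + L2 -> R1 + R2 satisfying
    p ∘ in1L = in1R ∘ p1 and p ∘ in2L = in2R ∘ p2."""
    p = {(1, x): (1, y) for x, y in p1.items()}
    p.update({(2, x): (2, y) for x, y in p2.items()})
    return p

# p1 deletes node 'a' (undefined on it) and preserves 'b'; p2 preserves 'c'
p1 = {"b": "b"}
p2 = {"c": "c"}
L = disjoint_union({"a", "b"}, {"c"})
p = parallel_production(p1, p2)

assert (1, "a") in L and (1, "a") not in p   # deletion carries over to p1 + p2
assert p[(1, "b")] == (1, "b") and p[(2, "c")] == (2, "c")
```

Fixing the tagging scheme fixes one concrete coproduct, which is exactly what makes the production name p1 + p2 well-defined, as remarked above.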


Figure 8: Weak Parallelism

Theorem 16 (weak parallelism). Given productions p1: L1 -> R1 and p2: L2 -> R2, the following statements are equivalent:
1. There is a direct parallel derivation G =p1+p2,m1+m2=> X such that G =p2,m2=> H2 is weakly parallel independent of G =p1,m1=> H1.
2. There is a weakly sequentially independent derivation G =p1,m1=> H1 =p2,m2'=> X.
Up to isomorphism, a bijective correspondence between 1. and 2. above is given by m2' = p1* ∘ m2.

Proof. Constructing the parallel derivation means first to construct the colimits (coproducts) for in1L and in2L as well as for in1R and in2R, and then, in a second step, the colimit (pushout) of p and m1 + m2. In other words, it means to construct the colimit of the diagram given by p1, m1 and p2, m2 as depicted in Figure 8 above. Consider now Figure 7 of the Local Church-Rosser Theorem. This, too, shows a colimit constructed from the diagram given by p1, m1 and p2, m2. Due to the commutativity of colimit construction (see [15]), which ensures that all colimits can be obtained iteratively by composing partial colimits, we conclude that both of these constructions coincide.(c) So the result of the parallel derivation can equivalently be obtained by first constructing the colimits (pushouts) of p1, m1 and p2, m2, and second the colimit (pushout) of the resulting morphisms p1* and p2*. But this means first to construct the direct derivations d1 = (G =p1,m1=> H1) and d2 = (G =p2,m2=> H2), represented by subdiagrams (1) and (2) in Figure 7, respectively. Their presupposed weak parallel independence then leads to a derivation d2' = (H1 =p2,m2'=> X) represented by subdiagram (2)+(3). By Definition 10 we finally observe that d2' is weakly sequentially independent of d1, as required. Vice versa, given an arbitrary sequentially independent derivation, we observe that all the arguments above can be reversed uniquely. The bijective correspondence between 1. and 2. is due to Lemma 13. □

(c) I.e., both colimits may differ at most up to isomorphism, but each colimit obtained in one way can also be obtained in the other way.

Figure 9: Killing and eating can be parallelized

Before looking at the counterexample below, the typical situation of Theorem 16 shall be illustrated by the Pacman game.

Example 5 (killing is weakly parallel independent of eating). In Figure 9 a situation is shown where Pacman, a ghost, and an apple are on the same field. We observe that two production applications are possible: Pacman may eat an apple, and the ghost may kill Pacman. Weak parallel independence allows the ghost to be merciful: Pacman may eat his last apple; nevertheless, the ghost will in both situations G and H1 be sure of his victim. In other words, killing is weakly independent of eating. In contrast, eating is not at all weakly independent of killing, because killing implies that eating becomes impossible. The Weak Parallelism Theorem 16 now ensures that the procedure of eating the last apple and being killed can be shortcut by applying the corresponding parallel production. □

The following example proves that there are parallel derivations which cannot be sequentialized.

Figure 10: A parallel derivation which cannot be sequentialized

Example 6 (non-sequentializable parallel derivations). Consider a production ∅: L -> R where both L and R contain exactly one vertex, and the partial morphism is undefined on it. Let L =∅+∅,id+id=> X be a parallel derivation with ∅ + ∅. Note that the two component matches m1 = m2 = id: L -> L are parallel dependent, since the vertex in L is meant to be deleted. However, the derived graph X contains two vertices. Clearly this effect cannot be obtained by applying ∅ in two sequential steps. Let us compare this formally obtained result with our intuition: we observe that ∅ deletes a vertex and regenerates it afterwards. Correspondingly, the parallel production deletes two vertices and regenerates two. Hence we might expect that a match by which the two vertices in L + L are identified leads to a co-match (m1 + m2)*: R + R -> X which identifies the two vertices. This does not happen in the parallel derivation, however, which is due to the fact that, formally, the vertices in L and R are completely unrelated (i.e., there is no formal notion of 're'-generation). Hence the two applications of ∅ generate two different vertices. □

Additional Remarks: The Weak Parallelism Theorem allows one to define an operation on derivations which shifts the application of a certain production one step towards the beginning of the derivation. Iterated applications of this so-called shift operation lead to a derivation in which each production is applied as early as possible. This maximally parallel derivation is introduced in [16] as the canonical derivation. Another fundamental theoretical problem addressed in [16] is that of abstract derivations. It was shown that for each derivation there is a unique abstract canonical derivation.
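The effect of Example 6 above can be replayed in a small executable model (a sketch under strong simplifying assumptions: graphs are bare node sets without edges, and spo_step with its "old"/"new" tagging is an invented encoding of the pushout construction, not the chapter's definition):

```python
def spo_step(L, p, R, m, G):
    """One SPO direct derivation on node sets: pushout of the partial
    morphism p: L -> R (a dict; undefined = delete) and the match m: L -> G."""
    deleted = {m[x] for x in L if x not in p}     # explicitly deleted nodes
    kept = {v for v in G if v not in deleted}     # on conflicts, deletion wins
    co_match = {}                                 # the co-match m*: R -> H
    for x, r in p.items():
        if m[x] in kept:
            co_match[r] = ("old", m[x])
    H = {("old", v) for v in kept}
    for r in R:                                   # R-nodes without a preserved
        if r not in co_match:                     # preimage are added afresh
            co_match[r] = ("new", r)
            H.add(("new", r))
    return H

# the parallel production ∅ + ∅ of Example 6: both copies delete their left-hand
# vertex; the match identifies the two left-hand vertices in the single node 'g'
L = {(1, "v"), (2, "v")}
R = {(1, "w"), (2, "w")}
p = {}                                            # undefined everywhere
m = {(1, "v"): "g", (2, "v"): "g"}
X = spo_step(L, p, R, m, {"g"})
assert len(X) == 2     # one identified vertex deleted, two fresh ones generated
```

The run confirms the observation of the example: since left-hand and right-hand vertices are formally unrelated, the identified vertex is deleted once but regenerated twice.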
See also [2] for a detailed discussion of this topic.

In the SPO-approach, the results of this section have essentially been formulated in [8,5]. Originally, however, these problems were investigated in the DPO-approach; see Sections I.4 and I.5. The differences and similarities of the corresponding results are discussed in some depth in Section 6.

In [17] a new notion of a concurrent derivation captures the idea of a concurrent history. A complementary notion of a morphism describes a concurrent-subhistory relation; it is based on causal rather than sequential dependencies between activities. This leads to a category of abstract concurrent derivations, taken as the concurrency semantics of an SPO graph grammar. In addition, an interleaving semantics is proposed, given by a subcategory of abstract concurrent derivations, partially ordered by a sequential-subcomputation relation. The concurrency semantics is characterized as the configuration domain of a prime event structure. The explicit consideration of infinite derivations leads to a notion of a fair derivation and a corresponding fair concurrency semantics.

3.2 Embedding of Derivations and Derived Productions

In this section we answer the question under which conditions a derivation can be embedded into a larger context (cf. Problem I.3). To this end, we first have to formalize the notion of "embedding". Below, an embedding is defined as a family of compatible injections between the graphs of the original derivation and the graphs of the embedded one.

Definition 17 (embedding). Given derivations ρ = (G0 =p1,m0=> ... =pk,mk-1=> Gk) and σ = (X0 =p1,n0=> ... =pk,nk-1=> Xk), an embedding of ρ into σ is a family of total injective morphisms e = (ei: Gi -> Xi) for i ∈ {0,...,k} such that ei ∘ mi = ni for all i ∈ {0,...,k-1}; see also the diagram below. The embedding of ρ into σ is denoted by e: ρ -> σ, and the first injection e0: G0 -> X0 is called the embedding morphism of e. □

[Diagram: the productions p1,...,pk applied along G0, G1, ..., Gk with matches mi, co-matches mi*, and the injections e0, e1, ..., ek into X0, X1, ..., Xk.]

Given a derivation ρ, a possible embedding e of ρ into a derivation σ is completely determined by the embedding morphism e0. The existence of the embedding e: ρ ->
σ can therefore be characterized in terms of the derivation ρ and the embedding morphism e0. In the following, a derived production will be constructed which simulates the effect of the derivation ρ, such that the applicability of this derived production at the match e0 is equivalent to the embedding e of ρ into σ induced by e0. First we consider the basic case of a direct derivation, leading to the notion of a directly derived production:

Figure 11: Horizontal and sequential composition of direct derivations, and a non-sequentializable derived derivation

Definition 18 (directly derived production). The directly derived production <d>: G -> H of a direct derivation d = (G =p,m=> H) is composed of the production name <d> and the partial morphism p*, i.e., the co-production of the direct derivation. A direct derivation X =<d>,e=> Z using <d>: p* is called a directly derived derivation. □

The following equivalence, relating directly derived derivations to embeddings of the original derivation, is also known as the vertical composition property of derivation diagrams.

Proposition 19 (directly derived production). Given the directly derived production <d>: p* of a direct derivation d = (G =p,m=> H) as in Figure 11 on the left, the following statements are equivalent:
1. There is a directly derived derivation d' = (X =<d>,e=> Z).
2. There is a direct derivation d'' = (X =p,n=> Z) and an embedding <e,e*>: d -> d''.
Up to isomorphism, a bijective correspondence between 1. and 2. is given by n = e ∘ m.

Proof. The directly derived derivation in 1. is represented by the pushout diagrams (1) and (2) on the left of Figure 11. Due to well-known pushout composition properties, diagram (1+2) is a pushout, too, which represents the direct derivation in 2. Vice versa, we can reconstruct the pushouts (1) and (2) from (1+2) by taking the pushout (1) of p and m and the pushout (2) of p* and e. Uniqueness of pushouts up to isomorphism ensures the bijective correspondence between 1. and 2. □

Now we want to extend this equivalence to derivations ρ = (G0 =p1,m0=> ... =pk,mk-1=> Gk) with k > 1. To this end we introduce the notion of "sequential composition" of two productions, in order to obtain a single derived production <ρ> as the composition of the sequence of its directly derived productions <d1>, <d2>, ..., <dk>.
We speak of sequential composition because the application of such a composed production has the same effect as a sequential derivation based on the original productions.

Definition 20 (sequential composition). Given two productions p1: L1 -> R1 and p2: L2 -> R2 with R1 = L2, the sequentially composed production p1;p2: L1 -> R2 consists of the production name p1;p2 and the associated partial morphism p = p2 ∘ p1. □

Derivations G =p1,m1=> H1 =p2,m2=> H2, where the match m2 of the second direct derivation is the co-match of the first, may now be realized in one step using the sequentially composed production p1;p2. Vice versa, each direct derivation G =p1;p2,m1=> H2 via p1;p2 can be decomposed into a derivation G =p1,m1=> H1 =p2,m1*=> H2, provided that the co-match m1* of the first direct derivation is total. In the proposition below, this is ensured by the assumption that m1 is conflict-free w.r.t. p1.

Proposition 21 (sequential composition). Given two productions p1 and p2 and their sequential composition p1;p2: p as above, the following statements are equivalent:
1. There is a direct derivation G =p1;p2,m1=> H2 where m1 is conflict-free w.r.t. p1.
2. There is a derivation G =p1,m1=> H1 =p2,m1*=> H2 such that m1* is the co-match of m1 w.r.t. p1.
Up to isomorphism, a bijective correspondence between 1. and 2. above is given by p = p2 ∘ p1.

Proof. By the properties of pushouts in GraphP (see Lemma 8), the co-match m1* is total if and only if m1 is conflict-free w.r.t. p1. Then Proposition 21 follows from pushout composition and decomposition properties, similarly to Proposition 19. □

Combining directly derived productions by sequential composition, we now define derived productions for derivations of length greater than one:

Definition 22 (derived production). Let ρ = (G0 =p1,m0=> ... =pk,mk-1=> Gk) be a derivation consisting of direct derivations di = (Gi-1 =pi,mi-1=> Gi), and let <di>: pi* be the corresponding directly derived productions. Then the derived production <ρ>: G0 -> Gk of ρ is given by the production name <ρ> = <d1>; ...; <dk> and the associated partial morphism p* = pk* ∘ ... ∘ p1*. A direct derivation K =<ρ>,e=> Y using <ρ>: p* is called a derived derivation. □

Each derivation sequence may be shortcut by a corresponding derived derivation. Vice versa, each derived derivation based on a d-injective match corresponds to an embedding of the original derivation. The following theorem provides the solution to the Derived Production Problem I.4.

Theorem 23 (derived production). Let ρ = (G0 =p1,m0=> ... =pk,mk-1=> Gk) be a derivation, <ρ>: p* the corresponding derived production, and e0: G0 -> X0 a d-injective match for <ρ>. Then the following statements are equivalent:
1. There is a derived derivation X0 =<ρ>,e0=> Xk.
2. There is a derivation σ = (X0 =p1,n0=> ... =pk,nk-1=> Xk) and an embedding e: ρ -> σ, where e0 is the embedding morphism of e.
Up to isomorphism, a bijective correspondence between 1. and 2. above is given by p* = pk* ∘ ... ∘ p1* and ei ∘ mi = ni for i ∈ {0,...,k-1}.

Proof. By induction over the length k of ρ: Let k = 1. Then <ρ> is a directly derived production, and Theorem 23 follows from Proposition 19. Assume that 1. and 2. are equivalent for derivations ρ of length k = l. Let ρ' = (G0 =p1,m0=> ... =pl,ml-1=> Gl =pl+1,ml=> Gl+1) be a derivation of length k = l + 1, ρ the prefix of ρ' consisting of the first l direct derivations, and dl+1 = (Gl =pl+1,ml=> Gl+1) the last direct derivation of ρ'. Then there is a derivation σ' and an embedding e': ρ' -> σ' iff there are embeddings e: ρ -> σ and <el, el+1>: dl+1 -> d'l+1, where el is also the last injection of e (cf. Definition 17). We show that the embeddings e and <el, el+1> exist iff there are corresponding derived derivations using the derived productions <ρ>: G0 -> Gl of the derivation ρ and <dl+1>: p of the direct derivation dl+1. Then Theorem 23 follows from Proposition 21, using the fact that, since e0 is injective, it is in particular conflict-free w.r.t.
<ρ>.

By the induction assumption, there is a derived derivation X0 =<ρ>,e0=> Xl as in 1. iff there is an embedding e: ρ -> σ into a derivation σ = (X0 =p1,n0=> ... =pl,nl-1=> Xl) as in 2. Using Proposition 19, the same equivalence holds between directly derived derivations Xl =<dl+1>,el=> Xl+1 and direct derivations Xl =pl+1,nl=> Xl+1 which are embeddings of dl+1. This concludes the proof of Theorem 23. □

The following example, taken from [8], illustrates that a derived derivation based on a non-d-injective match may not be sequentializable.

Example 7 (non-sequentializable derived derivation). Consider the right diagram of Figure 11, where the very same production is used twice for generating the derived production. The original production deletes one vertex and generates a new one. The derived production specifies the deletion of two vertices and the generation of two others. But mapping the two vertices to be deleted onto a single one leads to the generation of two vertices out of one. This is due to the fact that there is no direct connection between deleted and generated items, which in this situation means that, indeed, two vertices instead of one are newly added. □

Let us now come back to Problem I.3, which asked for an embedding of a derivation into a larger context. This question is answered using the equivalence above: a derivation ρ may be embedded via an embedding morphism e0 if and only if the corresponding derived production <ρ>: p* is applicable at e0. Since in the SPO-approach there is no further condition for the application of a production but the existence of a match, this implies that there is an embedding e: ρ -> σ of the derivation ρ via each embedding morphism e0.

Theorem 24 (embedding). Let ρ = (G0 =p1,m0=> ... =pk,mk-1=> Gk) be a derivation and e0: G0 -> X0 an embedding morphism. Then there is a derivation σ = (X0 =p1,n0=> ... =pk,nk-1=> Xk) and an embedding e: ρ -> σ.

Proof. Let <ρ>: p* be the derived production of ρ and X0 =<ρ>,e0=> Xk the derived derivation at the embedding morphism e0: G0 -> X0. Since e0 is injective, it is in particular d-injective. By Theorem 23 this implies the existence of the derivation σ and the embedding e: ρ -> σ. □

Additional Remarks: A derived production represents the overall effect of the derivation it is constructed from, and fixes the interaction of the productions applied in the derivation. This representation, however, can still be reduced by cutting off the unnecessary context, i.e., all those elements of the start and end graphs of the derivation which are never touched by any of the productions. Such minimal derived productions (minimal w.r.t. the embedding relation) have been introduced in the SPO-approach in [5] for two-step derivations, and generalized in [18] to derivations of length k >= 1.

3.3 Amalgamation and Distribution

In this section we investigate the simultaneous application of graph productions which shall work not concurrently (as in a parallel derivation) but cooperatively, synchronizing their applications w.r.t. commonly accessed elements in the graph (cf. Section I.2). The synchronization of productions w.r.t.

certain deleted and/or generated elements is expressed by a common subproduction in the definition below.

Definition 25 (subproduction, synchronized productions). Let pi: Li -> Ri be three productions for i = 0, 1, 2. We say that p0 is a subproduction of p1 if there is an embedding in1 = <in1L, in1R>: p0 -> p1, i.e., a pair of total graph morphisms in1L: L0 -> L1 and in1R: R0 -> R1 such that in1R ∘ p0 = p1 ∘ in1L. The productions p1 and p2 are synchronized w.r.t. p0, denoted for short by p1 <-in1- p0 -in2-> p2, if p0 is a subproduction of both p1 and p2 with embeddings in1 and in2, respectively. □

If synchronized productions p1 <-in1- p0 -in2-> p2 shall be applied simultaneously to one global graph, this may be modeled by the application of an amalgamated production which is constructed as the gluing of p1 and p2 along p0. Formally, such a gluing is described by a pushout construction. Since the amalgamated production is a composed production, its production name has to record this construction. So, similarly to the parallel production, the name p1 +_p0 p2 of an amalgamated production is a whole diagram comprising the given synchronized productions p1 <-in1- p0 -in2-> p2 and their embeddings into the production morphism of the amalgamated production.

Definition 26 (amalgamated productions and derivations). Let p1 <-in1- p0 -in2-> p2 be synchronized productions with pi: Li -> Ri for i ∈ {0, 1, 2}. Then the amalgamated production p1 +_p0 p2: L -> R consists of the production name p1 +_p0 p2 and the associated partial morphism p. The production name p1 +_p0 p2 is constructed on the left of Figure 12, where <in2L*, in1L*> and <in2R*, in1R*> are pushouts of <in1L, in2L> and <in1R, in2R>, respectively, and p: L -> R is obtained as the universal morphism satisfying p ∘ in1L* = in1R* ∘ p1 and p ∘ in2L* = in2R* ∘ p2. A direct derivation G =p1+_p0 p2,m=> X using the amalgamated production p1 +_p0 p2 is called an amalgamated derivation. By referring to the morphisms m1 = m ∘ in2L* and m2 = m ∘ in1L* we also write G =p1+_p0 p2,m1+m2=> X. □

A distributed graph models a state which is split into substates related by common interface states. Gluing the local states along their interfaces yields the global state again. Below, these ideas are formalized for two local states related by one interface.

Figure 12: Amalgamated production (left) and amalgamated derivation (right)

Definition 27 (distributed graph). A distributed graph DG = (G1 <-g1- G0 -g2-> G2) consists of two local graphs G1 and G2, an interface graph G0, and graph morphisms g1 and g2 embedding the interface graph into the two local graphs. The global graph G = G1 +_G0 G2 of DG is defined as the pushout object of g1 and g2. The distributed graph DG is a total splitting of G if the graph morphisms g1 and g2 are total. In general, DG is called a partial splitting. □

A truly partial splitting models an inconsistent distributed state where, for example, there are dangling references between the local substates. In such situations there is no general agreement whether a certain item belongs to the corresponding global state or not. Constructing the global graph as the pushout object of the splitting, this question is decided in favor of deletion, that is, conflicting items are removed from the global state.

Distributed graphs can be transformed by synchronized productions, which specify a simultaneous update of all local graphs.

Definition 28 (distributed derivation). Let DG = (G1 <-g1- G0 -g2-> G2) be a distributed graph and mi: Li -> Gi for i ∈ {0, 1, 2} be matches for synchronized productions p1 <-in1- p0 -in2-> p2 as in Figure 13 on the left. The matches (mi) for i ∈ {0, 1, 2} form a distributed match for p1 <-in1- p0 -in2-> p2 into DG if gk ∘ m0 = mk ∘ inkL for k ∈ {1, 2}. In this case, the distributed derivation d1||d0 d2: DG => DH with DH = (H1 <-h1- H0 -h2-> H2) is constructed by the local direct derivations d1 = (G1 =p1,m1=> H1) and d2 = (G2 =p2,m2=> H2) and the interface derivation d0 = (G0 =p0,m0=> H0). The partial morphisms h1 and h2 are induced by the universal property of the pushout <p0*, m0*> of <p0, m0> such that the left and bottom squares of the right diagram of Figure 13 commute. If g1 and g2 as well as h1 and h2 are total, we speak of a synchronous distributed derivation d1||d0 d2. □

A distributed graph becomes inconsistent (i.e., a partial splitting) if we delete an item in a local graph which has a preimage in the interface that is not deleted. This situation can be characterized in set-theoretical terms:

Characterization 29 (synchronous distributed derivation). Let di = (Gi =pi,mi=> Hi) for i ∈ {0, 1, 2} be direct derivations at conflict-free matches. A distributed derivation d1||d0 d2: DG => DH as in Figure 13 is synchronous if and only if DG is a total splitting and, for k ∈ {1, 2}, y ∈ G0 with gk(y) ∈ mk(Lk - dom(pk)) implies y ∈ m0(L0 - dom(p0)).

Proof sketch. The structures mk(Lk - dom(pk)) for k = 1, 2 and m0(L0 - dom(p0)) consist of those elements of Gk and G0 which are explicitly deleted by the direct derivations dk and d0, respectively. The distributed derivation d1||d0 d2 is synchronous if the morphisms hk in the right diagram of Figure 13 are total. We sketch the "if" part of the proof of Characterization 29; for a complete proof the reader is referred to [19].

Let x ∈ H0. Since p0* and m0* are pushout morphisms, they are jointly surjective by Lemma 8, i.e., x has a preimage y ∈ G0 or z ∈ R0. In the first case, gk(y) ∈ Gk exists since gk is total. Since y is preserved by p0*, it is not in m0(L0 - dom(p0)). Hence gk(y) ∉ mk(Lk - dom(pk)), which implies by some further arguments that gk(y) ∈ dom(pk*), i.e., pk*(gk(y)) is defined. By commutativity of the bottom square this implies that hk(p0*(y)) = hk(x) is defined as well. If x has a preimage z ∈ R0, then mk*(inkR(z)) is defined, since mk* is total by conflict-freeness of mk (see Lemma 8).
By commutativity of the left square this implies that hk(m0*(z)) = hk(x) is defined as well. □

Finally, we investigate the relationship between distributed and amalgamated derivations. Amalgamating a distributed derivation means constructing a global observation of the local actions. This is possible only if there is a consistent global view, at least of the given distributed state DG. Hence DG is assumed to be a total splitting in the Distribution Theorem below. On the other hand, distributing an amalgamated derivation means splitting a global action into local ones. Therefore, the matches of the elementary productions have to be compatible with the splitting of the global graph. The Distribution Theorem below provides the solution to Problem I.5.

Figure 13: A distributed derivation d1||d0 d2: DG -> DH with d0 = (G0 =p0,m0=> H0) and dk = (Gk =pk,mk=> Hk) for k ∈ {1, 2}

Theorem 30 (distribution). Let DG = (G1 <-g1- G0 -g2-> G2) be a total splitting of its global graph G = G1 +_G0 G2, let p1 <-in1- p0 -in2-> p2 be synchronized productions, and let p1 +_p0 p2: L -> R be their amalgamated production. Then the following statements are equivalent:
1. There is an amalgamated derivation G =p1+_p0 p2,m=> H and a distributed match (mi) for i ∈ {0, 1, 2} for p1 <-in1- p0 -in2-> p2 into DG which is compatible with m, i.e., g2* ∘ m1 = m ∘ in2L* and g1* ∘ m2 = m ∘ in1L*.
2. There is a distributed derivation d1||d0 d2: DG => DH with di = (Gi =pi,mi=> Hi) for i ∈ {0, 1, 2}.

Proof sketch. The proof of this theorem in [20] uses the 4-cube lemma presented in [21], which is valid in every category and can be derived as a special case of the commutativity of colimits. □

Additional Remarks: Amalgamated derivations are introduced in the SPO-approach in [5], while corresponding concepts in the DPO-approach have been developed in [22]. The Amalgamation Theorem in [5] is concerned with the sequentialization of an amalgamated derivation. It is based on the notion of a remainder (production), which can roughly be considered as that part of a given production which is not covered by its subproduction. A suitably given amalgamated derivation can then be simulated by a derivation consisting of an application of the common subproduction, followed by a parallel application of the remainders [5]. The concept of amalgamated productions and derivations was generalized to cases including more than two elementary productions. Corresponding developments in [19] were motivated by the idea of specifying derivations in which a variable number of mutually interacting productions must be synchronized.
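The gluing of local graphs along an interface graph, as in Definition 27, can be sketched as a pushout on node sets (an illustrative model only: the helper glue and its pair encoding are assumptions of this sketch, and g1, g2 are taken to be total and injective for simplicity):

```python
def glue(G0, g1, G1, g2, G2):
    """Global graph G1 +_G0 G2: disjoint union of G1 and G2 with g1(x)
    and g2(x) identified for every interface node x in G0."""
    # pick the G1-side copy as the canonical representative of each glued pair
    rep = {(2, g2[x]): (1, g1[x]) for x in G0}
    h1 = {n: (1, n) for n in G1}                   # pushout morphism g2*: G1 -> G
    h2 = {n: rep.get((2, n), (2, n)) for n in G2}  # pushout morphism g1*: G2 -> G
    G = set(h1.values()) | set(h2.values())
    return G, h1, h2

# two local graphs sharing a single interface node
G0 = {"i"}
g1, g2 = {"i": "a"}, {"i": "x"}
G, h1, h2 = glue(G0, g1, {"a", "b"}, g2, {"x", "y"})

assert len(G) == 3            # a and x are glued; b and y stay apart
assert h1["a"] == h2["x"]     # the pushout square commutes on the interface
```

With a non-injective g2 the representative map would need a union-find pass over the identifications; the sketch sidesteps this by the stated assumption.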

Distributed graphs and derivations in the SPO-approach are introduced in [20], where the Distribution Theorem is also formulated. The comparison of several kinds of global and (synchronous and asynchronous) distributed derivations led to a hierarchy theorem for distributed derivations. Splitting a state into two substates with one interface is, of course, a very basic kind of distribution. In [19] this is generalized to arbitrarily many local states, pairwise related by interfaces. Even more general topologies are considered in [23,24], in the DPO-approach.

4 Application Conditions in the SPO-Approach

Using the rule-based formalism introduced so far, we may easily and intuitively describe how given graphs shall be transformed into derived graphs. For specifying when these transformations should occur, however, we are restricted to positive application conditions concerning the existence of certain nodes and edges, which can be specified within the left-hand sides of the productions. In this section we introduce the possibility of specifying also negative application conditions for each particular production, and we extend the results of Section 3.1 concerning independence and parallelism to productions and derivations with application conditions.

4.1 Negative Application Conditions

We use the model of the Pacman game to motivate the use of negative application conditions.

Example 8. Recall the production moveP from Figure 1. As an effect of this production, Pacman moves to a new field without taking into account whether this field is dangerous for him or not. If Pacman moves to a field where a ghost is waiting, he may be killed in the next derivation step. An intelligent player of the Pacman game would like to apply the moveP production only if there is no ghost on the field Pacman is moving to. Hence we have a negative application condition for our production. The production ImoveP, which models an intelligent move of Pacman, is depicted as the upper left production of Figure 14. The forbidden context, i.e., the ghost with the edge pointing to the field Pacman wants to move to, is enclosed by a dotted line and crossed out, in order to denote that these elements must not exist if the production shall be applied. Analogously, one could think of an intelligent behavior of the ghosts. If they want to kill Pacman, they have to occupy as many fields as possible. Hence a ghost should only be moved to a field on which there is not already another ghost (see production ImoveG in the upper right part of Figure 14).

LImoveP LImoveG

ImoveP ImoveGFigure 14: Intelligent moving of Pacman and ghosts.By now we have shown that negative application conditions can be used toinclude rather simple strategies of the game in our model. Graph transform-ations are by default non-deterministic. But for our model it is desirable torestrict this non-determinism, i.e. to have priorities on the productions. Mov-ing has a lower priority than eating an apple for Pacman or than killing Pacmanfor a ghost. These priorities can also be coded into negative application condi-tions for the move productions, such that they are only applicable if the eat resp.kill production is not applicable to Pacman resp. the same ghost (see lower partof Figure 14). Note that intelligent moving with lower priority has a negativeapplication condition consisting of two forbidden contexts, called constraints inthe following, one taking care of the �eld to move to and one making sure thatmoving is of lower priority. These application conditions with more than oneconstraint are satis�ed if each of the constraints is satis�ed. utThe general idea is to have a left-hand side not only consisting of one graphbut of several ones, connected by morphisms L l�! ^L, called constraints, withthe original left-hand L. For each constraint, ^L� l(L) represents the forbiddenstructure, i.e., the dotted bordered part of the example productions. A matchsatis�es a constraint if it cannot be extended to the forbidden graph ^L, i.e.,if the additional elements are not present in the context of the match. If aconstraint is non-injective and surjective, i.e., the forbidden structure ^L� l(L)is empty but items of L are identi�ed by l, it can only be satis�ed by a match notidentifying these items. Thus, negative constraints can also express injectivityrequirements for the match. A match satis�es a negative application conditionif it satis�es all constraints the application condition consists of.De�nition31 (application conditions).1. 
A negative application condition, or application condition for short, over a graph L is a finite set A of total morphisms l : L → L̂, called constraints.

2. A total graph morphism m : L → G satisfies a constraint l : L → L̂, written m ⊨ l, if there is no total morphism n : L̂ → G such that n ∘ l = m.

[Figure 15: Formal representation of the conditional production LImoveP]

A match m satisfies an application condition A over L, written m ⊨ A, if it satisfies all constraints l ∈ A.

3. An application condition A is said to be consistent if there is a graph G and a total morphism m : L → G such that m satisfies A.

4. A production with application condition p̂ : (p : L → R, A(p)), or conditional production for short, is composed of a production name p̂ and a pair consisting of a partial graph morphism p and an application condition A(p) over L. It is applicable to a graph G at a match m : L → G if m satisfies A(p). In this case, the direct derivation G ⇒p,m H is called a direct conditional derivation G ⇒p̂,m H. □

Example 9. In Figure 15 we show the formal representation of the production LImoveP : (moveP, {lp, int}) of Figure 14, consisting of the unconditional production moveP of Figure 1 and two constraints. Morphisms are denoted by numbers at corresponding nodes. The constraint int states that there must not be a ghost at the field Pacman wants to move to, and lp ensures that there is no apple at Pacman's current position. Accordingly, the match m for moveP satisfies lp, since we cannot extend m to L̂lp, while m does not satisfy int. Hence m does not satisfy {lp, int}. If, however, Pacman moves to the left, i.e., vertex 2 of L is mapped to vertex 6 in G, int is satisfied as well and the production can be applied. □

It is possible to define application conditions which cannot be satisfied by any match, i.e., the corresponding conditional production is never applicable. This is due to the fact that contradictions may appear between the positive requirements of the production's left-hand side and the negative constraints. A trivial example of such a contradiction is a constraint which is an isomorphism. More generally, a contradiction occurs if the forbidden items of L̂ − l(L) can be mapped onto the required ones in L, i.e., a total morphism n : L̂ → L exists, extending the identity on L.
Hence, a negative application condition A is consistent in the sense of Definition 31.3 (i.e., free of these contradictions) iff the match id : L → L satisfies A.

Lemma 32 (consistency of application conditions). An application condition A over L is consistent if and only if it is satisfied by idL.

Proof. If A is satisfied by idL, A is consistent by Definition 31. Vice versa, let idL not satisfy A. Then there are a constraint l : L → L̂ in A and a total morphism n : L̂ → L such that n ∘ l = idL. Since for any given match m : L → G we have that m = m ∘ idL, this implies that (m ∘ n) ∘ l = m, i.e., m does not satisfy the constraint l ∈ A, and hence not A. □

Even more powerful application conditions, which can express, for example, cardinality restrictions on the number of incoming edges of a given node, are obtained if the forbidden morphisms n : L̂ → G of Definition 31.2 are required to be injective. These application conditions with injective satisfaction, as they are called in [3], may, for example, express the gluing condition of the DPO approach (see Proposition I.9), consisting of the dangling condition and the identification condition: For a given production p : L → R we let

- the identification condition of p be the set IC(p) of all total surjective morphisms l, except isomorphisms, such that for all l we have l(x) = l(y) for some x, y ∈ L with x ≠ y and x ∉ dom(p), and
- the dangling condition of p be the set DC(p) of all total morphisms l : L → L̂ such that l is surjective up to an edge e (and possibly a node) with s(e) or t(e) in l(L − dom(p)).

Now a match m satisfies the gluing condition if and only if it injectively satisfies the application conditions IC(p) and DC(p), i.e., iff there is no total injective morphism n : L̂ → G for any constraint l : L → L̂ in IC(p) ∪ DC(p) satisfying n ∘ l = m.
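Definition 31.2 and Lemma 32 are directly executable on small graphs: satisfaction of a constraint is the non-existence of a commuting extension, and consistency is satisfaction by the identity match. The following sketch checks this by brute-force search; the dict-based graph encoding and all names are illustrative assumptions, with morphisms acting on nodes only for simplicity.

```python
from itertools import product

def is_morphism(n, g_edges, h_edges):
    """A total node map n is structure-preserving if it sends every edge
    of the source graph to an edge of the target graph."""
    return all((n[u], n[v]) in h_edges for (u, v) in g_edges)

def satisfies(m, l, L, Lhat, G):
    """Definition 31.2: m |= l iff there is NO total morphism n : Lhat -> G
    with n . l = m.  Graphs are pairs (nodes, edges of node pairs)."""
    lhat_nodes, lhat_edges = Lhat
    g_nodes, g_edges = G
    fixed = {}
    for x in L[0]:
        # commutativity n . l = m already fixes n on the image of l
        if fixed.setdefault(l[x], m[x]) != m[x]:
            return True            # no commuting n can exist at all
    free = [v for v in lhat_nodes if v not in fixed]
    for image in product(sorted(g_nodes), repeat=len(free)):
        n = {**fixed, **dict(zip(free, image))}
        if is_morphism(n, lhat_edges, g_edges):
            return False           # forbidden extension found
    return True

def consistent(A, L):
    """Lemma 32: A is consistent iff it is satisfied by the identity on L."""
    id_L = {x: x for x in L[0]}
    return all(satisfies(id_L, l, L, Lhat, L) for (l, Lhat) in A)
```

For a one-node left-hand side with the constraint "no outgoing edge", a match landing on a node with an outgoing edge violates the constraint, while the identity match satisfies it; a constraint that is an isomorphism is inconsistent, as noted in the text.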
Using this equivalence, DPO derivations can be characterized by conditional SPO derivations.

4.2 Independence and Parallelism of Conditional Derivations

Following the line of [3], we extend the results of Section 3.1 on independence and parallelism to conditional derivations. Thereby, we provide a solution to

[Figure 16: Parallel and sequential independence (example).]

the Local Church-Rosser Problem I.1 and the Parallelism Problem I.2 for the SPO-approach with application conditions.

Interleaving

According to Section 3, two derivations d1 and d2 are considered to be parallel independent if they may occur in any order with the same result. For unconditional derivations this is mainly a problem of deletion, i.e., none of the two derivations should delete something that is needed for the match of its alternative. Taking into account negative application conditions specifying forbidden application contexts, we have to ensure that, in addition, no such forbidden context is established.

Definition 33 (parallel independence). Let d1 = (G ⇒p̂1,m1 H1) and d2 = (G ⇒p̂2,m2 H2) be two direct conditional derivations using p̂1 : (p1, A(p1)) and p̂2 : (p2, A(p2)), respectively. Then we say that d2 is weakly parallel independent of d1 if m2′ = r1* ∘ m2 : L2 → H1 is a match for p2 that satisfies the application condition A(p2) of p̂2 (see left diagram of Figure 17). Direct conditional derivations d1 and d2 are parallel independent if they are mutually weakly parallel independent. □

Example 10. Let (1) and (2) in the left side of Figure 16 be applications of the productions moveP and moveG of Figure 1, respectively. Then these unconditional direct derivations are parallel independent. If we apply the conditional production ImoveP of Figure 14 instead of moveP, preventing Pacman from approaching the ghost, (1) is not weakly parallel independent of (2). □

A derivation d2′ is sequentially independent of its predecessor d1 if d2′ may occur also alternatively to, or even before, d1. In Section 3 this is shown to be the case if the application context of d2′ has already been present before the occurrence of d1. In the conditional case we additionally require that no forbidden context of d2′ has been destroyed by d1.

Definition 34 (sequential independence).
Let d1 = (G ⇒p̂1,m1 H1) and d2′ = (H1 ⇒p̂2,m2′ X) be two direct conditional derivations using p̂1 : (p1, A(p1)) and p̂2 : (p2, A(p2)), respectively. Then we say that d2′ is weakly sequentially independent of d1 if m2 = (r1*)⁻¹ ∘ m2′ : L2 → G is a match for p2 that satisfies the application condition A(p2) of p̂2 (see right diagram in Figure 17). Let d2 = (G ⇒p̂2,m2 H2) be the corresponding direct derivation. If, additionally, d1 is weakly parallel independent of d2, the derivation sequence (G ⇒p̂1,m1 H1 ⇒p̂2,m2′ X) is called sequentially independent. □

Example 11. In the right side of Figure 16 a sequentially independent derivation sequence using kill and moveG of Figure 1 is shown. If, however, (2) results from an application of LImoveG of Figure 14, (2) is not independent of (1), because the ghost has to kill Pacman before leaving his field. □

The following theorem is a straightforward generalization of the Local Church-Rosser Theorem of Section 3 to conditional derivations. It provides the solution to the Local Church-Rosser Problem I.1 for SPO-derivations with application conditions.

Theorem 35 (conditional local Church-Rosser). Let d1 = (G ⇒p̂1,m1 H1) and d2 = (G ⇒p̂2,m2 H2) be two direct conditional derivations. Then the following statements are equivalent:

1. d1 and d2 are parallel independent.
2. There are direct conditional derivations H1 ⇒p̂2,m2′ X and H2 ⇒p̂1,m1′ X such that G ⇒p̂1,m1 H1 ⇒p̂2,m2′ X and G ⇒p̂2,m2 H2 ⇒p̂1,m1′ X are sequentially independent conditional derivations.

Up to isomorphism, a bijective correspondence between 1 and 2 above is given by m2′ = r1* ∘ m2 and m1′ = r2* ∘ m1.

Proof. Directly from Definitions 33 and 34 and the Local Church-Rosser Theorem 14. □

Explicit Parallelism

Now we consider the construction of a parallel conditional production from two conditional elementary productions. Starting with the parallel production p1 + p2 defined in Definition 15, we have to find a suitable application condition for p1 + p2.
For unconditional single-pushout derivations, the applicability of

[Figure 17: Independence of conditional derivations.]

the parallel production is (trivially) equivalent to the applicability of the elementary productions at their corresponding matches. Generalizing this, we have to construct A(p1 + p2) as the conjunction of A(p1) and A(p2).

If two application conditions are defined over the same left-hand side, their conjunction is simply given by their union. In our case, however, A(p1) and A(p2) are application conditions over L1 and L2, respectively. Thus the problem remains how to extend the application conditions over L1 and L2 to the larger graph L1 + L2. Below, the extension of a constraint l along a total morphism m is defined by the pushout of l and m.

Definition 36 (extension). If m : L → G is a total morphism and l : L → L̂ a constraint, the extension m#(l) of l along m is given by the pushout diagram (1) in the left-hand side of Figure 18. The extension of an application condition A over L is defined by m#(A) = {m#(l) | l ∈ A}. □

Proposition 37 (extension). Let m : L → G be a total morphism and A an application condition over L. Then, for all matches e : G → K, e ⊨ m#(A) iff e ∘ m ⊨ A.

Proof. We show that for each l ∈ A we have e ⊨ m#(l) iff e ∘ m ⊨ l. Assume n such that (2) in Figure 18 commutes. Then n′ = n ∘ m̂ satisfies n′ ∘ l = e ∘ m by commutativity of (1) and (2). Vice versa, let n′ : L̂ → K be given with n′ ∘ l = e ∘ m. Then n with n ∘ g = e exists by the universal property of (1). □

Now we define the parallel conditional production p̂1 + p̂2 as the parallel production of the underlying productions p1 and p2 of p̂1 and p̂2, equipped with the extensions of the application conditions A(p1) and A(p2) of the component productions along in1L and in2L, respectively.
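Definition 36 constructs the extension m#(l) as a pushout. For finite sets, and componentwise on vertices and edges for graphs, this gluing can be sketched with a small union-find; the function and its encoding of equivalence classes are illustrative assumptions, not taken from the chapter.

```python
def pushout(l, m, Lhat, G):
    """Pushout of  Lhat <--l-- L --m--> G  in Set: glue Lhat and G along L.
    l and m are dicts defined on the same key set L; Lhat and G are sets.
    Returns the pushout object P and the induced maps g : G -> P and
    mhat : Lhat -> P, which satisfy g . m = mhat . l."""
    # tag elements so that Lhat and G are kept disjoint before gluing
    parent = {('Lhat', a): ('Lhat', a) for a in Lhat}
    parent.update({('G', b): ('G', b) for b in G})

    def find(x):                      # union-find without path compression
        while parent[x] != x:
            x = parent[x]
        return x

    for x in l:                       # identify l(x) with m(x) for each x in L
        parent[find(('Lhat', l[x]))] = find(('G', m[x]))

    g = {b: find(('G', b)) for b in G}
    mhat = {a: find(('Lhat', a)) for a in Lhat}
    P = set(g.values()) | set(mhat.values())
    return P, g, mhat
```

The extended constraint m#(l) is then the induced map g : G → P; by Proposition 37, checking e ⊨ m#(l) over G reduces to checking e ∘ m ⊨ l over L.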

[Figure 18: Extension of constraints and construction of a parallel conditional production]

Definition 38 (parallel conditional production). Let p̂1 : (r1 : L1 → R1, A(p1)) and p̂2 : (r2 : L2 → R2, A(p2)) be conditional productions and p1 + p2 : L → R the parallel production of p1 and p2 according to Definition 15. Then the parallel conditional production p̂1 + p̂2 : (p, A(p)) is composed of the production name p̂1 + p̂2 and the pair (p, A(p)), where A(p) = in1L#(A(p1)) ∪ in2L#(A(p2)). A conditional derivation G ⇒p̂1+p̂2,m X using p̂1 + p̂2 is called a parallel conditional derivation. □

Example 12. The construction above does not guarantee that the application condition A(p1 + p2) of p̂1 + p̂2 is consistent, even if both A(p1) and A(p2) are. In the right side of Figure 18 the production p̂1 : (p1, {l1}) adds a loop to a node if there isn't already one. The production p̂2 : (p2, {l2}) inserts a node in an empty graph. The application condition of the parallel production p̂1 + p̂2 : (p1 + p2, {l1*, l2*}) is not consistent (compare Lemma 32), because we can reverse l2* by identifying the two nodes. On the other hand, there is no graph to which we can apply both p̂1 and p̂2, i.e., their application domains are disjoint. That this is no mere coincidence is shown by the following proposition. □

Proposition 39 (applicability of parallel production). Let p̂1, p̂2 and p̂1 + p̂2 be given as above, together with matches m1 : L1 → G and m2 : L2 → G for p1 and p2 into G, and let m1 + m2 : L1 + L2 → G be the parallel match for p1 + p2. Then m1 + m2 ⊨ A(p1 + p2) if and only if m1 ⊨ A(p1) and m2 ⊨ A(p2).

Proof. mk = (m1 + m2) ∘ inkL for k = 1, 2 by the universal property of L1 + L2. Then Proposition 39 is a direct consequence of Proposition 37. □

Since we can check consistency of application conditions by Lemma 32, this provides us with a method to decide whether a parallel production makes sense or not.
Furthermore, we may now extend the Weak Parallelism Theorem 16 to conditional derivations, which solves the Parallelism Problem I.2 for conditional derivations.

[Figure 19: Parallel production of ImoveP and ImoveG.]

Theorem 40 (conditional weak parallelism). Given conditional productions p̂1 and p̂2, the following two statements are equivalent:

1. There is a direct conditional parallel derivation G ⇒p̂1+p̂2,m1+m2 X such that G ⇒p̂2,m2 H2 is weakly parallel independent of G ⇒p̂1,m1 H1.
2. There is a conditional derivation G ⇒p̂1,m1 H1 ⇒p̂2,m2′ X, where H1 ⇒p̂2,m2′ X is weakly sequentially independent of G ⇒p̂1,m1 H1.

Up to isomorphism, a unique correspondence between 1 and 2 above is given by m2′ = r1* ∘ m2.

Proof. Directly from Proposition 39, Definitions 33 and 34, and the Parallelism Theorem 16. □

Example 13. In the left side of Figure 19 the parallel production of ImoveP and ImoveG is shown, modeling a simultaneous move of Pacman and one of the ghosts. Its graphical representation is given on the right. Applying this production to the graph in the upper left of Figure 16, we obtain an example of a parallel conditional derivation that cannot be sequentialized, because the alternative derivations using ImoveG and ImoveP (denoted by (1) and (2) in the left side of Figure 16) are not parallel independent (cf. Example 10). □

Additional Remarks: The results of this section have been obtained in [3]. In the case of application conditions with injective satisfaction (cf. Section 4), similar results are possible. Moreover, most of the remaining results of Section 3 have already been extended to such application conditions in [25,26]. In addition to the left-sided negative application conditions of Definition 31, also right-sided application conditions and so-called propositional application conditions, i.e., propositional expressions over constraints, are considered in [26]. This is even further generalized in [27] and [28] by conditional application conditions.

Another interesting line of research is the generative power of graph grammars with (different kinds of) conditional productions.
In [3] it has been shown that context-free graph grammars with positive and/or negative application conditions are strictly more powerful than unconditional ones. Similar investigations can be found in [27] for graph grammars without nonterminal labels.

5 Transformation of More General Structures in the SPO-Approach

Until now we have presented SPO transformations and corresponding results based on labeled graphs. In this section we will show that the SPO approach is not restricted to this kind of graphs, but is also applicable to more sophisticated structures. We will basically consider two kinds of extensions: a more powerful labeling concept, leading to the concept of attributed graphs in Section 5.1, and more complex graphical structures, leading to the concept of graph structures in Section 5.2. By a generalization of graph structures we will then be able to cope with these two different extensions within the same framework. This framework, introduced in [6,17], opens new possibilities for defining graphs. (For example, instead of sets and labeling functions we can use graphs and labeling graph morphisms; this idea was worked out in [29] for the definition of class-based graph grammars.) Finally, in Section 5.3 we review high-level replacement systems: a general axiomatic framework for transformation systems and their properties.

5.1 Attributed Graphs

The existence of labels allows us to distinguish vertices or edges within a graph according to their associated label. Thus, labels provide a very basic type concept for graphs. Usually in a software system the types of elements do not change during the execution of the system, and therefore the treatment of labels presented in the last sections (where label morphisms were identities) is very reasonable. Nevertheless, often a situation occurs in which the labels (types) of the elements in a graph are not relevant for the application of a production.
In this case labels should take the role of parameters (generic types). But actually, the strict typing through labels presented so far requires a number of productions (one for each possible combination of labels) to describe this situation. For practical reasons this is not adequate; a higher-level kind of typing concept is needed. From now on we will refer to this kind of high-level types as attributes. The basic idea of attributes in graph transformations is

to allow the handling of labels abstractly. This is realized by the presence of corresponding variables and a concept of assignment of these variables in an actual situation. In particular, the concept of attributes includes the presence of operations on these sets (of attributes).

Attributes are used in all graph grammar proposals for software engineering, since they integrate structural (i.e., graphical) aspects of a system with data-type aspects (i.e., calculation of values). This leads to compact descriptions in which, e.g., well-known arithmetic operations need not artificially be coded into graphical structures. Similar concepts combining structural and algebraic aspects can be found in the theory of attributed string grammars [30], in algebraic high-level Petri nets [31,32,33], which are a combination of Petri nets and algebraic specifications, and in the specification language LOTOS [34], which integrates CCS-like specifications for the structural part with algebraic specifications for the data type component.

In the SPO-approach, attributes have been integrated in [4] (see [35] for a corresponding extension of the DPO-approach). This integration preserves the fundamental derivation concept of the algebraic approach, i.e., both the manipulation of the graphical structure and the calculation of the new attributes are combined within a (single) pushout construction. In [4] attributes were specified using algebraic specifications in the sense of [36]. Algebraic specifications provide a well-established formalism for treating data types, variables, evaluations and substitutions. Moreover, not only a set of types is available as attributes, but we can make use of the operations of the specification in order to indicate abstractly relationships between types. The proposal for the integration of attributes into graph transformation reviewed here is a simplification of the approach in [4].
It has already been used in [37].

Before introducing formally the concept of an attributed graph, we need to introduce some basic notions of universal algebra. A signature Sig = (S, OP) consists of a set S of sorts and a family OP = (OPw,s)w∈S*,s∈S of sets of operation symbols. For op ∈ OPw,s, we also write op : w → s. A Sig-algebra A is an S-indexed family (As)s∈S of carrier sets together with an OP-indexed family of mappings (opA)op∈OP such that opA : As1 × … × Asn → As if op ∈ OPs1…sn,s. If w = s1…sn ∈ S*, we sometimes write Aw for As1 × … × Asn. A Sig-homomorphism f : A → B between two Sig-algebras A and B is a sort-indexed family of total mappings f = (fs : As → Bs)s∈S such that opB(f(x)) = f(opA(x)) for all x ∈ Aw and op ∈ OPw,s. The category Alg(Sig) has as objects all Sig-algebras and as morphisms all total homomorphisms between them. It is well-known that Alg(Sig) has all colimits. U : Alg(Sig) → SetP is the functor assigning to each Sig-algebra A the disjoint union of its carrier sets As, and to each homomorphism f the disjoint union of the total functions fs, for all s ∈ S.

Attributes are labels of graphical objects taken from an attribute algebra. Hence, an attributed graph consists of a (labeled) graph and an attribute algebra, together with some attribute functions connecting the graphical and the algebraic part.

Definition 41 (attributed graph). Given a label alphabet L and a signature Sig = (S, OP), AG = (AGV, AGE, sAG, tAG, lvAG, leAG, AGA, avAG, aeAG) is a Sig-attributed graph, where

i) AGG = (AGV, AGE, sAG, tAG, lvAG, leAG) is an L-labeled graph with labeling functions lvAG, leAG for vertices and edges (cf. Definition I.6),
ii) AGA is a Sig-algebra,
iii) avAG : AGV → U(AGA) and aeAG : AGE →
U(AGA) are the vertex and edge attributing functions.

A (Sig-attributed graph) morphism between two Sig-attributed graphs AGi = (AGiV, AGiE, sAGi, tAGi, lvAGi, leAGi, AGiA, avAGi, aeAGi), for i = 1, 2, is a pair f = (fG, fA) where fG = (fV, fE) is a partial graph morphism and fA is a total algebra homomorphism such that U(fA)(avAG1(v)) = avAG2(fV(v)) for all v ∈ dom(fV) and U(fA)(aeAG1(e)) = aeAG2(fE(e)) for all e ∈ dom(fE); f is total (injective) if fG and fA are total (injective). □

Proposition 42 (category AGraphP). Sig-attributed graphs and Sig-attributed graph morphisms form a category, called AGraphP.

Proof. The proof is based on the fact that the composition of morphisms is well-defined. □

For the definition of direct derivations of attributed graphs as single-pushout constructions, it is essential that AGraphP has pushouts. The construction of pushouts in AGraphP can be seen as a special case of [4]: pushouts are constructed componentwise in GraphP and Alg(Sig). Actually, the category AGraphP has not only pushouts but all colimits (due to the fact that coproducts in AGraphP can be constructed componentwise in Set). As most of the results presented in Section 3 are based on colimit constructions, they should carry over to the AGraphP setting. For a proof of the following theorem we refer to [4].

Theorem 43. The category AGraphP has pushouts. □

Productions, grammars, matches, and (direct) derivations using attributed graphs are defined analogously to Definitions 2 and 6 in Section 2, by

[Figure 20: Counting Pacman's apples.]

replacing the category GraphP of graphs and partial graph morphisms by some category AGraphP of attributed graphs.

Below we extend the Pacman example (Example 1 of Section 2), using attributes for counting the apples Pacman has eaten.

Example 14 (attributed graph transformation). For the graphical part AGG we use the same labeled graphs as in Example 1. The signature of natural numbers

Signature Nat:
  sorts  nat
  opns   0 : → nat
         succ : nat → nat
         + : nat × nat → nat

is used as the signature for the attribute algebras AGA.

Attributes in productions are usually taken from "syntactical" (term) algebras. Figure 20 shows the attributed graph production eat(n), where the left- and right-hand side graphs are attributed over the term algebra TNat(X) with variables in X = {n}. The graphical part is similar to production eat in Figure 1. On the left-hand side, Pacman is attributed with the variable n, representing the number of apples Pacman has eaten before the application of the production. On the right-hand side, n is replaced by succ(n), i.e., the number of apples increases by one.

For the graphs to be rewritten, we fix some "semantical algebra": the algebra IN of natural numbers. As a transformation of labeled graphs does not change the label set, this algebra is preserved by an attributed graph transformation as well. Figure 20 shows an application of the production eat(n) to a graph representing a state where Pacman has already eaten four apples and is about to eat the fifth one.d The graphical part is matched as usual.

d Since the compatibility conditions for attributed graph morphisms (cf. Definition 41) do not allow to change attributes, carrier loops have to be introduced at attributed vertices, which are deleted and re-generated each time an attribute is modified. In order to simplify the presentation, these carrier loops are often omitted in the drawings.
Then we have to find an assignment for the variable n which is compatible with the matching of the graphical part, i.e., n is mapped to 4. The derived graph is constructed componentwise by taking the pushout in GraphP for the graphical part, and by extending the assignment n ↦ 4 to succ(n), leading to succ(n) ↦ 5. In this way, attributes in the derived graphs are calculated from attributes in the given graphs. □

5.2 Graph Structures and Generalized Graph Structures

In order to be more flexible in system modeling, often hypergraphs are used instead of graphs [38,39,40]; see also [41] in this handbook.

Definition 44 (hypergraph). A hypergraph G = (V, E, s, t) consists of a set of vertices V, a set of edges E, and two total mappings s, t : E → V* from the set of edges into the free monoid over the set of vertices, which provide each hyperedge e ∈ E with a sequence of n source and m target vertices s(e) = v1…vn and t(e) = w1…wm, respectively. □

Since in the algebraic approaches graphs are considered as algebras w.r.t. a certain signature, hypergraphs are also defined in this way. The signature for hypergraphs is given below, where hyperedges are sorted according to their numbers n of source and m of target vertices.

Signature HSig:
  sorts  V, (En,m)n,m∈IN
  opns   (s1, …, sn, t1, …, tm : En,m → V)n,m∈IN

Example 15 (hypergraph). In the following picture a concrete hypergraph over this signature is shown, having three non-empty carrier sets, namely GV = {v1, v2, v3}, GE1,1 = {e1,1}, and GE1,2 = {e1,2}.

[Picture: the hypergraph G with vertices v1, v2, v3, a hyperedge e1,1 of sort E1,1, and a hyperedge e1,2 of sort E1,2.] □
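A hypergraph in the sense of Definition 44 can be rendered directly as a data structure in which s and t assign to each hyperedge a tuple of vertices, i.e., an element of the free monoid V*. In the sketch below, the class and the concrete source/target sequences chosen for e1,1 and e1,2 are illustrative assumptions; the carrier sets match Example 15, but the picture's exact incidence is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Hypergraph:
    """Definition 44: s, t map each hyperedge to a sequence of vertices."""
    V: set
    E: set
    s: dict   # E -> V*, sequences encoded as tuples of vertices
    t: dict

    def sort_of(self, e):
        """The sort E_{n,m} of a hyperedge: n sources and m targets."""
        return (len(self.s[e]), len(self.t[e]))

# Carrier sets as in Example 15; the incidence is an assumed example.
G = Hypergraph(
    V={'v1', 'v2', 'v3'},
    E={'e11', 'e12'},
    s={'e11': ('v1',), 'e12': ('v1',)},
    t={'e11': ('v2',), 'e12': ('v2', 'v3')},
)
```

The sorting of hyperedges by (n, m) is recovered here from the lengths of the source and target sequences, mirroring the indexed sorts En,m of HSig.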

Definition 45 (graph structure signature, graph structure). A graph structure signature GS is a signature which contains unary operation symbols only. A graph structure is a GS-algebra. □

Most of the results presented in Section 3 have originally been elaborated for graph structures [8,5], which include not only graphs and hypergraphs but also hierarchical graphs and higher-order graphs (having edges between edges).

An even larger framework is obtained by the concept of generalized graph structures [17]. In order to explain this, let us reconsider Definition 44, where a hypergraph is given by two carrier sets GV, GE and two operations sG, tG : GE → GV*. Such an algebra cannot be defined directly as a graph structure, since the mappings sG, tG are not between the carriers but from the set of edges GE to the free monoid GV* over the set of vertices GV. This has led to the infinite graph structure signature for hypergraphs above. Using generalized graph structures instead, we may realize the hypergraphs of Definition 44 directly: a hypergraph is an algebra G = (GV, GE, sG, tG : FE(GE) → FV(GV)), where GV and GE are sets of vertices and hyperedges, and sG, tG are operations between sets derived from the carrier sets by application of the functors FE and FV, associated with the sort symbols E and V, respectively. The functor FE is the identity functor, i.e., FE(GE) = GE, and the functor FV maps every set of vertices to its free monoid, i.e., FV(GV) = GV*.

On morphisms (partial functions) these functors are defined as follows: FE is the identity, i.e., FE(fE) = fE. FV(fV) = fV* is defined pointwise: for each sequence of vertices l = v1v2…vn ∈ GV*, fV*(l) = fV(v1)fV(v2)…fV(vn) ∈ HV* if fV(vi) is defined for all i = 1…n, and otherwise it is undefined.

This allows us to define GGS-morphisms. Figure 21 depicts a partial morphism f = (fE, fV) : G → H between generalized graph structures G = (GV, GE, sG, tG) and H = (HV, HE, sH, tH).
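The pointwise extension fV* of a partial vertex map to vertex sequences can be sketched as follows; encoding "undefined" as None and the function names are assumptions of this sketch, not notation from the chapter.

```python
def star(f_V):
    """Extend a partial vertex map (a dict) to sequences in V*:
    the extension is defined on a sequence iff the map is defined
    on every vertex occurring in it."""
    def f_star(seq):
        if all(v in f_V for v in seq):
            return tuple(f_V[v] for v in seq)
        return None   # undefined: some vertex has no image
    return f_star

f_V = {'v1': 'w1', 'v2': 'w2'}       # partial: undefined on v3
f_star = star(f_V)
```

Note that the empty sequence (the unit of the free monoid) is always mapped to the empty sequence, as required for a monoid homomorphism on the defined part.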
It consists of a pair of mappings fE : GE → HE and fV : GV → HV between the edges and vertices of G and H, respectively, such that the top diagram commutes for each operation symbol, i.e., fV* ∘ sG ∘ fE? = sH ∘ fE! and fV* ∘ tG ∘ fE? = tH ∘ fE!, where fE? : dom(fE) → GE and fE! : dom(fE) → HE form the span representation of fE.

Thus generalized graph structures may be described as comma categories w.r.t. certain signatures, where the compatibility requirement on morphisms has been relaxed in order to allow partial morphisms ('weak homomorphisms'). Analogously to comma categories, colimits in GGS-categories can be constructed componentwise, followed, in the GGS case, by a free construction ('totalization').

[Figure 21: Definition of a partial GGS morphism f : G → H]

The concept of GGS provides a constructive approach for generating internally structured categories of graph-like structures and partial morphisms from simpler ones. Among the large number of examples there are labeled and unlabeled graphs, hypergraphs, attributed graphs,e attributed and labeled hypergraphs, hierarchical graphs, graph-interpreted (or typed) graphs, graph grammars, Petri nets, and Algebraic High-Level (AHL) nets. Properties of these categories, like cocompleteness, can be inferred from properties of the components and the way in which they are composed.

5.3 High-Level Replacement Systems

The formal investigation of graph grammars based on different kinds of graphs has often led to similar results based on similar proofs. This fact gave rise to the question whether it is possible to 'reuse' the proofs from one kind of graph grammar to obtain analogous results for another kind. The definition of High-Level Replacement Systems [7], shortly HLR systems, was the answer to this question. They provide an abstract framework in which not graphs, but objects of an arbitrary (instance) category are transformed (generalizing the ideas of graph grammars to arbitrary categorical grammars). Obviously, not every instance category gives rise to the same results; they depend very much on the properties of these categories. Hence, due to its categorical nature, the focus of HLR systems is not on the structure of objects (as in the GGS

e Recall that attributed graphs cannot be seen as graph structures, due to the fact that arbitrary signatures, like those of booleans and natural numbers, may have non-unary operations.

approach) but on the properties of its instance categories. Minimal conditions are extracted which allow for definitions and theorems concerning, for example, parallelism, embedding, or amalgamation, and in this way such theorems can be proven for the largest possible class of instance categories. Originally, HLR systems were defined following the DPO approach [42]. Here we present the basic definitions of HLR systems of type SPO, developed in [7]. In the following, HLR systems are assumed to be of type SPO.

The basic assumption of HLR systems is that the structures we want to transform belong to some category Cat. The theory of HLR systems defines the basic concepts of production, derivation and grammar in a generic (categorical) way: a production is a morphism in Cat, a match belongs to a distinguished class of morphisms, and direct derivations are given by pushouts. For example, if Cat = GraphP, we obtain the SPO framework as introduced in Section 2.

Definition 46 (HLR system). Let Cat be a category and Match be a subcategory of Cat such that Match and Cat coincide on objects.

1. A production r : L → R is a Cat-morphism.
2. An object G can be directly derived to an object H using a production r : L → R if there is a pushout (1) in Cat

       L ---r---> R
       |          |
       m          m*
       v          v
       G ---r*--> H        (1)

   such that m is a match, i.e., it belongs to Match.
3. A derivation is a sequence of direct derivations.
4. An HLR system HLRS = (I, P, T) over Cat is given by a start object I, a set of productions P, and a class of terminal objects T ⊆ Cat. □

The HLR framework covers several concrete rewrite formalisms, which are obtained by choosing a concrete category Cat of structures satisfying some basic conditions. The following conditions ensure that the well-known parallelism results are valid in the instance category Cat.

Definition 47 (SPO conditions and categories). The following conditions (1)-(4) are called SPO-conditions for parallelism of HLR systems.
A category Cat together with a subcategory Match as given in Definition 46 is called an SPO-category if these conditions are satisfied.

1. Existence of pushouts with Match morphisms, i.e., Cat has pushouts if at least one morphism is in Match.

2. Match has finite coproducts, which are preserved by the inclusion functor I : Match -> Cat.

3. Prefix closure of coproducts.^f

4. Prefix closure of Match, i.e., if f ∘ g ∈ Match then g ∈ Match. □

Examples of SPO-categories are GraphP, Alg(GS)P (with GS being a graph structure signature), and SPECP (the category of algebraic specifications and strict partial morphisms) [7]. The interpretation of productions and derivations depends on the instance category. For example, transformations of algebraic specifications may be interpreted as interconnections of modules in the context of modular system design [43]. In [7] it is shown that slightly different versions of the Local Church-Rosser Theorem 14 and the Parallelism Theorem 16 hold in HLR systems over SPO-categories, and it is an interesting topic for further research to generalize also the other results of Section 3 to the HLR framework.

6 Comparison of DPO- and SPO-Approach

Section I.2^g provided an informal introduction to the algebraic theory of graph rewriting, where many relevant concepts and results have been introduced in terms of problems. In this section the solutions to these problems, given throughout [2] and this chapter for the DPO and SPO approaches, respectively, are summarized and compared with each other.
Working out the similarities and the differences of the corresponding results will allow us to understand the relationships between the two algebraic approaches.

Since in Section I.2 the concepts are introduced independently of a particular approach, this provides us with an abstract terminology that can be considered as a common "signature" for (algebraic) graph rewriting approaches. This signature has sort symbols for graphs, productions, derivations, etc., and operation or predicate symbols for (direct) derivation, parallel and sequential composition, etc., and the various relations that may hold between

^f A morphism p : L -> P is called a prefix of r : L -> R if there is x : P -> R such that x ∘ p = r. This type of property, which makes reference to prefixes of productions and requires some closure properties for all prefixes, is typical for the theory of HLR systems. It is the device to control the interaction of "partiality" and "totality" of morphisms in Cat resp. Match (see [7] for more details).

^g Recall that numbers preceded by "I." (for Part I) refer to Chapter [2] in this handbook, on the DPO-approach.

productions or derivations, like the subproduction or the embedding relation. Most of the problems of Section I.2 ask for conditions, i.e., relations on derivations, ensuring that some construction is defined for these derivations. The parallelization condition of Problem I.2, for example, which is a unary relation on two-step sequential derivations, ensures that the synthesis construction is defined, leading to an equivalent direct parallel derivation.

In [2] and this chapter, the signature of Section I.2 is interpreted in the DPO- and SPO-approach, respectively, by providing a definition for its sorts, operation and predicate symbols in terms of the productions, derivations, constructions, etc., of the approach. Hence, the DPO- and the SPO-approach can be considered as models of the same signature. The interpretation of the parallelization condition of Problem I.2.2 in the DPO-approach, for example, is given by Definition I.13 of sequential independence, and the Synthesis Lemma I.20 shows that this interpretation is indeed a solution to Problem I.2.2. In a similar way, many of the definitions of [2] and this chapter can be seen as interpretations of the signature of Section I.2, and the corresponding results show that they satisfy the statements of the problems.

This view allows for two different ways of comparing the two algebraic approaches. The first one is w.r.t. their properties: A property of an approach is a statement using the abstract terminology of Section I.2 that is valid in this particular approach. The basic property of the SPO-approach, for example, is the completeness of its direct derivations, that is, given a match for a production there is always a corresponding direct derivation. DPO direct derivations are not complete because of the gluing condition. On the other hand, in the DPO-approach we have that direct derivations are invertible, which is not true in the SPO-approach.
Summarizing in this way the valid statements leads to a nice abstract characterization of the two algebraic approaches in terms of their properties.

It is well known that models of the same signature can be compared by homomorphisms. In our case the signature of Section I.2 defines a notion of "homomorphism" between graph rewriting approaches being interpretations of this signature. Such a homomorphism is given by mappings between the carriers, i.e., productions, matches, derivations, etc., which are compatible with the interpretation of the operation and predicate symbols, i.e., with the parallel or sequential composition of productions, for example. Thereby, it embeds one approach in the other approach, which is necessary in order to compare directly the concepts of the two approaches. Below we show how the DPO-approach may be embedded in the SPO-approach. This applies not only to the basic concepts, like productions and direct derivations, but also to the constructions and results of the theory, like the parallel production and the parallelism theorem. Besides the direct comparison of corresponding concepts, such an embedding can be used in several other ways. On the one hand, we may transfer theoretical results, concerning for example analysis techniques for graph grammars, between the two approaches. On the other hand, it shows that we may simulate the constructions of the DPO approach within the SPO approach, which may be quite useful if we aim at a common implementation for the algebraic approaches. Finally, if we restrict the SPO approach by suitable application conditions (as introduced, for example, in Section 4), we may use the idea of a homomorphism in order to show the equivalence of the DPO-approach and the correspondingly restricted SPO-approach.

The section is organized like Section I.2.
In each of the following sections we summarize the solutions to the problems of Section I.2, state the corresponding properties of the approaches, and discuss the embedding of the DPO- in the SPO-approach w.r.t. the concepts of this section.

6.1 Graphs, Productions, and Derivations

According to Section I.2.1, each approach to graph rewriting is distinguished by three characteristics: its notion of a graph, the conditions under which a production may be applied, and the way the result of such an application is constructed. These characteristics define what is called a direct derivation. The DPO- and SPO-approach use the same kind of graphs (see Definition I.6). The productions of the two approaches are essentially the same except for differences in the representation: A DPO-production L <-l- K -r-> R is a span of total injective graph morphisms (compare Definition I.7). An SPO-production is instead a partial graph morphism L -p-> R, i.e., a total graph morphism from some subgraph dom(p) of L to R (cf. Definitions 2 and 1). Both concepts of production have been extended by a name, which is used for identifying the production and for storing its internal structure (in case of a composed production).

Definition 48 (translation of SPO- and DPO-productions). Let p : (L -s-> R) be an SPO-production. Then D(p) : (L <-l- dom(s) -r-> R) denotes its translation to a DPO-production, where l is the inclusion of dom(s) in L and r is the domain restriction of s to dom(s). Conversely, let p : (L <-l- K -r-> R) be a DPO-production. Then S(p) : (L -s-> R) denotes its translation to an SPO-production, where dom(s) = l(K) and s = r ∘ l^-1. The partial morphism s is well-defined since l is supposed to be injective in Definition I.7. □
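On plain node sets, the two translations of Definition 48 can be sketched in Python; graphs are modeled as sets, morphisms as dicts, and everything here (including the example production) is an illustrative reconstruction, not code from the original text.

```python
# Sketch of Definition 48, assuming l is injective; nodes only, no edges.

def S(K, l, r):
    """Translate a DPO span (L <-l- K -r-> R) into the SPO partial map
    s : L -> R with dom(s) = l(K) and s = r . l^-1."""
    l_inv = {l[k]: k for k in K}          # well-defined because l is injective
    return {x: r[l_inv[x]] for x in l_inv}

def D(s):
    """Translate an SPO partial map s : L -> R back into a DPO span
    (L <- dom(s) -> R): l is the inclusion, r the restriction of s."""
    K = set(s)                            # dom(s), a subgraph of L
    l = {k: k for k in K}                 # inclusion dom(s) -> L
    r = dict(s)                           # domain restriction of s to dom(s)
    return K, l, r

# Example: a production that deletes node 0 of L = {0, 1} and maps 1 to 'b'.
K, l, r = {'k'}, {'k': 1}, {'k': 'b'}
s = S(K, l, r)
assert s == {1: 'b'}                      # node 0 has no image under s
assert S(*D(s)) == s                      # translating back and forth is stable
```

The round trip illustrates the remark following the definition: an SPO-production is a DPO-production whose interface is canonically represented as the subgraph dom(s) of L.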

Hence, SPO-productions can be considered as DPO-productions where the interface graph K is represented in a unique way as a subgraph of the left-hand side graph L. The mapping S of DPO-productions p to SPO-productions S(p) will be used throughout this section to translate the concepts and results of the DPO-approach to the corresponding ones of the SPO-approach. The most basic concept of each graph rewriting approach is that of a direct derivation. In fact, the mapping S translates each direct DPO-derivation to a direct SPO-derivation where the match is d-injective and d-complete (compare Definition 7).

Proposition 49 (translation of DPO-derivations). Let p : (L <-l- K -r-> R) be a DPO-production, S(p) : L -> R its translation to an SPO-production, and m : L -> G a match for p into a graph G. Then m is also a match for S(p), and there is a direct DPO-derivation d = (G =p,m=> H) if and only if there is a direct SPO-derivation S(d) = (G =S(p),m=> H) such that m is a d-injective and d-complete match for S(p). In this way, the mapping S is extended from productions to derivations.

Proof. See [5]. □

Not every direct derivation in the SPO-approach can be obtained from a DPO-derivation using the translation S. In contrast to DPO-derivations, a direct SPO-derivation G =p,m=> H always exists if there is a match m for a production p. This first and most important property distinguishing the DPO- and the SPO-approach is called completeness of direct derivations. Indeed, most of the differences between the DPO- and the SPO-approach are caused by the fact that SPO derivations are complete while DPO derivations are not.

Property 50 (completeness of SPO-derivations). Direct derivations in the SPO-approach are complete, i.e., for each production p : L -> R and each match m : L -> G for p into a graph G there is a direct derivation G =p,m=> H. □

Because of the gluing condition (see Proposition I.9), DPO derivations are not complete.
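To make the contrast concrete, consider a production that deletes one node, applied where that node still has an incident edge. The sketch below is our own minimal model (nodes and edge pairs), not an implementation from the text: the DPO dangling condition rejects the match, while the SPO derivation always exists and deletes the dangling edge as a side effect.

```python
def dangling_condition(G_edges, deleted_nodes):
    """Dangling part of the DPO gluing condition (injective match): no edge of
    G may be incident to a node the production deletes."""
    return all(s not in deleted_nodes and t not in deleted_nodes
               for (s, t) in G_edges)

def spo_delete(G_nodes, G_edges, deleted_nodes):
    """SPO direct derivation for a node-deleting production: always defined,
    and edges left dangling by the deletion are removed implicitly."""
    H_nodes = G_nodes - deleted_nodes
    H_edges = {(s, t) for (s, t) in G_edges if s in H_nodes and t in H_nodes}
    return H_nodes, H_edges

G_nodes, G_edges = {1, 2}, {(1, 2)}
assert not dangling_condition(G_edges, {1})               # DPO: match rejected
assert spo_delete(G_nodes, G_edges, {1}) == ({2}, set())  # SPO: edge deleted too
```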
In fact, being complete essentially means being free of any application conditions; that is, SPO derivations with application conditions as introduced in Section 4 are incomplete as well. The gluing condition, however, causes another interesting property of the DPO-approach, the invertibility of direct derivations.

Property 51 (invertibility of DPO-derivations). Direct derivations in the DPO-approach are invertible, i.e., for each direct derivation G =p,m=> H using production p : (L <-l- K -r-> R) there is an inverse derivation H =p^-1,m*=> G using the inverse production p^-1 : (R <-r- K -l-> L), where m* : R -> H is the co-match of G =p,m=> H.

Proof. See [44]. □

Constructing the inverse of a direct derivation can be seen as a kind of "undo". SPO direct derivations are not invertible in general since they may model implicit effects, like the deletion of dangling edges, which are not invertible. Because of these "side effects", SPO-derivations are more difficult to understand and to control than DPO-derivations and may be considered, in certain situations, as "unsafe". If direct derivations are invertible, there are no implicit effects. Hence, DPO derivations show a "safe" behavior that is easier to determine beforehand. In view of the overall complexity of the formalism, this is important if people from other areas are to use graph rewriting techniques for modeling their problems.

Most graph grammar approaches are not dedicated to a particular application area but may be considered as "general purpose" formalisms. For many applications, however, a tailored approach is more adequate, one which is as powerful as necessary and as simple as possible, in order to allow for a natural modeling of the problems. Then, standard application conditions like those in the DPO-approach are needed to restrict the expressiveness of the approach if necessary. Exceptions to these conditions, however, should be possible as well, if they are explicitly specified by the user.
The graph grammar based specification language PROGRES [45] (see also [46] in this handbook) provides an example of this concept, where e.g. injective matches are standard, but identification of vertices may be explicitly allowed.

It depends on the choice of the standard application conditions and of the possible exceptions which approach is the most adequate one for a particular application. A very general solution is provided by user-defined application conditions, as introduced in Section 4. In particular in the SPO setting, they complement in a nice way the generality of the pure approach.

6.2 Independence and Parallelism

Interleaving

The Local Church-Rosser Problem I.1 asked for two conditions formalizing the concept of concurrent direct derivations from two different points of view:

1. Two alternative direct derivations H1 <=p1,m1= G =p2,m2=> H2 are concurrent if they are not in conflict (parallel independence).

2. Two consecutive direct derivations G =p1,m1=> H1 =p2,m2'=> X are concurrent if they are not causally dependent, i.e., if there are also direct derivations G =p2,m2=> H2 =p1,m1'=> X (sequential independence).

For the DPO-approach these conditions are given by Definitions I.12 and I.13, and the Local Church-Rosser Theorem I.14 ensures that they indeed solve Problem I.1. The corresponding solution for the SPO-approach is provided by Definitions 9 and 11 together with Theorem 14.

Both solutions formalize the same intuitive idea. Due to the different representation of productions in the DPO and SPO approach, however, the corresponding conditions are stated in a different way. The following proposition shows that they are equivalent via the correspondence of productions and derivations established in Definition 48 and Proposition 49.

Proposition 52 (equivalence of DPO and SPO independence). Let p1 : (L1 <-l1- K1 -r1-> R1) and p2 : (L2 <-l2- K2 -r2-> R2) be two DPO-productions, and S(p1) : L1 -> R1 and S(p2) : L2 -> R2 their translations to SPO-productions. Two alternative direct DPO-derivations H1 <=p1,m1= G =p2,m2=> H2 are parallel independent in the sense of Definition I.12 if and only if the corresponding SPO-derivations H1 <=S(p1),m1= G =S(p2),m2=> H2 are parallel independent in the sense of Definition 9.

Two consecutive direct DPO-derivations G =p1,m1=> H1 =p2,m2'=> X are sequentially independent in the sense of Definition I.13 if and only if the corresponding SPO-derivations G =S(p1),m1=> H1 =S(p2),m2'=> X are sequentially independent in the sense of Definition 11.

Proof. According to Definition I.12, the direct DPO-derivations H1 <=p1,m1= G =p2,m2=> H2 are parallel independent if m1(L1) ∩ m2(L2) ⊆ m1(l1(K1)) ∩ m2(l2(K2)). Using the translation S to SPO-productions (see Definition 48), this is the case iff m1(L1) ∩ m2(L2) ⊆ m1(dom(S(p1))) ∩ m2(dom(S(p2))) holds for the corresponding SPO-derivations.
The two direct SPO-derivations H1 <=S(p1),m1= G =S(p2),m2=> H2 are parallel independent if m1(L1) ∩ m2(L2 - dom(S(p2))) = ∅ and m2(L2) ∩ m1(L1 - dom(S(p1))) = ∅, which is equivalent to m1(L1) ∩ m2(L2) ⊆ m2(dom(S(p2))) and m2(L2) ∩ m1(L1) ⊆ m1(dom(S(p1))), and hence to m1(L1) ∩ m2(L2) ⊆ m1(dom(S(p1))) ∩ m2(dom(S(p2))).

In a similar way one shows the equivalence of the notions of sequential independence in the DPO and SPO approach. □

In Section I.2.2, independent derivations have been interpreted as being concurrent in the interleaving model of concurrency. In this view, Proposition 52 states that both approaches allow for the same amount of "interleaving parallelism".

Recently, both algebraic approaches came up with a weaker, asymmetric notion of independence, ensuring that two alternative direct derivations can at least be sequentialized in one order. In the DPO-approach this concept has been introduced in [47] under the name "serializability", while in the SPO-approach we speak of "weak parallel independence" in Definition 9. In turn, the weak notion of sequential independence, introduced in Definition 11, ensures that the second direct derivation does not depend on the first one, that is, they may also occur in parallel.

Explicit Parallelism

In order to represent parallel computations in a more explicit way, Section I.2.2 assumed a parallel composition "+" of productions, leading to the notions of parallel production and derivation. In the DPO- and SPO-approaches this operator is interpreted as the coproduct (disjoint union) of productions, see Definition I.15^h and Definition 15, respectively. In both approaches, a direct parallel derivation is a direct derivation using the parallel production. With respect to these definitions, the Parallelism Problem I.2 asked for the relationship between parallel and sequential derivations.
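Before turning to that relationship, the equivalence stated in Proposition 52 can be checked on small examples. The sketch below is our own encoding (injective matches as dicts on node sets, productions represented only by their preserved items); it evaluates both independence conditions and observes that they agree.

```python
def dpo_parallel_independent(m1, m2, dom1, dom2):
    """Definition I.12 on nodes: the overlap of the matches must consist of
    items preserved by both productions; dom1 = l1(K1), dom2 = l2(K2)."""
    overlap = set(m1.values()) & set(m2.values())
    preserved = {m1[x] for x in dom1} & {m2[x] for x in dom2}
    return overlap <= preserved

def spo_parallel_independent(m1, m2, dom1, dom2):
    """Definition 9 on nodes: neither match may hit an item deleted by the
    other production; here domi stands for dom(S(pi))."""
    deleted1 = {m1[x] for x in m1 if x not in dom1}
    deleted2 = {m2[x] for x in m2 if x not in dom2}
    return not (set(m1.values()) & deleted2) and not (set(m2.values()) & deleted1)

m1, m2 = {'a': 1, 'b': 2}, {'c': 2, 'd': 3}

# The matches overlap only in item 2, preserved by both: independent either way.
assert dpo_parallel_independent(m1, m2, {'b'}, {'c'})
assert spo_parallel_independent(m1, m2, {'b'}, {'c'})

# If p2 deletes item 2, which p1 also needs, both notions report a conflict.
assert not dpo_parallel_independent(m1, m2, {'b'}, set())
assert not spo_parallel_independent(m1, m2, {'b'}, set())
```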
The basic requirement is, of course, that each two direct derivations which are concurrent in the interleaving model can be put in parallel using the explicit notion of parallelism. This property is called completeness of parallel derivations below. On the other hand, if explicit parallelism allows for exactly the same amount of concurrency as the interleaving model, we say that it is safe (w.r.t. interleaving parallelism). In this case, the effect of a parallel derivation can always be obtained by a sequential derivation, too. With the following DPO parallelism properties we summarize the DPO solution to the Parallelism Problem I.2 provided by the DPO Parallelism Theorem I.17.

Property 53 (parallelism properties of DPO). Parallel derivations in the DPO-approach satisfy the following completeness and safety properties:

Completeness: Each sequentially independent DPO-derivation G =p1,m1=> H1 =p2,m2'=> X satisfies the parallelization condition, i.e., there is an equivalent direct parallel DPO-derivation G =p1+p2,m=> X.

^h In fact, Definition I.15 introduces the parallel composition of an arbitrary, finite number of productions.

Safety: Each direct parallel DPO-derivation G =p1+p2,m=> X satisfies the sequentialization condition, i.e., there are equivalent, sequentially independent DPO-derivations G =p1,m1=> H1 =p2,m2'=> X and G =p2,m2=> H2 =p1,m1'=> X. □

Due to the completeness of SPO direct derivations, its direct parallel derivations are also complete, i.e., a parallel production p1+p2 : L1+L2 -> R1+R2 is applicable at any match m : L1+L2 -> G. Therefore, each two alternative direct derivations H1 <=p1,m1= G =p2,m2=> H2 can be composed to a direct parallel derivation G =p1+p2,m1+m2=> X, regardless of their independence. In particular, this allows for the parallel composition of weakly (parallel or sequentially) independent direct derivations using the same notion of parallel production. Hence, SPO parallel derivations are also complete w.r.t. weakly independent sequential derivations, which is shown by the implication from 1 to 2 in the Weak Parallelism Theorem 16. In contrast, in the DPO-approach a more complex, so-called synchronized (parallel) composition had to be developed in [47] in order to put two serializable (i.e., weakly parallel independent) direct derivations in parallel.

On the other hand, because of its generality, SPO parallelism is no longer safe w.r.t. sequential derivations, neither in the symmetric nor in the asymmetric version. We can, however, distinguish those direct parallel derivations which may be sequentialized in at least one order by considering the corresponding pair of alternative derivations, which is shown by the implication from 2 to 1 in the Weak Parallelism Theorem 16.

Finally, let us investigate the relationship between DPO and SPO parallelism using the translation S introduced in Section 6.1.
The first important observation is that this translation is compatible with parallel composition, that is, given DPO-productions p1 and p2, we have that S(p1) + S(p2) = S(p1 + p2).^i This gives us the possibility to compare in a more direct way the relationships between parallel and sequential derivations in the two approaches. The following proposition states that the mapping S preserves the sequentialization and the parallelization conditions and is compatible with the analysis and synthesis constructions.

Proposition 54 (compatibility of S with analysis and synthesis). Let d = (G =p1+p2,m=> X) be a direct parallel DPO-derivation, and ρ1 = (G =p1,m1=> H1 =p2,m2'=> X) and ρ2 = (G =p2,m2=> H2 =p1,m1'=> X) its sequentializations. Then S(d) = (G =S(p1)+S(p2),m=> X) may be sequentialized to S(ρ1) = (G =S(p1),m1=> H1 =S(p2),m2'=> X) and S(ρ2) = (G =S(p2),m2=> H2 =S(p1),m1'=> X).

Moreover, let ρ = (G =p1,m1=> H1 =p2,m2'=> X) be a sequentially independent DPO-derivation and d = (G =p1+p2,m=> X) the direct parallel DPO-derivation obtained by the synthesis construction. Then S(ρ) = (G =S(p1),m1=> H1 =S(p2),m2'=> X) is sequentially independent, and the corresponding direct parallel SPO-derivation is S(d) = (G =S(p1+p2),m=> X).

Proof. Let d be the direct parallel DPO-derivation above. Then there are parallel independent direct derivations H1 <=p1,m1= G =p2,m2=> H2. According to the analysis construction in the DPO-approach (cf. Lemma I.19), the match m2' of the second direct derivation in ρ1 is defined by m2' = r1* ∘ k1 in the diagram below, where p1* : (G <-l1*- D1 -r1*-> H1) is the co-production of G =p1,m1=> H1 and k1 exists by the above parallel independence s.t.

^i In fact, this is true only up to isomorphism, since there are infinitely many isomorphic parallel productions for the same two elementary productions. This problem can be solved by assuming a fixed coproduct construction (like, for example, the disjoint union), which induces a coproduct functor as shown in Appendix A.1 of [2].
subdiagram (1) commutes (cf. Definition I.12):

                    L2
              m2  /  | k1 \  m2'
                 v   v     v
        p1* : ( G <-l1*- D1 -r1*-> H1 )
                    (1)

Here subdiagram (1) is the left triangle, i.e., l1* ∘ k1 = m2. Now let S(p1*) be the translation of p1* to an SPO-production (cf. Definition 48). Then m2' = r1* ∘ k1 = r1* ∘ (l1*)^-1 ∘ m2 = S(p1*) ∘ m2, since k1 = (l1*)^-1 ∘ m2 by commutativity of (1) and S(p1*) = r1* ∘ (l1*)^-1 by Definition 48. But this is exactly the definition of the match of the second direct derivation in the SPO Weak Parallelism Theorem 16, i.e., G =S(p1),m1=> H1 =S(p2),m2'=> X is indeed a sequentialization of G =S(p1)+S(p2),m=> X.

The second part of Proposition 54 can be shown in a similar way by exchanging m2 and m2'. □

6.3 Embedding of Derivations and Derived Productions

Section I.2.3 introduced two problems, the Embedding Problem I.3 and the Derived Production Problem I.4, and it has been anticipated that the solution to

the first problem is based on the solution to the second. The Embedding Problem asked for a condition under which an embedding of a graph G0 via a morphism e0 : G0 -> X0 may be extended to a derivation ρ = (G0 =p1,m1=> ... =pk,mk=> Gk), leading to an embedded derivation σ = (X0 =p1,n1=> ... =pk,nk=> Xk). The idea was to represent the information relevant for the embedding of ρ by a derived production ⟨ρ⟩ : G0 ~> Gk such that there is an embedding of ρ via e0 if and only if the derived production ⟨ρ⟩ is applicable at this match. The corresponding constructions are given in Section I.6.1 for the DPO-approach and in Section 3.2 for the SPO-approach. In both cases, the derived production ⟨ρ⟩ is obtained as the sequential composition ⟨d1⟩ ; ... ; ⟨dk⟩ of the directly derived productions ⟨di⟩ : Gi-1 ~> Gi of the given derivation ρ. Then, horizontal and vertical composition properties of direct derivations ensure that the derived production is applicable at a given injective match if and only if the original derivation may be embedded along this morphism. This is formulated as the derived production property below; see Theorem I.44 and Theorem 23 for the corresponding statements in the DPO- and SPO-approach, respectively.

Property 55 (derived production). The DPO- and the SPO-approach satisfy the derived production property, that is, for each derivation ρ = (G0 =p1=> ... =pn=> Gn) and each injection e0 : G0 -> X0 there is a derived derivation X0 =⟨ρ⟩=> Xn using the derived production ⟨ρ⟩ : G0 ~> Gn if and only if there is an embedding e : ρ -> σ of ρ into a derivation σ = (X0 =p1=> ... =pn=> Xn) using the original sequence of productions p1, ..., pn. □

The embedding problem is now reduced to the question whether the derived production is applicable at the embedding morphism e0.
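Viewing SPO-productions as partial maps, the sequential composition used to build ⟨ρ⟩ = ⟨d1⟩ ; ... ; ⟨dk⟩ can be sketched on node sets; the data below are made up purely for illustration.

```python
def seq_compose(d1, d2):
    """Sequential composition d1 ; d2 of partial maps (dicts): defined exactly
    on the items whose image under d1 still lies in dom(d2)."""
    return {x: d2[d1[x]] for x in d1 if d1[x] in d2}

# <d1> : G0 ~> G1 deletes node 'b'; <d2> : G1 ~> G2 deletes the image of 'c'.
d1 = {'a': 'a1', 'c': 'c1'}          # 'b' has no image: deleted in step one
d2 = {'a1': 'a2'}                    # 'c1' has no image: deleted in step two
assert seq_compose(d1, d2) == {'a': 'a2'}   # only 'a' survives the derivation
```

The composed map records exactly the overall effect of the two-step derivation on the start graph, which is what the derived production needs for the embedding.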
In the DPO-approach this means that a derivation ρ may be embedded via e0 if this match satisfies the gluing condition of the derived production (which specializes to the dangling condition since e0 is supposed to be injective). Since in the SPO-approach there are no conditions for the applicability of a production other than the existence of a match, the embedding condition is satisfied for each derivation G0 =p1,m1=> ... =pk,mk=> Gk and each embedding morphism e0 : G0 -> X0. Therefore, we say that the embedding of SPO derivations is complete.

Property 56 (completeness of SPO embedding). The embedding of derivations in the SPO-approach is complete, i.e., for each derivation ρ = (G0 =p1,m1=> ... =pk,mk=> Gk) and each embedding morphism e0 : G0 -> X0 there is a derivation σ = (X0 =p1,n1=> ... =pk,nk=> Xk) and an embedding e : ρ -> σ. □

As anticipated above, the DPO-approach is not complete w.r.t. the embedding of derivations.

Finally, it is worth stressing that the translation S of productions and derivations from the DPO-approach to the SPO-approach is also compatible with the concepts of this section. For DPO-productions p1 and p2, for example, we have that S(p1 ; p2) = S(p1) ; S(p2), provided that the left-hand side is defined. As a consequence, we may show that S is compatible with the construction of derived productions, i.e., given a DPO-derivation ρ we have S(⟨ρ⟩) = ⟨S(ρ)⟩.

6.4 Amalgamation and Distribution

The concepts of amalgamated and distributed derivations are informally introduced in Section I.2.4. Their formalization in the DPO approach is given in Section I.6.2 of the same chapter, while the SPO variants are introduced in Section 3.3. The main idea is more or less the same for both approaches: Synchronization of productions p1 and p2 is modeled by a common subproduction p0, i.e., a production that is related to the elementary productions by compatible embeddings.
The application of two synchronized productions (to a global state) is modeled by applying their amalgamated production, that is, the gluing of the elementary productions along their common subproduction.

A distributed graph DG = (G1 <-g1- G0 -g2-> G2) models a state that is split into two local substates, related by a common interface state. Gluing the local graphs G1 and G2 along their interface G0 yields the global graph ⊕DG = G1 +_{G0} G2 of the system. The local states are consistent (with each other) if they form a total splitting of the global graph, i.e., if the interface embeddings g1 and g2 are total. The transformation of a distributed graph DG = (G1 <-g1- G0 -g2-> G2) is done by local direct derivations di = (Gi =pi,mi=> Hi) for i ∈ {0, 1, 2}, where p1 <-in1- p0 -in2-> p2 are synchronized productions and mi : Li -> Gi are compatible matches for the pi into the two local graphs and the interface graph. A distributed derivation is synchronous if the given and the derived distributed graphs are total splittings.

Distributed derivations in the DPO approach are synchronous by definition. The distributed gluing condition, which is used as a standard application condition for distributed derivations, ensures that no element in a local graph is deleted as long as it is referenced from the interface graph. This ensures that the derived distributed graph is again a total splitting. Hence, in the DPO approach, distributed derivations show the same safe behavior as "normal" direct derivations. The price we have to pay is a global application condition, which may not be easy to check in a real distributed system.

A distributed derivation in the SPO approach always exists if there are compatible matches for synchronized productions in a distributed graph. Hence, also in the SPO approach, the basic property of direct derivations (i.e., their completeness) is resembled by distributed derivations. In contrast to the DPO approach, a distributed SPO derivation needs no global application conditions.
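For consistent states (total interface embeddings), the global graph ⊕DG = G1 +_{G0} G2 is the pushout of g1 and g2. Below is a minimal sketch on node sets; the tagged disjoint union is our own choice of representation, not a construction from the text.

```python
def glue(G1, G2, g1, g2):
    """Global graph of DG = (G1 <-g1- G0 -g2-> G2) on node sets: the disjoint
    union of G1 and G2 with g1(x) and g2(x) identified for every x in G0
    (a pushout of sets, assuming g1 and g2 are total)."""
    shared = {g2[x]: g1[x] for x in g1}   # express interface items via G1
    return ({('G1', n) for n in G1}
            | {('G1', shared[n]) if n in shared else ('G2', n) for n in G2})

G1, G2 = {1, 2}, {10, 20}
g1, g2 = {'i': 1}, {'i': 10}              # interface node 'i': 1 and 10 coincide
assert glue(G1, G2, g1, g2) == {('G1', 1), ('G1', 2), ('G2', 20)}
```

The example shows a splitting with one shared item: the global graph has three nodes, with the interface node represented once.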

However, it may cause global effects: Deleting a vertex in a local graph which is referenced from other local components leads to partial interface embeddings. When constructing the corresponding global graph, all dangling references are deleted. It depends very much on the problem to be solved whether global conditions or global effects (i.e., DPO or SPO distributed derivations) are more appropriate. Relying on the distributed gluing condition certainly leads to a more abstract specification, which assumes that dangling references are avoided by some standard mechanism on a lower level. However, if the references themselves are the subject of the specification, as e.g. for garbage collection in distributed systems, we need to model explicitly the inconsistencies caused by the deletion of referenced items.

These observations are also reflected in the solutions to the Distribution Problem I.5. In the DPO approach, each distributed derivation satisfies the amalgamation condition, which means that each distributed computation can be observed from a global point of view. Vice versa, an amalgamated derivation can be distributed if its match can be split according to the splitting of the given graph and if this splitting satisfies the distributed gluing condition, see Theorem I.46. In the SPO approach, a distributed derivation may be amalgamated if at least the given distributed graph represents a consistent state. The distribution of an amalgamated derivation, on the other hand, requires only the existence of compatible matches. We do not state the corresponding properties here but refer to Theorem I.46 for the DPO approach and Theorem 30 for the SPO approach.

7 Conclusion

In [2] and this chapter we have presented the two algebraic approaches and compared them with each other.
The double-pushout (DPO) approach was historically the first and has a built-in application condition, called the gluing condition, which is important in several application areas in order to prevent undesirable possibilities for the application of productions. The single-pushout (SPO) approach allows productions to be applied without any application conditions, because the problem of dangling edges, for example, is solved by the deletion of these edges. This is adequate in some application areas and problematic in others. In general it seems to be important to allow user-defined application conditions for productions in order to prevent the application of productions in undesirable cases. In this chapter it is shown how to extend the SPO-approach to handle such user-defined application conditions, and in a similar way the DPO-approach could be extended.

In fact, the DPO-approach could be considered as a special case of the SPO-approach, because the SPO-approach with gluing condition is equivalent to the DPO-approach. However, as shown in the comparison of both approaches, they provide different solutions to the general problems stated in Section 2 of [2]. Moreover, it does not seem adequate to specialize all the results in the SPO-approach to the case with gluing condition, because the explicit proofs in the DPO-approach are much simpler in several cases. For this reason DPO and SPO should be considered as two different graph transformation approaches within the context of this handbook.

Finally, let us point out that most of the concepts and results presented in [2] and this chapter are concerned with a single graph transformation system. This might be called the "theory of graph transformation systems in the small", in contrast to structuring and refinement concepts combining and relating different graph transformation systems, which may form a "theory of graph transformation systems in the large".
The "theory in the large" is especially important if graph transformation concepts are used for the specification of concurrent, object-oriented and/or distributed systems. In fact, both algebraic approaches seem to be most suitable to handle such "problems in the large", and first attempts and results have been presented within the last couple of years (see [48,49,50,51,52]). Moreover, we believe that graph transformation in the large will be a main topic of research and development in the future.

Acknowledgment. The research results presented in this paper have been obtained within the ESPRIT Working Groups COMPUGRAPH (1989-1992) and COMPUGRAPH 2 (1992-1995). We are most grateful to Raoult and Kennaway for the initial motivation of the single-pushout approach and to the members of the working group for their contributions and stimulating discussions.

References

1. H. Ehrig, M. Pfender, and H. J. Schneider. Graph grammars: an algebraic approach. In 14th Annual IEEE Symposium on Switching and Automata Theory, pages 167-180, 1973.

2. A. Corradini, U. Montanari, F. Rossi, H. Ehrig, R. Heckel, and M. Löwe. Algebraic Approaches to Graph Transformation I: Basic Concepts and Double Pushout Approach. World Scientific, 1996. In this book.

3. A. Habel, R. Heckel, and G. Taentzer. Graph grammars with negative application conditions. Accepted for special issue of Fundamenta Informaticae, 1996.

4. M. Löwe, M. Korff, and A. Wagner. An algebraic framework for the transformation of attributed graphs. In M.R. Sleep, M.J. Plasmeijer, and

M.C. van Eekelen, editors, Term Graph Rewriting: Theory and Practice,chapter 14, pages 185{199. John Wiley & Sons Ltd, 1993.5. M. L�owe. Algebraic approach to single-pushout graph transformation.TCS, 109:181{224, 1993.6. M. Kor�. Single pushout transformations of generalized graph structures.Technical Report RP 220, Federal University of Rio Grande do Sul, PortoAlegre, Brazil, 1993.7. H. Ehrig and M. L�owe. Categorical principles, techniques and results forhigh-level replacement systems in computer science. Applied CategoricalStructures, 1(1):21{50, 1993.8. M. L�owe. Extended Algebraic Graph Transformations. PhD thesis, Tech-nical University of Berlin, 1990. short version in TCS (109):181 { 224.9. J. C. Raoult. On graph rewriting.Theoretical Computer Science, 32:1{24,1984.10. R. Kennaway. On \On graph rewriting". Theoretical Computer Science,52:37{58, 1987.11. J. Glauert, R. Kennaway, and R. Sleep. A categorical construction forgeneralised graph rewriting. Technical report, School of Information Sys-tems, University of East Anglia, Norwich NR4 7TJ, U.K., 1989.12. R. Kennaway. Graph rewriting in some categories of partial maps. InEhrig et al. [53], pages 475{489. Lecture Notes in Computer Science 532.13. E. Robinson and G. Rosolino. Categories of partial maps. Informationand Computation, 79:95 { 130, 1988.14. P.M. van den Broek. Algebraic graph rewriting using a single pushout.In Int. Joint Conf. on Theory and Practice of Software Development(TAPSOFT`91), LNCS 493, pages 90{102. Springer Verlag, 1991.15. H. Herrlich and G. Strecker. Category Theory. Allyn and Bacon,Rockleigh, New Jersey, 1973.16. M. L�owe and J. Dingel. Parallelism in single-pushout graph rewriting.Lecture Notes in Computer Science 776, pages 234{247, 1994.17. M. Kor�. Generalized graph structure grammars with applications to con-current object-oriented systems. PhD thesis, Technical University of Ber-lin, 1995.18. M. Kor�. 
Minimality of derived rules in single pushout graph rewriting.Technical Report 94/10, Technical University of Berlin, 1994.19. G. Taentzer. Towards synchronous and asynchronous graph transforma-tions. Accepted for special issue of Fundamenta Informaticae, 1996.20. H. Ehrig and M. L�owe. Parallel and distributed derivations in the singlepushout approach. Theoretical Computer Science, 109:123 { 143, 1993.Also in Tech. Rep. 91/01, Technical University of Berlin.5921. H. Ehrig and B. K. Rosen. Parallelism and concurrency of graph manip-ulations. Theoretical Computer Science, 11:247{275, 1980.22. P. B�ohm, H.-R. Fonio, and A. Habel. Amalgamation of graph transform-ations: a synchronization mechanism. Journal of Computer and SystemScience, 34:377{408, 1987.23. G. Taentzer. Hierarchically distributed graph transformation. In 5th Int.Workshop on Graph Grammars and their Application to Computer Sci-ence, Williamsburg '94, LNCS , 1996. Accepted.24. G. Taentzer. Parallel and Distributed Graph Transformation: Formal De-scription and Application to Communication-Based Systems. PhD thesis,Technical University of Berlin, Dep. of Comp. Sci., 1996.25. R. Heckel. Embedding of conditional graph transformations. In G. Va-liente Feruglio and F. Rosello Llompart, editors, Proc. Colloquium onGraph Transformation and its Application in Computer Science. Tech-nical Report B-19, Universitat de les Illes Balears, 1995.26. R. Heckel. Algebraic graph transformations with application conditions.Master's thesis, TU-Berlin, 1995.27. A. Wagner. On the expressive power of algebraic graph grammars withapplication conditions. In Int. Joint Conf. on Theory and Practice of Soft-ware Development (TAPSOFT`95), LNCS 915. Springer Verlag, 1995.28. R. Heckel and A. Wagner. Ensuring consistency of conditional graphgrammars { a constructive approach. Proc. of SEGRAGRA'95 "GraphRewriting and Computation", Electronic Notes of TCS, 2, 1995.http://www.elsevier.nl/locate/entcs .29. M. Kor�. 
Graph-interpreted graph transformations for concurrent object-oriented systems. Extended abstract for the 5th International Workshopon Graph Grammars and their Application to Computer Science, 1994.30. K. R�aih�a. Bibliography of attribute grammars. SIGPLAN Notices,15(3):35{44, 1980.31. C. Dimitrovici, U. Hummert, and L. Petrucci. Composition and net prop-erties of algebraic high-level nets. In Advances of Petri Nets, volume 483of Lecture Notes in Computer Science. Springer Verlag Berlin, 1991.32. W. Reisig. Petri nets and algebraic speci�cations. Theoretical ComputerScience, 80:1{34, 1991.33. H. Ehrig, J. Padberg, and L. Ribeiro. Algebraic high-level nets: Petrinets revisited. In Recent Trends in Data Type Speci�cation, pages 188{206, Caldes de Malavella, Spain, 1994. Springer Verlag. Lecture Notes inComputer Science 785.34. ISO. Information processing systems { Open Systems Interconnection {LOTOS { A formal description technique based on the temporal ordering60

of observational behaviour. International Standard ISO 8807, ISO, 1989.35. G. Schied. �Uber Graphgrammatiken, eine Spezi�kationsmethode f�ur Pro-grammiersprachen und verteilte Regelsysteme. Arbeitsberichte des In-stitus f�ur mathematische Maschinen und Datenverarbeitung (Informatik),University of Erlangen, 1992.36. H. Ehrig and B. Mahr. Fundamentals of Algebraic Speci�cation 1: Equa-tions and Initial Semantics, volume 6 of EATCS Monographs on Theor-etical Computer Science. Springer, Berlin, 1985.37. M. Kor�. True concurrency semantics for single pushout graph trans-formations with applications to actor systems. In Working papers of theInternational Workshop on Information Systems { Corretness and Re-usability IS-CORE'94, pages 244{258, 1994. Tech. Report IR-357, FreeUniversity, Amsterdam.38. A. Habel and H.-J. Kreowski. May we introduce to you: Hyperedge re-placement. In 3rd Int. Workshop on Graph Grammars and their Applic-ation to Computer Science, LNCS 291, Berlin, 1987. Springer Verlag.39. A. Habel. Hyperedge Replacement: Grammars and Languages. PhDthesis, University of Bremen, 1989.40. A. Habel. Hyperedge replacement: Grammars and Languages, volume 643of LNCS. Springer Verlag, Berlin, 1992.41. F. Drewes, H.-J. Kreowski, and Habel. Hyperedge Replacement GraphGrammars. World Scienti�c, 1996. In this book.42. H. Ehrig, A. Habel, H.-J. Kreowski, and F. Parisi-Presicce. From graphgrammars to High Level Replacement Systems. In Ehrig et al. [53], pages269{291. Lecture Notes in Computer Science 532.43. F. Parisi-Presicce. Modular system design applying graph grammar tech-niques. In ICALP'89. Springer Lecture Notes in Computer Science, 1989.44. H. Ehrig. Introduction to the algebraic theory of graph grammars. InV. Claus, H. Ehrig, and G. Rozenberg, editors, 1st Graph GrammarWorkshop, Lecture Notes in Computer Science 73, pages 1{69. SpringerVerlag, 1979.45. A. Sch�urr. Progress: A vhl-language based on graph grammars. InLNCS532. Springer, 1991.46. A. Sch�urr. 
Programmed Graph Replacement Systems. World Scienti�c,1996. In this book.47. A. Corradini and F. Rossi. Synchronized composition of graph grammarproductions. In 5th Int. Workshop on Graph Grammars and their Applic-ation to Computer Science, Williamsburg '94, LNCS , 1996. Accepted.48. H.-J. Kreowski and S. Kuske. On the interleaving semantics of trans-formation units - a step into GRACE. In 5th Int. Workshop on Graph61Grammars and their Application to Computer Science, Williamsburg '94,LNCS , 196. Accepted.49. H. Ehrig and G. Engels. Pragmatic and semantic aspects of a moduleconcept for graph transformation systems. In 5th Int. Workshop on GraphGrammars and their Application to Computer Science, Williamsburg '94,LNCS , 1996. Accepted.50. G. Taentzer and A. Sch�urr. DIEGO, another step towards a mod-ule concept for graph transformation systems. Proc. of SEGRAGRA'95"Graph Rewriting and Computation", Electronic Notes of TCS, 2, 1995.http://www.elsevier.nl/locate/entcs .51. F. Parisi-Presicce. Transformation of graph grammars. In 5th Int. Work-shop on Graph Grammars and their Application to Computer Science,Williamsburg '94, LNCS , 1996. Accepted.52. R. Heckel, A. Corradini, H. Ehrig, and M. L�owe. Horizontal and verticalstructuring of typed graph transformation systems. Accepted for specialissue of MSCS, 1996.53. H. Ehrig, H.-J. Kreowski, and G. Rozenberg, editors. 4th InternationalWorkshop on Graph Grammars and Their Application to Computer Sci-ence. Springer Verlag, 1991. Lecture Notes in Computer Science 532.

62

Index

algebra, 39
application condition, 30
co-equalizer, 6
conflict-free, 9
constraint, 30
d-complete, 9
d-injective, 9
dangling condition, 32
derivation, 8
  amalgamated, 25
  conditional, 31
  derived, 22
  direct, 8
  directly derived, 21
  distributed, 26
  HLR, 45
  parallel, 16
  parallel conditional, 36
derivations
  completeness of, 49
  invertibility of, 49
  translation of, 49
Derived Production Problem, 55
Derived Production Theorem, 23
Distribution Problem, 57
Distribution Theorem, 27
embedding, 20
  completeness of, 55
embedding morphism, 20
Embedding Problem, 55
Embedding Theorem, 24
extension, 35
generalized graph structure, 43
gluing condition, 32
graph
  attributed, 40
  distributed, 26
graph grammar, 5
graph language, 8
graph morphism
  attributed, 40
  partial, 4
graph structure, 43
HLR system, 45
  categories, 45
  conditions, 45
homomorphism, 39
hypergraph, 42
identification condition, 32
independence
  parallel, 13, 33, 51
  sequential, 14, 33, 51
Local Church-Rosser Problem, 50
Local Church-Rosser Theorem, 15
  with application conditions, 34
match, 8
  distributed, 26
  HLR, 45
Parallelism Problem, 52
production, 4
  amalgamated, 25
  conditional, 31
  derived, 22
  directly derived, 21
  HLR, 45
  morphism, 5
  name, 4
  parallel, 16
  parallel conditional, 36
productions
  sequential composition of, 22
  synchronized, 25
  translation of, 48
pushout, 6, 9, 40
signature, 39
  graph structure, 43
splitting
  partial, 26
  total, 26
start graph, 5
subgraph, 4
subproduction, 25
Weak Parallelism Theorem, 17
  with application conditions, 37