Goal-oriented decomposition of switching functions


Transcript of Goal-oriented decomposition of switching functions

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 12, NO. 5, MAY 1993, p. 655

Goal-Oriented Decomposition of Switching Functions L. Diaz-Olavarrieta, K. Illanko, and S. G. Zaky, Member, IEEE

Abstract: A method for decomposing combinational functions for the purpose of obtaining a multilevel synthesis is proposed. A cascade of mapping stages is used to transform the function being synthesized into a simple one called the goal function. The goal function, usually a one-variable function, is selected at the outset, then used to guide the definition of the transformations to be implemented by the mapping stages. The resulting circuits are comparable in size to those derived from minimal prime-implicant covers. But goal-oriented synthesis is faster, yields circuits that are easier to test, and offers more flexibility in the choice of gates.

I. INTRODUCTION

THE synthesis of Boolean combinational functions in the form of two-level circuits is a process that is well understood [1]. Early work on multilevel circuits was motivated by gate fan-in limitations [2]. Attempts were made to minimize the number of levels needed [3], usually at the expense of increasing internal fan-out in the circuit. However, multilevel realizations with a large number of levels are being increasingly used in VLSI implementations, and often lead to a smaller number of transistors. The increase in the number of levels does not necessarily lead to longer delay, because gates are likely to have a smaller fan-in and hence a shorter delay. Also, a multilevel circuit that introduces a longer delay than its two-level counterpart may still be acceptable, because internal gate delays are seldom the limiting factor in the circuit's operation.

Because of the complexity of obtaining a minimum-cost implementation for a logic function, the objective of research in this area is to develop computationally tractable synthesis techniques that will obtain near-optimal results for a given set of constraints. Several techniques for multilevel synthesis of combinational functions have been described in the literature [4]-[6]. Usually, the synthesis process starts by obtaining a two-level realization from a minimal prime-implicant cover. Then, techniques such as factoring [5], [7]-[11] are used to obtain a multilevel circuit having a smaller number of gates.

This paper introduces a new decomposition technique for use in multilevel synthesis of combinational functions.

Manuscript received August 27, 1991. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada under Grant A8994. This paper was recommended by Associate Editor R. K. Brayton.

L. Diaz-Olavarrieta was with the Department of Electrical Engineering, University of Toronto, Toronto, Ont., Canada. She is now with Bell Northern Research, Ottawa, Ont., Canada.

K. Illanko and S. G. Zaky are with the Department of Electrical Engineering, University of Toronto, Toronto, Ont., Canada.

IEEE Log Number 9200545.

The method yields circuits comparable in size to those derived by existing techniques. However, the synthesis process is faster and offers more flexibility in using different types of gates. The resulting circuits are also characterized by having no internal fan-out. Hence, component delays are low and the circuits should be easy to test.

Consider a function f(X), where X is the n-tuple (x1, x2, ..., xn), which we will call the input vector. For ease of reference, we define a point of f(X) as follows:

Point: A point pj of an input vector X = {x1, x2, ..., xn} represents a particular valuation, xi = ai, ai = 0 or 1, i = 1, ..., n, where j is the integer defined by the binary number a1 a2 ... an. □

For example, in the case of a three-variable function, f(p3) refers to the value of the function for x1 = 0 and x2 = x3 = 1.

Some recent work on the synthesis of combinational functions involved the use of transformations on the input variables to modify the function being synthesized [12]. The technique described in this paper evolved from the work reported in [4], [13]-[15], which also uses such transformations. The function f(X) is realized using the structure shown in Fig. 1, which consists of K mapping stages followed by a block that implements a remainder function. Mapping stage j transforms its input vector, X^(j-1), into the output vector, X^j, where each X^j has the same structure as the primary input vector, X, and X^0 = X.

We will use the function in Fig. 2(a) as a running example in the discussion to follow. For this function, f(p4) = f(p5) = 1. Hence, stage 1 may map point p4 into p5 by setting X^1 = X for all X, except at X = p4, for which X^1 is set equal to p5. The required transformation of the input variables is implemented by

x1' := x1;  x2' := x2;  x3' := x3 + x1x̄2

as shown in Fig. 2(b). The synthesis problem has now been reduced to that of realizing the remainder function, f^1(X^1), shown in Fig. 2(c). Since X^1 is never equal to p4, f^1(p4) is a don't-care. In general, each mapping stage may map more than one input valuation, thus introducing several don't-cares in the truth table of the function.

Existing synthesis techniques that use the approach described above are based on the selection of mappings from a predefined set. The pairs of points at which the function has the same value have been called symmetries, and several types of symmetries have been defined. Edwards and Hurst [4] propose a synthesis procedure for a function f(X) that begins by examining the truth table of the function to identify all occurrences of these symmetries. A
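The effect of a mapping stage on the truth table can be sketched in a few lines of Python. This is an illustration of ours, not code from the paper, and the truth table below is an assumption: it is the one implied later in the paper by the running example (f agrees with x3 everywhere except at the essential points p4 and p7).

```python
# Sketch (not from the paper): applying the mapping stage M: p4 -> p5 to a
# 3-variable function and deriving the remainder function f1. Any point that
# the stage never produces becomes a don't-care ("-").

def remainder(f, mapping):
    """f: dict point -> 0/1; mapping: dict point -> image point (identity elsewhere)."""
    n_points = len(f)
    image = {p: mapping.get(p, p) for p in range(n_points)}
    f1 = {}
    for q in range(n_points):
        preimages = [p for p in range(n_points) if image[p] == q]
        if not preimages:
            f1[q] = '-'               # q is never reached by the stage: don't-care
        else:
            vals = {f[p] for p in preimages}
            assert len(vals) == 1     # a mapping may only merge equal-valued points
            f1[q] = vals.pop()
    return f1

# Assumed truth table of Fig. 2(a): f = x3 except f(p4) = 1 and f(p7) = 0.
f = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1, 5: 1, 6: 0, 7: 0}
f1 = remainder(f, {4: 5})   # stage 1: p4 -> p5
print(f1[4], f1[5])         # p4 becomes a don't-care; f1(p5) = f(p4) = f(p5)
```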

0278-0070/93 $03.00 © 1993 IEEE



Fig. 1. Structure for function synthesis using mapping.

Fig. 2. A mapping example. (a) Karnaugh map for an example function f(X). (b) Mapping M: p4 → p5. (c) Remainder function.

particular incidence of symmetry is chosen and used in implementing the first mapping stage. Then the procedure is repeated, after selecting a suitable assignment for the don't-cares introduced in the remainder function, f^1(X^1). Some of the detection techniques that can be used to search for different types of symmetry have been described in [14]-[16].

Synthesis using symmetry is computationally intensive, because of the large number of possibilities in don't-care assignment and symmetry selection. Unlike Edwards and Hurst's approach, the technique proposed in this paper is not limited to any particular set of mappings. Instead, a simple operation is defined in which a single input valuation is mapped into another. More-complex mappings are treated as either parallel or serial compositions of such simple mappings. The composition operations enable complex mappings to be derived from simple ones in much the same way as implicants are derived from minterms in conventional synthesis techniques. At the beginning of the synthesis procedure, a suitable function, which we will call the goal function, is selected for the last remainder function in Fig. 1. Then, the mapping stages are defined to transform the function being synthesized into the goal function [17]. The goal function guides both the selection of mapping stages and the assignment of don't-care terms.

The paper describes the goal-oriented method (GOM) and gives examples of its use. A comparison of the results obtained by this method with factorization techniques is also given. We begin by introducing some pertinent notation and terminology.

II. DEFINITIONS AND NOTATIONS

Each of the mappings in Fig. 1 can be defined as follows.

Mapping: A mapping M transforms each point pi of its input vector into a point pj of its output vector. The set of points for which j ≠ i constitutes the reset component, R, and the remaining points constitute the identity component, I, of the mapping. A mapping will be described by listing its reset component in the form pi → pj, pk → pl, etc. □

Let X = {x1, x2, ..., xn} be the input vector and X' = {x1', x2', ..., xn'} be the output vector of a mapping stage. In any mapping some input variables are transformed and some remain the same. Hence, we define the following terms.

Variant and Invariant Variables: In a mapping that transforms an input vector X into an output vector X', all variables xi for which xi' := xi are called the invariant variables of the mapping. The remaining variables are called the variant variables of the mapping. □

Hamming Distance: The number of variant variables of a mapping is called its Hamming distance, HD. □

For example, the three-variable mapping p4 → p5 transforms the input valuation 100 into 101. Hence, it has HD = 1, with x3 as its variant variable and x1 and x2 as its invariant variables.

In the remainder of the paper, we will restrict the discussion to mappings having a Hamming distance of 1. Any such mapping having xi as its variant variable will be written in one of the two forms:

M(xi)h(X − xi) or M(x̄i)h(X − xi).

In this notation, h(X − xi) is a Boolean expression in all the invariant variables of the mapping, such that h = 1 defines the valuation of these variables at which xi changes value. The symbol M(xi)h will be used when xi changes from 0 to 1, in which case the mapping will be said to be positive. M(x̄i)h represents a negative mapping, in which xi changes from 1 to 0. For example, the two mappings p1 → p5 and p3 → p2 will be written in the form M(x1)x̄2x3 and M(x̄3)x̄1x2, respectively.

We will now give an alternate definition for the transformation X → X' implemented by a mapping stage having a single variant variable.

Positive Mapping: A positive mapping M(xi)h(X − xi) transforms X into X' such that

xi' := 1, for xi = 0 and h(X − xi) = 1
    := xi, otherwise
xk' := xk, for k ≠ i. □

Negative Mapping: A negative mapping M(x̄i)h(X − xi) transforms X into X' such that

xi' := 0, for xi = 1 and h(X − xi) = 1
    := xi, otherwise
xk' := xk, for k ≠ i. □

Positive and negative mappings are implemented by

xi' := xi + h(X − xi)
xi' := xi · h̄(X − xi).

For example, the symbolic representation and the implementation equations for the two mappings mentioned earlier are as follows:

p1 → p5;  M(x1)x̄2x3;  x1' := x1 + x̄2x3
p3 → p2;  M(x̄3)x̄1x2;  x3' := x3 · (x̄1x2)‾.

The general form of the circuit that generates the variant variable xi' in a mapping stage is shown in Fig. 3, where the boxes labeled h implement the invariant function h(X − xi); Fig. 3(a) and (b) are for positive and negative mappings, respectively. Fig. 3(c) shows a permutation mapping, which is a parallel composition of positive and negative mappings, as described below.
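As an illustration of ours (not code from the paper), the OR and AND-NOT implementation equations can be checked directly on the two example mappings:

```python
# Sketch (not from the paper): the implementation equations for HD = 1 mappings.
# A positive mapping computes x_i' = x_i OR h; a negative one computes
# x_i' = x_i AND NOT h, where h is the invariant function.

def apply_stage(bits, i, h, direction):
    """bits: dict var index -> 0/1; i: variant variable; h: invariant function
    of the remaining variables; direction: '+' (0 -> 1) or '-' (1 -> 0)."""
    out = dict(bits)
    if direction == '+':
        out[i] = bits[i] | h(bits)          # x_i' := x_i + h(X - x_i)
    else:
        out[i] = bits[i] & (1 - h(bits))    # x_i' := x_i . NOT h(X - x_i)
    return out

def point(bits, n=3):                        # p_j with j = (x1 x2 ... xn) in binary
    return sum(bits[k] << (n - k) for k in range(1, n + 1))

def bits_of(j, n=3):
    return {k: (j >> (n - k)) & 1 for k in range(1, n + 1)}

# p1 -> p5: M(x1) NOT(x2).x3  (positive)
out = apply_stage(bits_of(1), 1, lambda b: (1 - b[2]) & b[3], '+')
print(point(out))   # 5

# p3 -> p2: M(NOT x3) NOT(x1).x2  (negative)
out = apply_stage(bits_of(3), 3, lambda b: (1 - b[1]) & b[2], '-')
print(point(out))   # 2
```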

Composition Operations

A mapping M that transforms two points pk and pr into pl and ps, respectively, can be regarded as a parallel composition of two mappings, M1: pk → pl and M2: pr → ps. The parallel composition operation is defined as follows.

Parallel Composition: Consider two mappings M1 and M2, whose identity components are I1 and I2 and reset components are R1 and R2, respectively. Their parallel composition, written M1 ∥ M2, is a mapping M having an identity component I and a reset component R, such that

I = I1 ∩ I2,  R = R1 ∪ R2. □

Theorem 1: The parallel composition operation is idempotent, associative, and commutative.

Proof: Follows from the properties of the union and intersection operations. □

An important special case of a parallel composition is that of two mappings that are the inverse of each other. The resultant mapping will be called a permutation mapping, defined as follows.

Permutation Mapping: A permutation mapping maps two points pi and pj of its input vector into points pj and pi, respectively, of its output vector. □

Fig. 3. Mapping circuits. (a) M(xi)h(X − xi). (b) M(x̄i)h(X − xi). (c) P(xi)h(X − xi).

Instead of introducing a don't-care, a permutation mapping interchanges two entries in the truth table of the function being synthesized. We will use the symbol P(xi)h(X − xi) to represent a permutation mapping that has a single variant variable, xi. It transforms an input vector X into an output vector X' such that

xi' := x̄i, for h(X − xi) = 1
    := xi, otherwise
xk' := xk, for k ≠ i.

A permutation mapping complements the variant variable when h = 1. Hence, it can be implemented by the XOR operation

xi' := xi ⊕ h(X − xi)

which yields the circuit configuration shown in Fig. 3(c). For example, the mapping P(x2)x̄1x̄3 maps p0 into p2 and p2 into p0, and is implemented by

x2' := x2 ⊕ x̄1x̄3.

When this mapping is applied to the function f(X) in Fig. 4, it yields the remainder function f^1(X^1) shown.

Mapping Direction: The attribute of being positive, negative, or a permutation will be referred to as the direction of the mapping. □

Theorem 2: The parallel composition of two mappings that have the same variant variable and the same direction is equivalent to a single mapping whose invariant function is the sum of the invariant functions of the two mappings.
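A quick sketch of ours (not the paper's) confirms that the XOR stage x2' := x2 ⊕ x̄1x̄3 interchanges p0 and p2 and leaves every other point fixed:

```python
# Sketch (not from the paper): a permutation mapping realized as an XOR stage.
# P(x2) with h = NOT(x1).NOT(x3) complements x2 when x1 = x3 = 0, which
# interchanges p0 (000) and p2 (010).

def permute(bits):
    x1, x2, x3 = bits
    h = (1 - x1) & (1 - x3)        # invariant function h(X - x2)
    return (x1, x2 ^ h, x3)        # x2' := x2 XOR h

def point(bits):
    x1, x2, x3 = bits
    return (x1 << 2) | (x2 << 1) | x3

images = {point(b): point(permute(b))
          for b in [(a, c, d) for a in (0, 1) for c in (0, 1) for d in (0, 1)]}
print(images[0], images[2])   # p0 and p2 are interchanged
```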


Fig. 4. Synthesis using a permutation mapping. (a) f(X). (b) Mapping circuit. (c) f^1(X^1).

That is,

M(xi)h1 ∥ M(xi)h2 = M(xi)(h1 + h2)
M(x̄i)h1 ∥ M(x̄i)h2 = M(x̄i)(h1 + h2)
P(xi)h1 ∥ P(xi)h2 = P(xi)(h1 + h2).

Proof: Consider first the case of two positive mappings whose reset components are given by

R1 = { pj | xi = 0, h1 = 1 }
R2 = { pj | xi = 0, h2 = 1 }.

The reset component of the composite mapping is given by

R1 ∪ R2 = { pj | xi = 0, h1 + h2 = 1 }.

Thus the combined mapping has the invariant function h1 + h2. The cases of negative and permutation mappings can be proved in the same way. □

Consider again the function f(X) given in Fig. 2(a). The two mappings

M1': p4 → p5: M(x3)x1x̄2
M1'': p6 → p7: M(x3)x1x2

change the variant variable, x3, from 0 to 1, the first when x1x2 = 10 and the second when x1x2 = 11. They are implemented by

x3 := x3 + x1x̄2
x3 := x3 + x1x2
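Theorem 2 can be checked mechanically on these two mappings. The sketch below is ours, not the paper's; it verifies that the Boolean sum x1x̄2 + x1x2 equals x1 at every valuation of the invariant variables:

```python
# Sketch (not from the paper): Theorem 2 on the running example. The parallel
# composition of M(x3)x1.NOT(x2) and M(x3)x1.x2 has the invariant function
# x1.NOT(x2) + x1.x2, which simplifies to x1, so one OR gate replaces two stages.
from itertools import product

h1 = lambda x1, x2: x1 & (1 - x2)   # invariant function of M1': p4 -> p5
h2 = lambda x1, x2: x1 & x2         # invariant function of M1'': p6 -> p7

# The composite invariant function is the Boolean sum (OR) of h1 and h2 ...
composite = {(x1, x2): h1(x1, x2) | h2(x1, x2)
             for x1, x2 in product((0, 1), repeat=2)}
# ... and it equals x1 at every valuation of the invariant variables:
print(all(composite[(x1, x2)] == x1 for x1, x2 in composite))   # True
```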

Fig. 5. Remainder functions for f(X) in Fig. 2(a). (a) f^1(X^1): after M1 = M(x3)x1. (b) f^2(X^2): after M2 = M(x̄3)x1x2.

Fig. 6. A circuit realization for the function of Fig. 5.

respectively. (Note that superscripts have been dropped on the left-hand side to avoid cumbersome notation. They will be used only where ambiguity might arise.) The parallel composition of these two mappings is a mapping M1 that changes x3 from 0 to 1 when x1x2 = 10 or 11. That is,

M1 = M1' ∥ M1'' = M(x3)(x1x̄2 + x1x2) = M(x3)x1.

Mapping M1 is realized by

x3 := x3 + x1.

It yields the remainder function f^1(X^1) given in Fig. 5(a), which has don't-care entries at both p4 and p6.

When mappings are connected in cascade as in Fig. 1, they will be regarded as forming a series composition.

Series Composition: Two mappings M1 and M2 in which the output vector of M1 constitutes the input vector of M2 are said to form a series composition. Such a composition will be written in the form M1 ∘ M2, which may be read as M1 followed by M2. □

For example, consider the remainder function f^1(X^1) in Fig. 5(a). Because of the don't-care at p6, the mapping M2: p7 → p6 may be applied to produce the remainder function in Fig. 5(b). The two mappings that transform the original function f(X) into f^2(X^2) are applied in series, and can be represented in the form

M(x3)x1 ∘ M(x̄3)x1x2.

The remainder function, f^2(X^2), can now be implemented simply as f^2(X^2) = x3, which leads to the circuit realization shown in Fig. 6.
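The complete realization can be verified exhaustively. The sketch below is ours; it assumes the truth table implied by the essential points named in Section III (f = x3 except f(p4) = 1 and f(p7) = 0), since Fig. 2(a) is not reproduced in this transcript:

```python
# Sketch (not from the paper): the two-stage realization of Fig. 6,
# f(X) = (x3 + x1) . NOT(x1.x2), checked against the assumed truth table
# of Fig. 2(a): f agrees with the goal function x3 everywhere except at
# the essential points p4 (f = 1) and p7 (f = 0).

def f(x1, x2, x3):                       # assumed truth table of Fig. 2(a)
    if (x1, x2, x3) == (1, 0, 0): return 1   # p4
    if (x1, x2, x3) == (1, 1, 1): return 0   # p7
    return x3                                # f = GF = x3 elsewhere

def circuit(x1, x2, x3):
    x3_1 = x3 | x1                       # stage 1: M1 = M(x3)x1
    x3_2 = x3_1 & (1 - (x1 & x2))        # stage 2: M2 = M(NOT x3)x1.x2
    return x3_2                          # remainder function f2(X2) = x3

ok = all(circuit(a, b, c) == f(a, b, c)
         for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(ok)   # True
```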

III. THE GOAL-ORIENTED METHOD

The iterative synthesis procedure described in Section I leads to a realization of a function f(X) in the form of a series composition of mappings, M1 ∘ M2 ∘ ... ∘ MK, followed by a circuit that implements the remainder function. After stage j in this composition, the remainder function f^j(X^j) is obtained from f^(j-1)(X^(j-1)) by inserting don't-cares or exchanging 0 and 1 entries, according to the type of mapping implemented by that stage. The procedure terminates after stage K, where the remainder function f^K(X^K) is judged to be trivial to implement.

In goal-oriented synthesis, the last function, f^K(X^K), is defined at the outset, and called the goal function, GF(X). Then, the mapping stages are selected so as to transform f(X) into GF(X). Any simple function can serve as a goal function. Suitable candidates are single-variable functions, i.e., GF = xi or x̄i for some i. Other simple functions of a small number of variables are also possible. However, in this paper we restrict the discussion to single-variable goal functions.

Assume that a suitable goal function has been selected according to some criterion. The role of the mapping stages is to map any point pi where f(pi) ≠ GF(pi) into another point pj such that f(pj) = GF(pj). For example, the mappings used in Fig. 5 to synthesize f(X) of Fig. 2(a) were in fact guided by the choice GF = x3. Since f(X) and GF(X) differ in value at p4 and p7, two mappings M1 and M2 were found to replace these entries with don't-cares, so that f(X) can be transformed into GF. In this realization, mapping M1 consists of a parallel composition in which M1'' is not needed for the purpose of transforming f into GF. It has been used only to simplify the first mapping stage from M(x3)x1x̄2 to M(x3)x1. We will now introduce the following definition.

Essential Point: For a given function f(X) and a goal function GF(X), any point pi where f(pi) ≠ GF(pi) will be called an essential point of the mapping that transforms f into GF. □

The essential points are the points of the primary input vector that must be mapped into other points for f to be reduced to GF. In general, the mappings used to synthesize a function need not transform only the essential points. Any point pi may be mapped into another point pj if f(pi) = f(pj). When the point being mapped is not an essential point, that is, when f(pi) = GF(pi), the mapping will be called a redundant mapping, because it does not bring f any closer to GF. Redundant mappings will be used in parallel compositions with mappings that transform essential points to simplify their circuit realization.

In general, the goal-oriented method for synthesizing a function f(X) involves the following tasks [18].

1) Select a suitable goal function, GF(X).
2) Determine the essential points and the type of mapping needed for each point to transform f(X) into GF(X).

3) Organize the essential-point mappings into groups that can be combined in parallel compositions. Include redundant mappings whenever they can be beneficial in simplifying the resulting composite mappings. Each parallel composition represents a mapping stage.

4) Select an ordering for the mapping stages and combine these mappings to produce a synthesis in the form of a series composition.

5) Obtain a circuit realization for each mapping stage and for the goal function.

As mentioned earlier, we restrict the discussion in this paper to single-variable goal functions and to mappings having a Hamming distance of 1; that is, each mapping stage will affect only one of its input variables. For a given goal function, there is a large number of possibilities for the selection of mappings and for arranging them into a series/parallel composition. To reduce the complexity of the search for a suitable choice, we will use a specific recursive structure, as described in the following section.

IV. SYNTHESIS ALGORITHM

Assume that a goal function GF = xg has been selected for implementing the function f(X), where xg is one of the n variables in X. With the restriction to single-variable goal functions and HD = 1 mappings, the task of identifying possible mappings can be simplified considerably. Let p̄i be the HD = 1 neighbor of a point pi, obtained by complementing the value of xg, and let M be the overall mapping needed to transform f(X) into GF(X). Mapping M must transform every point pi where f(pi) ≠ GF(pi) into p̄i. For each of these points, a positive, negative, or permutation mapping is needed, depending on the values of f(pi) and f(p̄i).

Since a positive mapping changes xg from 0 to 1, it must be used for all points where f(pi) = f(p̄i) = 1. Similarly, a negative mapping is needed wherever f(pi) = f(p̄i) = 0. At points where f(pi) = f̄(p̄i) = x̄g(pi), the two points pi and p̄i must be interchanged using a permutation mapping. (xg(pi) is a function that yields the valuation of variable xg at point pi.) Therefore, the overall mapping, M, can be decomposed into three mappings as follows:

1) A positive mapping, M+: M(xg)h+(X − xg)
2) A negative mapping, M−: M(x̄g)h−(X − xg)
3) A permutation mapping, P: P(xg)hp(X − xg).

The positive mapping is a parallel composition of the mappings pi → p̄i for all pi at which f(pi) = f(p̄i) = 1, and the invariant function h+(X − xg) is a function of n − 1 variables that identifies all pairs (pi, p̄i) affected by the mapping. Thus,

h+(X − xg) = 1, where f(pi) = f(p̄i) = 1
           = 0, otherwise


and the mapping is implemented by the circuit

xg := xg + h+(X − xg).

The negative and permutation mappings are defined in a similar manner, namely,

h−(X − xg) = 1, where f(pi) = f(p̄i) = 0
           = 0, otherwise

hp(X − xg) = 1, where f(pi) = f̄(p̄i) = x̄g(pi)
           = 0, otherwise

and the corresponding circuits are

xg := xg · h̄−(X − xg)
xg := xg ⊕ hp(X − xg)

respectively.

Consider now the case where the goal function is GF = x̄g instead of xg. The above equations for the minterms of the invariant functions become

h+(X − xg) = 1, where f(pi) = f(p̄i) = 0
h−(X − xg) = 1, where f(pi) = f(p̄i) = 1
hp(X − xg) = 1, where f(pi) = f̄(p̄i) = xg(pi).

The three mappings, M+, M−, and P, affect different points of f(X); that is, their invariant functions have no common minterms. Hence, they can be applied one after the other, in any order. The order P ∘ M− ∘ M+ will be used, for reasons to be discussed shortly. This series composition leads to the mapping equation

xg := (xg ⊕ hp) · h̄− + h+    (1)

which yields the circuit structure illustrated in Fig. 7.

With the mapping represented by (1), the problem of synthesizing the n-variable function f(X) has been reduced to that of synthesizing three functions of n − 1 variables, namely h+, h−, and hp. For each of these functions, a new goal function is selected, and the process is repeated. Thus, the synthesis algorithm consists of a recursive procedure, in which the number of variables is reduced by one in every stage of the recursion. The result is a decomposition that has the tree structure shown in Fig. 8.
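Equation (1) can be exercised end to end on the running example. The sketch below is ours, not the paper's; it derives h+, h−, and hp from an assumed truth table (f = x3 except at the essential points p4 and p7) and checks that one mapping stage reduces f to the goal function x3:

```python
# Sketch (not from the paper): deriving h+, h-, and hp for GF = x3 from a
# truth table, then applying the mapping equation (1),
#   x_g := (x_g XOR hp) . NOT(h-) + h+,
# to check that the stage output reproduces f at every point.

F = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1, 5: 1, 6: 0, 7: 0}   # assumed f(p0)..f(p7)

def pair(x1, x2):                   # the two HD = 1 neighbors differing in x3
    lo = (x1 << 2) | (x2 << 1)      # point with x3 = 0
    return lo, lo | 1               # and its neighbor with x3 = 1

h_pos, h_neg, h_per = {}, {}, {}
for x1 in (0, 1):
    for x2 in (0, 1):
        lo, hi = pair(x1, x2)
        h_pos[(x1, x2)] = int(F[lo] == F[hi] == 1)   # positive: both values 1
        h_neg[(x1, x2)] = int(F[lo] == F[hi] == 0)   # negative: both values 0
        h_per[(x1, x2)] = int(F[lo] == 1 and F[hi] == 0)  # f(pi) = NOT f(pi-bar) = NOT x3(pi)

def eq1(x1, x2, x3):                # one mapping stage, equation (1)
    k = (x1, x2)
    return (x3 ^ h_per[k]) & (1 - h_neg[k]) | h_pos[k]

ok = all(eq1(x1, x2, x3) == F[(x1 << 2) | (x2 << 1) | x3]
         for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1))
print(ok)   # True
```

Note that h+ here reduces to the single minterm x1x̄2 and h− to x1x2, matching the invariant functions of M1' and M2 derived in Section II.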

Use of Redundant Mappings

In the example in Fig. 5, the redundant mapping p6 → p7 was used to simplify the circuit implementation of the positive mapping M1. In the second stage, the negative mapping M2 maps point p7 back into p6. A similar situation may arise in any series composition, where points affected by later mapping stages may be used to create redundant mappings for earlier stages. Since, by definition, a redundant mapping need not be implemented, it will be recorded as a don't-care entry in the invariant function of mapping stages in which it may be used.

Fig. 7. One mapping stage in the recursive structure.

Fig. 8. A tree decomposition for the mapping M that transforms f(X) into GF(X).

Consider the series composition P ∘ M− ∘ M+ described above. The minterms of h− and h+ correspond to points in f(X) where f(pi) = f(p̄i). Since the permutation mapping is applied first, that is, before the function is altered by the other two mappings, all points where h− = 1 or h+ = 1 can be treated as don't-cares in the truth table of hp. Similarly, the minterms of h+ can be used as don't-cares in h−. Hence, after deriving the three functions h+, h−, and hp as described above, some of the 0's of these functions will be replaced by don't-care entries. In the statement of the algorithm to be given shortly, don't-cares are recorded in a separate function, d(X − xi), such that d(pi) = 1 if the corresponding h(pi) is a don't-care and d(pi) = 0 otherwise. Hence,

for hp: d(X − xi) = h+(X − xi) + h−(X − xi)
for h−: d(X − xi) = h+(X − xi)
for h+: d(X − xi) = 0 everywhere.

Note that if a positive or a negative mapping is followed by a permutation mapping, no don't-care entries are introduced, because a permutation mapping is used where f(pi) = f̄(p̄i). For this reason, P is placed first in the composition P ∘ M− ∘ M+. If the two mappings M− and M+ are interchanged, don't-care entries would be introduced in h+ instead of h−. However, the order in which these two mappings are implemented has been found to make little difference in the final result.

Example: Consider the four-variable function in Fig. 9. The essential points of the required mapping for the goal function GF = x1 are circled. Positive mappings are needed for points p3 and p6 and negative mappings for points p8, p13, and p15. The two remaining essential points, p1 and p9, must be interchanged. This leads to the "1" entries in the Karnaugh maps for h+, h−, and hp. Then don't-cares are entered in h− where h+ = 1 and in hp where h+ = 1 or h− = 1, thus completing the derivation of the invariant functions for one mapping stage in the recursion.

Fig. 9. An example function and the invariant functions of the three mappings that transform it into GF = x1.
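The derivation in this example can be reproduced mechanically. The sketch below is ours; the 16-entry truth table is an assumption, reconstructed on the premise that f agrees with GF = x1 everywhere except at the essential points listed above (any don't-cares in Fig. 9 itself are ignored):

```python
# Sketch (not from the paper): classifying the essential-point pairs of the
# Fig. 9 example into positive, negative, and permutation mappings, and
# deriving the don't-care sets for h- and hp.

ESSENTIAL = {1, 3, 6, 8, 9, 13, 15}
x1 = lambda p: (p >> 3) & 1
F = {p: x1(p) ^ (p in ESSENTIAL) for p in range(16)}   # assumed: f = x1 off E

pos, neg, per = set(), set(), set()
for lo in range(8):                 # lo has x1 = 0; hi = lo + 8 is its neighbor
    hi = lo | 8
    if F[lo] == F[hi] == 1:
        pos.add(lo)                 # positive mapping p_lo -> p_hi
    elif F[lo] == F[hi] == 0:
        neg.add(lo)                 # negative mapping p_hi -> p_lo
    elif F[lo] == 1 and F[hi] == 0:
        per.add(lo)                 # p_lo and p_hi must be interchanged

# Minterms over (x2, x3, x4): positive at 011 and 110 (pairs of p3 and p6),
# negative at 000, 101, 111 (pairs of p8, p13, p15), permutation at 001 (p1, p9).
print(sorted(pos), sorted(neg), sorted(per))   # [3, 6] [0, 5, 7] [1]

# Don't-care entries: in h- where h+ = 1, and in hp where h+ = 1 or h- = 1.
d_neg, d_per = set(pos), pos | neg
```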

Handling of Don't-Cares

The function f(X) being implemented at the root node of the tree in Fig. 8 may be incompletely specified. At lower-level nodes, don't-cares may be introduced in the invariant functions as a result of the existence of redundant mappings. With goal-oriented synthesis, don't-cares are handled in a straightforward manner. Once a goal function, GF, has been selected for a given node on the tree, a don't-care entry at any point pi in the function being synthesized at that node is simply set equal to GF(pi).
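This assignment amounts to one pass over the truth table; a hedged sketch (the bit-list representation and helper name are assumptions of this sketch):

```python
def assign_dont_cares(g, d, gf):
    """Set every don't-care point of g equal to the value of the selected
    goal function GF at that point, as described in the text.
    g, d, and gf are 0/1 lists indexed by point number p_j."""
    return [gf[j] if d[j] else g[j] for j in range(len(g))]
```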

V. SELECTION OF THE GOAL FUNCTION

With the tree decomposition of Fig. 8, the main decision to be made at each node is the choice of the goal function. The definition of the mappings to be performed and the assignment of don't-cares follow in a straightforward manner.


Let g(X) be the function being synthesized at a given node on the tree. There are 2k possible single-variable goal functions, GF = xi or x̄i, i = 1, ..., k, where k is the number of variables at that node. We describe below two algorithms to select one of these functions. In both procedures, a number of goal functions are attempted at each node, and the function that yields the lowest-cost realization, for some cost function, is selected. To reduce the size of the search space, we begin by choosing between xi and x̄i based on which of the two functions is closer to g(X).

The first-order Hadamard spectral coefficient (HSC) for any variable xi is equal to the number of agreements less the number of disagreements between f(X) and xi [16]. That is, if

NA = number of points where f(X) = xi

and

ND = number of points where f(X) = x̄i,

then

HSC(xi) = NA - ND.

HSC(xi) is a measure of the correlation between f(X) and input variable xi. Hence, it will be used as a rough guide to the suitability of xi as a goal function. When computing HSC, don't-cares are ignored; they are counted neither as agreements nor as disagreements between f(X) and xi. If HSC(xi) ≥ 0, we will choose GF = xi as a potential goal function; otherwise, GF = x̄i will be used. This choice reduces the number of possible goal functions from 2k to k. For example, for the function in Fig. 9, f(X) and x1 have the same value at 9 points and differ in value at 7 points, which yields HSC(x1) = 2. The corresponding values for x2, x3, and x4 are -2, 6, and -2, respectively. Therefore, the potential goal functions that will be examined are x1, x̄2, x3, and x̄4.
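The coefficient can be computed by a direct scan of the truth table. The sketch below assumes f is stored as a 0/1 list of length 2^n, with x1 the most significant bit of the point number, as in the paper's definition of a point:

```python
def hsc(f, i, n, d=None):
    """First-order Hadamard spectral coefficient of f with respect to
    variable x_i (1-based).  d, if given, marks don't-care points, which
    count neither as agreements nor as disagreements."""
    n_a = n_d = 0
    for j in range(2 ** n):
        if d is not None and d[j]:
            continue                    # don't-cares are ignored
        x_i = (j >> (n - i)) & 1        # value of x_i at point p_j
        if f[j] == x_i:
            n_a += 1                    # agreement: f(X) = x_i
        else:
            n_d += 1                    # disagreement: f(X) = complement of x_i
    return n_a - n_d
```

A variable is kept uncomplemented as a candidate goal function when the coefficient is nonnegative, and complemented otherwise.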

Cost Function

The cost function should be selected based on the implementation technology. The number of literals in the algebraic expression describing the way in which a logic function can be synthesized is often used as a cost measure. For the purposes of this paper, this measure will be modified slightly to provide a better representation of the chip area needed in a VLSI implementation, which in turn is determined by the number of transistors. A suitable estimate for this number is the total number of gate inputs plus the number of inversions [19], [20]. Inversions on primary inputs will be ignored. Thus, the expression

AB + (CD + E)' + FG,

which has 8 literals (the complemented term between brackets counts as a new literal), will be assigned a cost of 12.
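As an illustration, this cost measure can be evaluated over a small expression tree. The tuple representation and gate repertoire below are assumptions of this sketch, not the paper's data structures:

```python
def cost(expr):
    """Cost = total gate inputs + internal inversions.  Inversions on
    primary inputs (bare variable names) are ignored, per the text."""
    if isinstance(expr, str):                      # primary input
        return 0
    op = expr[0]
    if op == 'NOT':
        inner = expr[1]
        inv = 0 if isinstance(inner, str) else 1   # free on primary inputs
        return inv + cost(inner)
    # 'AND' / 'OR': one gate input per operand, plus the operands' own cost
    return len(expr) - 1 + sum(cost(a) for a in expr[1:])

# AB + (CD + E)' + FG: gate inputs 2 + 2 + 2 + 3 + 2 = 11, plus 1 inversion
example = ('OR', ('AND', 'A', 'B'),
                 ('NOT', ('OR', ('AND', 'C', 'D'), 'E')),
                 ('AND', 'F', 'G'))
# cost(example) == 12, matching the figure quoted in the text
```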

Algorithm 1

This algorithm involves a full search of all possible choices of goal functions. Clearly, the complexity of such a search increases rapidly for a large number of variables, because it involves traversing the tree in Fig. 8 for all


procedure FullSearch (g, d, k, cost)
    if g(X) = 0 then return
    cost := 0
    Initialize minCost to a large number
    if k > kStop then {
        for i = 1 ... k do {
            1. Choose a potential goal function:
               if HSC(xi) ≥ 0 then GF := xi else GF := x̄i
            2. Assign don't-cares in g(X) to agree with GF
            3. Compute hp(X - xi), h-(X - xi), and h+(X - xi)
               for the function g and the goal function GF
            4. Add the incremental cost of this mapping stage to cost
            5. k := k - 1
               d(X - xi) := h+(X - xi) + h-(X - xi)
               call FullSearch (hp, d, k, c);  cost := cost + c
               d(X - xi) := h+(X - xi)
               call FullSearch (h-, d, k, c);  cost := cost + c
               d(X - xi) := 0
               call FullSearch (h+, d, k, c);  cost := cost + c
            6. if cost < minCost then {
                   minCost := cost
                   record GF, hp, h+, and h- instead of earlier values
               }
        }
    }
    else get 2-level synthesis and compute cost
    return
end procedure FullSearch

Fig. 10. Decomposition algorithm using a full search for GF.

possible combinations of goal functions at various nodes. However, the algorithm is presented here to illustrate the basic approach and to provide a reference point for purposes of comparison.

The algorithm is given in Fig. 10 in the form of a recursive procedure, FullSearch. Each instance of FullSearch implements a mapping for one node on the tree (steps 1 to 4). Then, in step 5, the procedure calls itself to find the cost of the mappings at lower-level nodes. The recursion ends if a null sub-function is reached. Otherwise, it continues until the number of variables is reduced to a predetermined value, kStop, usually 2 or 3. At this point, the remainder function is implemented using any suitable 2-level synthesis technique. For each node, the procedure computes the cost of the mapping implemented at that node and adds it to the cost of the subtree having its root at that node. It also keeps track of the goal functions that lead to the lowest total cost.

Procedure FullSearch has three input parameters, namely:

g    the function being synthesized,
d    a function describing the don't-care entries in g,
k    the number of variables.

The procedure has one output variable, cost, which is the

procedure MinCount (g, d, k, cost)
    if g(X) = 0 then return
    cost := 0
    Initialize minCount to a large number
    if k > kStop then {
        for i = 1 ... k do {
            1. Choose a potential goal function:
               if HSC(xi) ≥ 0 then GF := xi else GF := x̄i
            2. Assign don't-cares in g(X) to agree with GF
            3. Compute hp(X - xi), h-(X - xi), and h+(X - xi)
               for the function g and the goal function GF
            4. count := total number of 1's in hp, h-, and h+
            5. if count < minCount then {
                   minCount := count
                   record GF, hp, h+, and h- instead of earlier values
               }
        }
        Add the incremental cost of this stage to cost
        k := k - 1
        d(X - xi) := h+(X - xi) + h-(X - xi)
        call MinCount (hp, d, k, c);  cost := cost + c
        d(X - xi) := h+(X - xi)
        call MinCount (h-, d, k, c);  cost := cost + c
        d(X - xi) := 0
        call MinCount (h+, d, k, c);  cost := cost + c
    }
    else get 2-level synthesis and compute cost
    return
end procedure MinCount

Fig. 11. Decomposition algorithm based on minimization of the number of minterms in the invariant functions of each mapping stage.

f = (hp ⊕ x1) · h̄- + h+

Fig. 12. Implementation tree for the function in Fig. 9.

total cost computed at any level of the recursion. The results obtained by applying procedure FullSearch to some test functions will be given later.

Algorithm 2

In this algorithm, the complexity of the search is reduced substantially. At each node, the goal function that leads to the smallest number of minterms in h+, h-, and hp is selected. Thus, the decision is made on local information, without the need to traverse the subtree emanating from that node for every possible goal function. As a result, the decomposition tree in Fig. 8 is traversed only once. The algorithm is described by procedure MinCount in Fig. 11. The main structural difference between procedures FullSearch and MinCount is that the recursive calls in step 5 of FullSearch have been moved outside the for loop in MinCount.

TABLE I
TEST RESULTS: COST AND CPU TIME IN SECONDS ON A SUN 3/60
(Columns: Function, No. of Variables, FullSearch cost and time, MinCount cost and time, MIS-II cost and time, and the HSC ratio σ/x̄. Rows: z4ml1, z4ml2, z4ml3, f51m1, f51m2, clip1, sao21, cm85a1, cm85a2, and cm85a3, with 7 to 11 variables.)
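The complexity gap between the two procedures can be illustrated with a toy node-visit count. This is a sketch assuming a complete recursion down to kStop with k candidate goal functions per node; the formulas are illustrative, not taken from the paper:

```python
def fullsearch_visits(k, k_stop=2):
    """FullSearch recurses into all three subfunctions (hp, h-, h+) for
    every one of the k candidate goal functions at each node."""
    if k <= k_stop:
        return 1
    return 1 + 3 * k * fullsearch_visits(k - 1, k_stop)

def mincount_visits(k, k_stop=2):
    """MinCount picks the goal function locally, so each node is expanded
    into its three subfunctions exactly once."""
    if k <= k_stop:
        return 1
    return 1 + 3 * mincount_visits(k - 1, k_stop)
```

For k = 7 the two counts differ by roughly three orders of magnitude, consistent with the large running-time gap between the two procedures reported in Table I.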

Example

We will give the results of applying Algorithm 1 to the example function in Fig. 9. Using kStop = 2, the algorithm selects GF = x1 for the root node of the decomposition tree. The invariant functions for this goal function were given in Fig. 9. For the next level, the goal functions x4, x2, and x2 are chosen for hp, h-, and h+, respectively. The full decomposition is given in Fig. 12, and the cost of the resulting circuit is 15.

VI. TEST RESULTS

Algorithms 1 and 2 were implemented in the C language and applied to a number of test functions. The results are reported in Table I. Several 7- to 11-variable functions were selected from the benchmark set provided in the 1989 MCNC International Workshop on Logic Synthesis [21]. Since these are multiple-output functions, one output was chosen for each test, and the resulting function was appropriately named. For example, z4ml2 is output number 2 of the function z4ml. The table gives the cost and running time in seconds on a SUN 3/60 workstation for the two algorithms. It also gives the corresponding numbers as obtained by the MIS-II synthesis procedure [11]. This is a multilevel combinational logic minimization program developed at the University of California, Berkeley, based on the factoring method (UC Berkeley, MIS release #2.0). The MIS package is capable of dealing with multi-output functions; the numbers given in the table were obtained by using the program to compute only the output indicated.

As expected, the running time for FullSearch is very high. On the other hand, procedure MinCount is very fast, and it yields results that in most cases are only slightly

higher than those obtained with FullSearch. Comparison with MIS-II shows that the circuit costs obtained by goal-oriented synthesis are comparable, but the synthesis is completed in a much shorter time. In this respect, it should be noted that the current implementation of GOM can only handle single-output functions.

For functions larger than those given in Table I, the current implementation of GOM required long computing times or did not find low-cost realizations. Marginal success has been achieved with some 14-variable functions using search algorithms that have a slightly higher complexity than MinCount.

Discussion

The first-order Hadamard spectral coefficients, HSC(xi), for the functions in Table I were examined. It was observed that for functions that have many significant spectral coefficients, GOM did not perform well. These functions are close to being random. On the other hand, when a few coefficients are significantly higher than the rest, GOM led to good results. These are functions that have a well-pronounced structure. This observation is supported by the last column in Table I, which gives the ratio of the standard deviation, σ, to the average, x̄, of HSC(xi) over all input variables. Goal-oriented synthesis performs well for large values of σ/x̄.
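The σ/x̄ figure of merit can be computed directly from the first-order coefficients. The paper does not state whether the population or sample deviation is used, or whether the average is taken over signed or absolute values, so the population deviation over the raw values is assumed here:

```python
import statistics

def hsc_dispersion(hsc_values):
    """Ratio of the standard deviation to the average of the HSC values
    over all input variables; large values suggest pronounced structure.
    Assumes a nonzero average (population standard deviation used)."""
    return statistics.pstdev(hsc_values) / statistics.fmean(hsc_values)

# For the Fig. 9 example, HSC = (2, -2, 6, -2) gives a ratio of about 3.3
# under these assumptions.
```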

The difficulties encountered with large functions are believed to be, at least in part, due to the fact that the current formulation is minterm-based. As a result, the computational effort and memory requirements become excessive. The concept of parallel compositions allows groups of mappings to be handled simultaneously. Hence, it should be possible to develop alternate formulations that deal directly with larger cubes. For example, decompositions may be sought in which nodes in the tree of Fig. 8 are grouped into subtrees that can be treated as a unit. This is currently being investigated.

Examination of the decompositions produced by GOM revealed that common sub-functions may exist at different nodes. Such sub-functions may be shared to reduce the size of the circuit implementation, with a slight increase in internal fan-out. Hence, it may be beneficial to incorporate GOM inside other synthesis packages, such as MIS-II, that are capable of performing factoring operations. In one experiment, the output of GOM for a few of the functions in Table I was fed directly to MIS for further reduction. The results are given in Table II. As expected, MIS was able to find common sub-functions and reduce the cost of the implementation. However, a more appropriate way to combine the two schemes would be to use GOM as a means for obtaining suitable decompositions for some of the sub-functions generated internally by MIS. Such approaches require further study.

TABLE II
RESULTS OF SEARCH FOR COMMON SUB-FUNCTIONS

                No. of       GOM         MIS-II     GOM+MIS-II
Function     Variables   Cost  Time   Cost  Time   Cost  Time
z4ml-1               7     56     1     26     9     26     7
clip-1               9    130     1     82   102    108    24
cm85a-1             11     92     2     29     8     31    14

VII. CONCLUSIONS

A new method has been presented for obtaining multilevel realizations for combinational functions. It is based on a decomposition procedure that uses a cascade of mappings to transform the function being synthesized into a predetermined goal function. The selection of suitable goal functions is, in general, a computationally intensive task. However, a simple, computationally efficient procedure has been shown to yield circuits of comparable size to those obtained by other multilevel synthesis techniques. The resulting circuits have limited internal fan-out. Hence, they will have lower delay and should be easier to test. Goal-oriented synthesis can take advantage of the availability of different gate types, such as XOR.

REFERENCES

[1] R. K. Brayton, G. D. Hachtel, C. T. McMullen, and A. L. Sangiovanni-Vincentelli, Logic Minimization Algorithms for VLSI Synthesis. Norwell, MA: Kluwer Academic, 1984.
[2] D. L. Dietmeyer and Y. H. Su, "Logic design automation of fan-in limited NAND networks," IEEE Trans. Comput., vol. C-18, pp. 11-22, Jan. 1969.
[3] H.-P. Lee and E. S. Davidson, "A transform for NAND network design," IEEE Trans. Comput., vol. C-21, pp. 12-20, Jan. 1972.
[4] C. R. Edwards and S. L. Hurst, "A digital synthesis procedure under function symmetries and mapping methods," IEEE Trans. Comput., vol. C-27, pp. 985-997, 1978.
[5] H. J. Mathony and U. G. Baitinger, "CARLOS: An automated multilevel logic design system for CMOS semi-custom integrated circuits," IEEE Trans. Computer-Aided Design, vol. 7, pp. 346-355, Mar. 1988.
[6] K. Bartlett, R. K. Brayton, G. D. Hachtel, R. M. Jacoby, C. R. Morrison, R. L. Rudell, A. Sangiovanni-Vincentelli, and A. R. Wang, "Multilevel minimization using implicit don't cares," IEEE Trans. Computer-Aided Design, vol. 7, pp. 723-740, June 1988.
[7] R. K. Brayton and C. T. McMullen, "The decomposition and factorization of Boolean expressions," in Proc. Int. Symp. Circuits Syst. (Rome), pp. 49-54, 1982.
[8] J. A. Darringer, D. Brand, J. V. Gerbi, W. H. Joyner, and L. Trevillyan, "LSS: A system for production logic synthesis," IBM J. Res. Dev., vol. 28, pp. 537-545, 1984.
[9] T. Saito, H. Sugimoto, M. Yamazaki, and N. Kawato, "A rule-based logic circuit synthesis system for CMOS gate arrays," in Proc. 23rd Design Automation Conf., pp. 594-600, 1986.
[10] K. Bartlett, R. K. Brayton, G. D. Hachtel, R. M. Jacoby, M. R. Lightner, P. H. Moceyunas, C. R. Morrison, and D. Ravenscroft, "BOLD: A multiple-level logic optimization system," in Proc. Int. Conf. Computer-Aided Design, 1987.
[11] R. K. Brayton, R. Rudell, A. Sangiovanni-Vincentelli, and A. R. Wang, "MIS: A multiple-level logic optimization system," IEEE Trans. Computer-Aided Design, vol. CAD-6, Nov. 1987.
[12] M.-W. Du, "Variable transformation: A new approach to synthesizing combinational switching circuits," in Proc. Workshop on Microelectronics & Information Syst. (Hsinchu, Taiwan), pp. 441-460, 1986.
[13] C. R. Edwards and S. L. Hurst, "Preliminary considerations of the combinatorial and sequential digital systems under symmetry methods," Int. J. Electron., vol. 40, pp. 499-507, 1976.
[14] D. M. Miller and J. C. Muzio, "Detection of symmetries in totally or partially specified combinational functions," IEE J. Comput. Digit. Tech., vol. 2, pp. 203-209, 1979.
[15] J. C. Muzio, D. M. Miller, and S. L. Hurst, "Multivariable symmetries and their detection," in Inst. Elect. Eng. Proc., vol. 130, Sept. 1983.
[16] S. L. Hurst, D. M. Miller, and J. C. Muzio, Spectral Techniques in Digital Logic. San Diego, CA: Academic Press, 1985.
[17] L. Diaz-Olavarrieta and S. G. Zaky, "Goal-oriented synthesis of switching functions," in Proc. Int. Symp. Circuits Syst. (Helsinki), pp. 1855-1859, 1988.
[18] L. Diaz-Olavarrieta, "Mapping-based synthesis using the goal-oriented method," Ph.D. dissertation, Dept. Elec. Eng., Univ. Toronto, Canada, 1988.
[19] T. K. Uehara and W. M. VanCleemput, "Optimal layout of CMOS functional arrays," IEEE Trans. Comput., vol. C-30, pp. 305-312, May 1981.
[20] N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective. Reading, MA: Addison-Wesley, 1985.
[21] R. Lisanke, "Logic synthesis and optimization benchmarks, user guide, version 2.0," Microelectronics Center of North Carolina, Research Triangle Park, NC, Tech. Report, Dec. 1988.

Liliana Diaz-Olavarrieta received the B.Sc. degree in electronics and communications from UDLA, Mexico, the M.Sc. degree from INAOE, Mexico, and the Ph.D. degree from the University of Toronto, Canada, both in electrical engineering.

Her main areas of interest are the design and test of VLSI circuits and the study of noise sources such as crosstalk, ground bounce, and external electromagnetic coupling. She is presently a member of the scientific staff at Bell Northern Research in Ottawa, Ont., Canada, where she works in the area of electromagnetic noise control in VLSI circuits. Prior to joining BNR, she was an assistant professor at NJIT, Newark, NJ.

K. Illanko received the B.Sc. degree, with Honors, in electrical and electronic engineering from the University of Peradeniya, Sri Lanka, in 1985.

From 1986 to 1987 he served as an Assistant Lecturer in the Department of Electrical and Electronic Engineering at the University of Peradeniya. He is now working toward the M.A.Sc. degree in electrical engineering at the University of Toronto, Canada, on a Commonwealth Scholarship.


Safwat G. Zaky (S'68-M'69) received the B.Sc. degree in electrical engineering and the B.Sc. degree in mathematics, both from Cairo University, Egypt, and the M.A.Sc. and Ph.D. degrees in electrical engineering from the University of Toronto, Canada.

He is a Professor in the Departments of Electrical Engineering and Computer Science, University of Toronto. Prior to joining the University of Toronto, he was with Bell Northern Research, Bramalea, Ont., Canada, where he worked on applications of electro-optics and magnetics in mass storage and telephone switching. He was a senior visitor with the Computer Laboratory, University of Cambridge, England, in 1980-81. His current research interests are in the areas of computer architecture, logic synthesis, and electromagnetic compatibility of digital systems. He has coauthored two books on computer organization and microprocessor structures.

Dr. Zaky is a member of the Association of Professional Engineers of Ontario.