
Selected best papers from ETS’06

Improving high-level and gate-level testing with FATE: A functional automatic test pattern generator traversing unstabilised extended FSM

G. Di Guglielmo, F. Fummi, C. Marconcini and G. Pravadelli

Abstract: A functional automatic test pattern generator (ATPG) that explores the design under test (DUT) state space by exploiting an easy-to-traverse extended finite state machine (FSM) model is described. The ATPG engine relies on learning, backjumping and constraint logic programming to deterministically generate test vectors that traverse all transitions of the extended FSM, thus improving the testing of hard-to-detect faults. The generated test sequences are very effective in detecting both high-level faults and gate-level stuck-at faults. Hence, reusing the test sequences generated by the proposed ATPG also improves the stuck-at fault coverage and reduces the execution time of commercial gate-level ATPGs.

© The Institution of Engineering and Technology 2007
doi:10.1049/iet-cdt:20060139
Paper first received 31st August 2006 and in revised form 22nd February 2007
The authors are with the Dipartimento di Informatica, Università di Verona, Strada Le Grazie 15, 37134 Verona, Italy
E-mail: [email protected]
IET Comput. Digit. Tech., 2007, 1, (3), pp. 187–196

1 Introduction

Many economic and practical reasons have led designers to start automatic test pattern generation at high abstraction levels [1, 2]. In this way, the verification of complex systems becomes more tractable, and design errors can be identified and removed early, saving time and money. Thus, many high-level automatic test pattern generators (ATPGs) have been proposed to generate effective test sequences [3–9]. On the other hand, gate-level ATPGs represent the state of the art for digital system testing [10–13]. However, they pay for their good fault coverage in terms of time and required resources. In this paper, we propose a high-level ATPG framework

which is fast, since it relies on simulation, but also very effective in covering corner cases, since it uniformly explores the state space of the design under test (DUT). The proposed ATPG is primarily intended for functional verification, to detect design errors early. However, it can be exploited to improve gate-level testing too. In fact, the test sequences generated by the proposed ATPG prove very effective in covering gate-level stuck-at faults. Experimental results show that the stuck-at coverage achieved by reusing high-level test sequences is comparable with the stuck-at coverage achieved by a state-of-the-art commercial gate-level ATPG. Moreover, in a hierarchical testing context, the commercial ATPG benefits from simulating functional test sequences before applying its generation engines, since this increases fault coverage and decreases execution time. The development of a functional ATPG requires dealing

with four basic aspects: (a) the formalism used to model the DUT (e.g. FSM [7], assignment decision diagram [4, 8], BDD [3], etc.); (b) the algorithm used to decide how to move from one state to another during DUT state exploration (e.g. genetic algorithms [14], SAT solving [8], constraint logic programming [9], linear programming [8], etc.); (c) the strategy to deterministically reach particular states of the DUT representing corner cases (e.g. learning [15], justification [7], backtracking [4], backjumping [16], etc.); (d) the metrics to evaluate the quality of the generated test sequences (e.g. transition coverage [17], path coverage [18], statement coverage [18], fault coverage [3], etc.).

In this context, the paper presents the functional ATPG FATE, which addresses the previous aspects as follows (Fig. 1):

(a) The extended FSM (EFSM) paradigm is used to model the DUT. In particular, FATE works on a special kind of EFSM whose transitions present a quite uniformly distributed probability of being deterministically traversed [19].
(b) A constraint logic programming (CLP) based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected to be traversed.
(c) A two-step ATPG engine is implemented which exploits CLP to traverse the DUT state space: first, a random-walk-based approach is used to cover the majority of easy-to-traverse (ETT) transitions; then a backjumping-based mode is used to activate hard-to-traverse (HTT) transitions. In both modes, learning is exploited to gather critical information that improves the performance of the ATPG.
(d) Transition coverage is used to assess the quality of our ATPG, since 100% transition coverage is a necessary condition for fault coverage and for more accurate coverage metrics. We have also adopted the high-level bit coverage fault model [20] to measure the functional coverage of the generated test sequences, which have additionally been simulated at gate level on stuck-at faults to check the fault coverage on logic-level implementations.

To summarise, the main contribution of the paper consists of proposing a functional ATPG targeted to:

† generate functional tests for multiprocess DUTs;
† exploit the EFSM model to guide a CLP-based solver during the DUT traversal;
† introduce a backjumping technique to move through hard-to-traverse transitions while avoiding the state explosion problem;
† improve gate-level testing by reusing functional test sequences to increase gate-level fault coverage and reduce execution time.

2 Related work

The EFSM model has been selected since it allows a more compact representation of the state space than a traditional FSM; thus, the risk of state explosion is considerably reduced. Moreover, the EFSM model combines the valuable characteristics of the three main formalisms proposed in the literature [21]: (1) state-oriented, particularly suited to modelling control systems; (2) activity-oriented, targeted at data-dominated systems; (3) structure-oriented, preferable for modelling the DUT as an interconnection of basic components. However, few papers in the literature propose ATPGs based on the EFSM model. The reason that limits the use of EFSMs in the ATPG context is that traversing an EFSM is more difficult than traversing an FSM. In fact, moving between two states may require satisfying a condition that depends not only on primary inputs (PIs) but also on internal registers. The presence of conditions involving registers in the guards of transitions means that existing approaches, developed for traversing FSMs, cannot be easily adapted to EFSMs.

In [22, 23] different strategies are proposed to remove transitions whose guards involve conditions on registers (known as inconsistent transitions) in order to reuse FSM-targeted ATPGs. However, the removal of inconsistencies can lead to state space explosion if the DUT description contains a large number of conditions. A different approach is proposed in [17], where the authors present an orthogonalisation process to extract an EFSM model from a hardware description language (HDL) description. Then, a stabilisation process is presented to improve the traversal of the EFSM. Finally, a breadth-first search is used to generate a set of test patterns which covers all the transitions of the stabilised EFSM. The main limitations of this approach are the complexity of the orthogonalisation and stabilisation processes, which may lead to state explosion. Moreover, the breadth-first-search-based approach is less efficient than strategies based on learning and backjumping. In fact, these methods improve the performance of the ATPG by avoiding the starvation of DUT exploration in areas of the state space very far from the desired target. In particular, backjumping is a special kind of backtracking strategy, also known as non-chronological backtracking, which rolls back from an unsuccessful situation directly to the cause of the failure. Thus, it is more efficient than backtracking. In fact, the basic

Fig. 1 FATE flow


backtracking algorithm rolls back to the most recent decision point before proceeding in a different search direction [24]. However, there is no guarantee that the most recent decision point is the source of the failure. Thus, backtracking, unlike backjumping, may require many rollbacks before solving the conflict. Many papers propose the use of backtracking at gate level to

search for a path that propagates a target value to a target net. On the contrary, to the best of our knowledge, only the MIX-PLUS ATPG [25] exploits backjumping for gate-level test generation. However, the approach of MIX-PLUS differs from the one proposed in this work for the following reasons:

† MIX-PLUS uses backjumping at gate level, while FATE uses it at functional level.
† MIX-PLUS needs to generate and dynamically maintain an implication graph in order to use backjumping. However, the size of such a graph can increase exponentially for complex sequential circuits where justification is applied over several time frames. On the contrary, FATE statically generates a list that reports which transitions of the EFSM update the value of each register. The size of such a list is fixed and is at most R × T, where R is the total number of registers and T is the total number of transitions of the EFSM (in the extreme case where every register is updated in every transition).

The use of the implication graph is mandatory at gate level, since the gate-level netlist does not include the information needed to bind a conflict point directly to its cause. Such a problem is solved at functional level by FATE. In fact, the adoption of the EFSM model, combined with an accurate learning phase, allows FATE to: (1) deterministically backjump to the source of the failure when a transition, whose guard depends on a previously set register, cannot be traversed, (2) modify the EFSM configuration to satisfy the condition on the register, and then (3) successfully come back to the target state to traverse the transition. In this way, FATE can efficiently traverse EFSMs without requiring stabilisation.

3 Computational model

We represent a digital system as a set of concurrent EFSMs, one for each process of the DUT. In this way, according to Definition 1 below, we capture the main characteristics of state-oriented, activity-oriented and structure-oriented models [21]. In fact, the EFSM is composed of states and transitions, thus it is state-oriented, but each transition is extended with HDL instructions that act on the DUT registers. In this sense, each transition represents a set of activities on data; thus, the EFSM is a data-oriented model too. Finally, concurrency is intended as the possibility that each EFSM of the same DUT changes its state concurrently with the other EFSMs, to reflect the concurrent execution of the corresponding processes. Data communication between concurrent EFSMs is guaranteed by the presence of common signals. In this way, structured models can be represented.

Definition 1: An EFSM is defined as a five-tuple $M = \langle S, I, O, D, T\rangle$ where: $S$ is a set of states, $I$ is a set of input symbols, $O$ is a set of output symbols, $D$ is an $n$-dimensional linear space $D_1 \times \cdots \times D_n$, and $T$ is a transition relation such that $T: S \times D \times I \to S \times D \times O$. A generic point in $D$ is described by an $n$-tuple $x = (x_1, \ldots, x_n)$; it models the values of the registers of the DUT. A pair $\langle s, x\rangle \in S \times D$ is called a configuration of $M$.


An operation on $M$ is defined in this way: if $M$ is in a configuration $\langle s, x\rangle$ and it receives an input $i \in I$, it moves to the configuration $\langle t, y\rangle$ iff $((s, x, i), (t, y, o)) \in T$ for some $o \in O$. The EFSM differs from the classical FSM, since each transition does not carry only a label in the classical form $(i)/(o)$, but takes care of the register values too. Transitions are labelled with an enabling function $e$ and an update function $u$ defined as follows.

Definition 2: Given an EFSM $M = \langle S, I, O, D, T\rangle$, states $s, t \in S$, $i \in I$, $o \in O$ and the sets $X = \{x \mid ((s, x, i), (t, y, o)) \in T \text{ for } y \in D\}$ and $Y = \{y \mid ((s, x, i), (t, y, o)) \in T \text{ for } x \in X\}$, the enabling and update functions are defined respectively as

$$e(x, i) = \begin{cases} 1 & \text{if } x \in X \\ 0 & \text{otherwise} \end{cases}$$

$$u(x, i) = \begin{cases} (y, o) & \text{if } e(x, i) = 1 \text{ and } ((s, x, i), (t, y, o)) \in T \\ \text{undef.} & \text{otherwise} \end{cases}$$

An update function $u(x, i)$ can be applied to a configuration $\langle s_1, x\rangle$ if there is a transition $t: s_1 \to s_2$, labelled $e/u$, such that $e(x, i) = 1$. In this case, we say that $t$ can be fired by applying the input $i$. Fig. 2 reports an example of two equivalent EFSMs that can be extracted from the corresponding HDL code.
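As an illustration of Definitions 1 and 2, the following minimal sketch (not the authors' implementation; all names are illustrative) models an EFSM whose transitions carry enabling and update functions over a register valuation, and fires a transition only when its enabling function holds:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Regs = Dict[str, int]    # a point x in D: the current register values
Inputs = Dict[str, int]  # an input symbol i in I

@dataclass
class Transition:
    src: str                                  # source state s
    dst: str                                  # destination state t
    enabling: Callable[[Regs, Inputs], bool]  # e(x, i)
    update: Callable[[Regs, Inputs], Regs]    # u(x, i): returns the new register values y

@dataclass
class EFSM:
    transitions: List[Transition]
    state: str    # current state s
    regs: Regs    # current register values x; <state, regs> is the configuration

    def fire(self, t: Transition, inputs: Inputs) -> bool:
        """Fire transition t with the given inputs iff its enabling function holds."""
        if self.state != t.src or not t.enabling(self.regs, inputs):
            return False
        self.regs = t.update(self.regs, inputs)
        self.state = t.dst
        return True

# Toy example: t1 is guarded only by PIs, t4 is guarded by a register (cf. Section 4).
t1 = Transition('A', 'B',
                enabling=lambda x, i: i['reset'] == 0 and i['in1'] != 0,
                update=lambda x, i: {**x, 'reg': i['in1']})
t4 = Transition('B', 'C',
                enabling=lambda x, i: x['reg'] == 1,      # register-dependent guard
                update=lambda x, i: dict(x))

m = EFSM([t1, t4], state='A', regs={'reg': 0})
assert m.fire(t1, {'reset': 0, 'in1': 1})   # sets reg = 1, so t4 becomes fireable
assert m.fire(t4, {'reset': 0, 'in1': 0})
```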

4 EFSM manipulation

Many EFSMs can be generated starting from the same HDL description of a DUT. However, despite their functional equivalence, some of them are easier to traverse than others. For example, let us consider the functional description of Fig. 2a, where in1 and in2 are PIs, out1 and out2 are primary outputs (POs), and reg is an internal register. Figs. 2b and c show the state transition graphs (STGs) of two EFSMs, respectively M and M′, which can be extracted from the code of Fig. 2a. They are functionally equivalent, but the probability of exploring the whole state space by navigating M′ is more uniformly distributed than by navigating M. Let us consider block B11 in Fig. 2a. What is the probability that a pseudo-deterministic ATPG generates a test sequence that allows us to enter B11 by traversing the EFSM of Fig. 2b? It is the probability of traversing t3 multiplied by the probability of traversing t5 while assigning 0 to the input in1. However, in1 is not involved in the enabling function


of t5; thus the ATPG drives in1 randomly. If in1 is a 32-bit integer, the probability of assigning 0 to in1 is $1/2^{32}$. Thus, it is very unlikely that B11 can be reached by pseudo-deterministically traversing this EFSM. On the contrary, the probability that the same ATPG generates a test sequence to enter block B11 by traversing the EFSM of Fig. 2c is the probability of traversing $t_2^S$ multiplied by the probability of traversing $t_2^S$. In this case, all the information required by the ATPG is available in the enabling functions; thus, it can deterministically assign the appropriate values to the PIs.

Thus, in [19], a set of theoretically founded automatic transformations has been proposed to generate a particular kind of EFSM that allows an ATPG to easily explore the state space of the corresponding DUT. Such an EFSM is called semi-stabilised smallest EFSM (S2EFSM), since it is composed of few states and it is partially stabilised to remove inconsistencies by following the algorithm proposed in [17].

As shown in [19], the procedure for generating an S2EFSM is linear with respect to the number of conditional statements included in the HDL description of the DUT, and the risk of state explosion is drastically reduced with respect to the generation of FSMs. In fact, the states of the DUT internal registers do not need to be explicitly enumerated in an S2EFSM, as happens, instead, for traditional FSMs. On the contrary, in an S2EFSM, the states of internal registers are implicitly modelled by using the update functions. The same consideration applies to S2EFSM configurations: the ATPG does not require the explicit enumeration of configurations. Thus, the S2EFSM paradigm can be used to model any RTL DUT that can be represented by using hardware description languages like VHDL or SystemC. The S2EFSM presents the following characteristics:

† It is functionally and timing equivalent to the HDL description from which it is extracted, i.e. given an input sequence, the HDL description and the corresponding S2EFSM provide the same result at the same time.
† The update functions contain only assignment statements. This implies that the information needed by a deterministic ATPG to traverse the DUT state space resides in the enabling functions of the S2EFSM. It has been theoretically shown that the probability of traversing transitions of the S2EFSM is more uniformly distributed than in an EFSM whose update functions contain conditional statements [19].
† The S2EFSM is partially stabilised to reduce the state explosion problem that may arise when stabilisation is performed to remove inconsistent transitions. Only transitions whose probability of being traversed is very low are stabilised.

Fig. 2
a Simple HDL description of a DUT
b EFSM which models the HDL code
c Another EFSM for the same HDL code


The main problem of an S2EFSM with respect to a completely stabilised EFSM is the presence of register-dependent hard-to-traverse transitions, whose enabling functions have a low probability of being satisfied without using backtracking or learning-based techniques. These transitions are due to the presence of registers in the conditions of the enabling functions.

For example, let us consider transition t4 of Fig. 3. To traverse such a transition, an ATPG, after reset occurs, must move from A to B on t1. If in1 is a 32-bit integer, the ATPG traverses t1 by fixing reset at 0 and by choosing among the $2^{32}-1$ values, different from 0, for in1. Then, the ATPG can generate $2^{32}-1$ different admissible configurations when it moves on t1. However, only the configuration where reg = 1 (obtainable by fixing in1 at 1) is valid to traverse t4.

Thus, the probability of firing t4 is extremely low for ATPGs which implement a heuristic that exploits only information local to the current configuration, without backtracking. Hence, in the next section, we propose an ATPG engine to traverse S2EFSMs which deterministically activates HTT transitions by exploiting learning and backjumping. Note that the proposed ATPG also works on completely unstabilised EFSMs.

5 ATPG engine

The proposed ATPG works on the set of concurrent S2EFSMs representing the DUT in a three-step fashion. First, an off-line learning phase is performed on the S2EFSMs to collect information about the location of registers within enabling and update functions. Then, in the second phase, a pseudo-deterministic test pattern generation approach is adopted to uniformly traverse easy-to-traverse transitions. During this phase, information on state and transition reachability is also learned. Finally, in the third phase, the ATPG exploits the information collected in the previous steps, by means of a backjumping-based approach, to traverse the transitions that have not been traversed yet.

In the last two phases, the ATPG exploits the information provided by the enabling functions of the S2EFSMs to uniformly move across the transitions of each S2EFSM of the DUT. In this way, the capability of traversing HTT transitions is increased. On the contrary, a random ATPG tends to traverse only transitions whose enabling function presents a high probability of being satisfied by assigning random values to PIs.

Fig. 3 EFSM with register-dependent hard-to-traverse transitions


5.1 Phase 1: Learning

The set of S2EFSMs representing the DUT is generated according to the approach described in [19], starting from a functional description of the DUT. During the S2EFSM generation, we collect, for each register reg, the following information:

† the set of transitions $T_e^{reg}$ whose enabling functions involve reg;
† the set of transitions $T_u^{reg}$ whose update functions update reg;
† the set of registers, and the corresponding locations in enabling and update functions, which behave as counters.
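For illustration only, a sketch of this static learning step is given below; it assumes each transition exposes the registers read by its enabling function and the assignments of its update function (these field names are hypothetical, not FATE's actual data structures):

```python
from collections import defaultdict

def learn_register_maps(transitions):
    """Build T_e^reg, T_u^reg and the set of counter registers in a single pass."""
    T_e = defaultdict(set)   # reg -> transitions whose enabling function involves reg
    T_u = defaultdict(set)   # reg -> transitions whose update function assigns reg
    counters = set()         # registers assigned from an expression that also reads them

    for t in transitions:
        for reg in t.guard_regs:                       # registers read in the enabling function
            T_e[reg].add(t)
        for reg, rhs_regs in t.assignments.items():    # {assigned reg: regs read on the RHS}
            T_u[reg].add(t)
            if reg in rhs_regs:                        # e.g. reg := reg + c, i.e. counter behaviour
                counters.add(reg)
    return T_e, T_u, counters
```

In the worst case every register is updated by every transition, so the total size of these lists is bounded by R × T, the static bound mentioned in Section 2.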

The previous information is useful to deterministically traverse register-dependent HTT transitions by exploiting backjumping, as described in Section 5.3. Moreover, a function eval_func is generated for each S2EFSM of the DUT. The eval_func allows the ATPG to evaluate at run-time the enabling functions of the transitions to be traversed. Each function consists of a single case statement with one alternative for each transition of the corresponding S2EFSM. When the function is invoked by the ATPG, the alternative related to the transition to be traversed executes as follows:

1. At each simulation cycle, the simulation state is frozen and the values of the DUT internal registers are retrieved. Then, the evaluation of the enabling function of the transition to be traversed starts.
2. Some conditions of the enabling function can be evaluated without invoking a constraint solver, i.e. those which involve only internal registers and constants. If such conditions are not satisfied, the eval_func returns false. Thus, the transition cannot be traversed, and the ATPG must either choose a different transition, if it is running in random-walk mode (Section 5.2), or remove the cause of unsatisfiability, if it is running in backjumping mode (Section 5.3). When the conditions on registers are satisfied, a constraint solver is called on the conditions which involve PIs.
3. If the constraint solver provides a solution for the conditions on PIs, then the values returned by the solver are collected to compose the test vector. If the constraint solver fails, false is returned as in step 2.
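The following sketch mirrors the behaviour described for one alternative of eval_func: conditions on registers and constants are checked directly on the frozen simulation state, and only the conditions on PIs are delegated to the constraint solver (solve_pi_constraints is a placeholder for the CLP solver, and the transition fields are assumptions):

```python
def eval_func(transition, frozen_regs, solve_pi_constraints):
    """Evaluate the enabling function of `transition` on the frozen simulation state.

    Returns (True, pi_assignment) if the transition can be traversed,
    (False, None) otherwise.
    """
    # Step 2: conditions involving only internal registers and constants need no solver.
    if not all(cond(frozen_regs) for cond in transition.register_conditions):
        return False, None
    # Step 3: conditions on primary inputs are handed to the constraint solver.
    pi_assignment = solve_pi_constraints(transition.pi_conditions, frozen_regs)
    if pi_assignment is None:
        return False, None
    return True, pi_assignment
```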

5.2 Phase 2: Random walk

During the random walk phase, the ATPG randomly walks across the transitions of the S2EFSMs representing the DUT. Thus, ETT transitions are very likely to be traversed. Starting from a reset condition, the ATPG randomly selects a transition from each S2EFSM according to the scheduling policy described in [26]. Then, it tries to satisfy the enabling function of each selected transition by exploiting the constraint solver invoked by the eval_func previously described. When it succeeds, the values for the PIs provided by the constraint solver are used to generate a test vector. Finally, a simulation cycle is performed, by using the generated test vector, to update the internal registers of the DUT and to move to the destination state. Then, another transition is selected and the cycle repeats. More formally, given the set $T_{s_i}$ of transitions outgoing from a state $s_i$, at step $i$, the ATPG works as follows:

1. Order the S2EFSMs according to the scheduling policyreported in [26].


2. For each S2EFSM:

(a) Randomly choose a transition $t_{s_i} \in T_{s_i}$.
(b) Call the eval_func described in Section 5.1 to check whether the enabling function $e$ of $t_{s_i}$ is satisfiable, i.e. whether it can be traversed by assigning appropriate values to the inputs involved in $e$. For example, let x be an input of the S2EFSM: the enabling function x = 0 can be activated by assigning 0 to x. On the contrary, if x is an internal register, the satisfiability of the enabling function depends on the previous assignment to x, i.e. on the current configuration of the S2EFSM. Note that, for DUTs composed of more than one S2EFSM, the transitions selected at step (a) may require conflicting values to be assigned to the same PIs in order to be concurrently traversed. In this case, according to the scheduling policy, the S2EFSM with the highest priority wins, while the others must treat such PIs as internal registers whose value cannot be changed.
(c) Assign to the PIs involved in $e$ the values provided by the constraint solver, if $e$ is satisfiable. Otherwise, remove $t_{s_i}$ from $T_{s_i}$ and come back to step 2.
3. Generate random values for the PIs not involved in the enabling functions of the selected transitions. To accomplish this task, a random engine is invoked.
4. Invoke the simulation engine to simulate the test vector so obtained, move across the selected transitions, and come back to step 2 to generate the next test vector.

Each time a test vector is generated, the traversed transition is labelled with the pair ⟨test sequence number, test vector number⟩. A list of such pairs, of parametric length, is saved for each transition. In this way, the backjumping mode can exploit these lists to quickly recover the prefix of a test sequence which allows the ATPG to move from the reset state to an already visited target state. The pseudo-code of the random walk is reported in Fig. 4.

Fig. 4 Pseudo-code of the random walk phase
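Since Fig. 4 is not reproduced here, the following condensed sketch of the random-walk loop follows the steps listed above; the scheduling of multiple S2EFSMs is omitted, and the helper callbacks (eval_func, simulate) and attributes are illustrative assumptions rather than FATE's actual interfaces:

```python
import random

def complete_with_random_values(pi_values, primary_inputs, width=32):
    """Assign random values to the PIs not constrained by the enabling function."""
    vector = dict(pi_values)
    for pi in primary_inputs:
        vector.setdefault(pi, random.getrandbits(width))
    return vector

def random_walk(efsm, eval_func, solver, simulate, seq_no, max_vectors):
    """Randomly walk one S2EFSM, labelling each traversed transition with the
    pair <test sequence number, test vector number> for later backjumping."""
    test_sequence = []
    for vec_no in range(max_vectors):
        candidates = [t for t in efsm.transitions if t.src == efsm.state]
        random.shuffle(candidates)
        for t in candidates:
            ok, pi_values = eval_func(t, efsm.regs, solver)
            if ok:
                vector = complete_with_random_values(pi_values, efsm.primary_inputs)
                simulate(efsm, vector)              # updates registers, moves to t.dst
                t.labels.append((seq_no, vec_no))   # prefix information reused in phase 3
                test_sequence.append(vector)
                break
        else:
            break   # no outgoing transition can be fired from this configuration
    return test_sequence
```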


5.3 Phase 3: Backjumping

The ATPG automatically switches to the backjumping mode when the computation time assigned to the random walk expires or no coverage improvement has been obtained for a long time. Then, the ATPG works as follows:

1. Collect the not yet traversed transitions in an ordered list. Untraversed transitions outgoing from a state already visited during the random walk phase are inserted at the beginning of the list. Such transitions should be more easily traversable than transitions outgoing from states never reached (note that, if the list is not empty, there is always at least one transition outgoing from an already visited state).
2. Pick a transition t from the beginning of the list. Does its enabling function involve only conditions on PIs?
† If yes, retrieve the pair ⟨test sequence number, test vector number⟩ corresponding to the source state $S_t$ of t. Then, load and simulate such a test sequence s up to the vector tv indicated in the pair. In this way, the DUT moves from the reset state to $S_t$. Finally, invoke the constraint solver to generate the PI values to traverse t, simulate the new test vector so obtained, and go to step 8.
† If no, the enabling function of t involves conditions on registers. It could be the case that the enabling function is satisfiable by the register configuration generated by simulating s (this may happen if tv is the last vector of s). In this case, generate values for the PIs involved in the enabling function by means of the constraint solver, simulate the test vector so obtained, and go to step 8. Otherwise go to step 3.
3. For the sake of clarity, let us suppose that the enabling function of t is expressed in conjunctive normal form (CNF), and that its unsatisfiability depends on clauses involving a single register reg (if the unsatisfiability of t depends on more than one register, the backjumping procedure is repeated for each of them). Then, extract an already visited transition $t_u$ from the set of transitions $T_u^{reg}$ whose update functions update reg. If all transitions in $T_u^{reg}$ have never been visited, then freeze the situation of t, move to step 2 to solve the reachability problem of the transitions in $T_u^{reg}$, and finally come back to the problem related to t.

4. Retrieve the pair ⟨test sequence number, test vector number⟩ corresponding to the source state $S_{t_u}$ of $t_u$, and accordingly load the test sequence to move from the reset state to $S_{t_u}$. Thus, the ATPG backjumps from $S_t$ to $S_{t_u}$.
5. Use Dijkstra's shortest path algorithm [27] to provide a path p from $S_{t_u}$ to $S_t$ starting with $t_u$, without worrying about the satisfiability of the enabling functions involved in the path (the satisfiability of the enabling functions of p is addressed in step 7). The weight assigned to each transition, which guides the search heuristic in a greedy fashion, is computed by considering the number of registers involved in the enabling function: the higher this number, the lower the weight assigned to the corresponding transition. This is motivated by the consideration that the hardness of satisfying an enabling function increases proportionally to the number of involved registers. Thus, a path whose transitions involve few conditions on registers is easier to traverse.
6. Satisfy the enabling function of $t_u$ according to the constraints derived from the enabling function of t as follows. Let us suppose that $e_{t_u}$ is the enabling function of $t_u$ and $e_t|_{reg}^{t_u}$ is the part of the enabling function of t which involves the clauses depending on reg, where each occurrence of reg has been substituted with the right-hand-side expression of the assignment that updates reg in the update function of $t_u$ (see, for example, Fig. 6). Invoking the constraint solver to satisfy the constraint $e_{t_u} \wedge e_t|_{reg}^{t_u}$ allows us to obtain a test vector which satisfies $e_{t_u}$ and sets the value of reg in such a way that, when simulation reaches transition t following p, its enabling function will be correctly activated. The last observation may be false if there is a transition $t'_u \neq t_u \neq t$ in p such that $t'_u$ updates reg after $t_u$ did. In this case, the ATPG moves the problem from $t_u$ to $t'_u$, requiring a solution for $e_{t'_u} \wedge e_t|_{reg}^{t'_u}$.
7. Satisfy the enabling functions of the transitions included in p by iteratively applying the constraint solver to generate the corresponding test vectors. The test sequence obtained by joining s, to move from the reset state to $S_{t_u}$, and the test vectors generated to traverse p allows the ATPG to traverse t.
8. Remove t from the list of untraversed transitions and come back to step 1 until either the list of untraversed transitions is empty, or computation time expires.
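A condensed sketch of steps 2–8 for a single S2EFSM and a single blocking register is shown below; the atpg object and its methods (blocking_register, load_prefix, dijkstra, conjoin, rewrite_guard_on_update, solve, simulate) are hypothetical stand-ins for the mechanisms described above, not the authors' code:

```python
def backjump_traverse(atpg, t):
    """Try to traverse the HTT transition t by backjumping to a transition
    whose update function writes the register blocking t's enabling function."""
    reg = atpg.blocking_register(t)                   # register making e_t unsatisfiable
    for t_u in atpg.T_u[reg]:                         # candidate transitions updating reg
        if not atpg.labels[t_u]:
            continue                                  # t_u never visited: solve its reachability first (step 3)
        prefix = atpg.load_prefix(atpg.labels[t_u][0])            # reset -> S_{t_u} (step 4)
        path = atpg.dijkstra(src=t_u.src, dst=t.src, first=t_u)   # step 5, register-light guards preferred
        # Step 6: satisfy e_{t_u} conjoined with the clauses of e_t on reg, rewritten
        # over the right-hand side of t_u's assignment to reg.
        first_vec = atpg.solve(atpg.conjoin(t_u.guard,
                                            atpg.rewrite_guard_on_update(t, reg, t_u)))
        if first_vec is None:
            continue
        rest = [atpg.solve(tr.guard) for tr in path[1:]]          # step 7
        if all(v is not None for v in rest):
            atpg.simulate(prefix + [first_vec] + rest)            # reaches and traverses t
            return True                                           # step 8: t can be removed from the list
    return False
```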

The pseudo-code of the backjumping strategy is reported in Fig. 5.

Fig. 5 Pseudo-code of the backjumping strategy

Fig. 6 Backjumping strategy

The backjumping-based approach allows the ATPG to traverse transitions not traversed during the random walk, without requiring a complete stabilisation of the EFSM. However, the algorithm may fail when the register involved in the enabling function of the transition to be traversed behaves as a counter variable. For this reason, the basic backjumping mode has been extended as reported in the next paragraph.

5.3.1 Addressing counters: Let us consider the case of a DUT with a register reg implementing a counter, as reported in Fig. 7.

Fig. 7 Example of counter behaviour

To traverse the target transition t, the ATPG backjumps to the transition $t_u$ whose update function updates reg. Then, the path $p = t_u, p_t$ is provided by Dijkstra's algorithm. However, directly traversing p, after reaching $S_{t_u}$ from the reset state, would be useless for traversing t. In fact, the enabling function of t cannot be satisfied until the path $p' = t_u, p_c$ has been traversed at least five times (if reg is initially fixed at 0). The problem arising with counters could be avoided by stabilising the EFSM. However, the stabilisation of a transition involving a counter is the most likely way to incur the state explosion problem, as reported in [17]. Thus, we propose to extend the backjumping mode of the ATPG as follows. During the learning phase, all counter registers are statically identified, as well as the transitions whose update functions contain them. This is quite easy since, at functional level, a counter is detected each time an update function contains an assignment where reg appears on both the left and the right side. Then, at run-time, if transition $t_u$ has been labelled as 'counter inside', Dijkstra's algorithm is invoked twice: once to search for a path from $S_{t_u+1}$ to $S_t$, and once to provide a path from $S_{t_u}$ to $S_{t_u}$ including $t_u$. Then, the constraint solver is exploited to compute how many times $p'$ must be traversed before moving on p. Finally, steps 6 and 7 of the backjumping-based approach previously described are applied to generate test vectors that allow the ATPG to traverse p and $p'$. This approach improves on the strategy proposed in [17] to avoid the stabilisation of counter-dependent transitions. In fact, in [17], the authors restrict the problem to the case where the update function of $t_u$ includes only counters of the form reg := reg + c, where c is a constant. Moreover, in [17] the incrementing loop of the counter must be composed of a single transition going from $S_{t_u}$ to $S_{t_u}$. On the contrary, our approach, exploiting a constraint solver, allows us to solve more complex situations, e.g. a counter whose increment depends on several subsequent assignments distributed over a set of adjacent transitions.
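For the simplest counter of the form reg := reg + c discussed above, the number of times the incrementing loop p′ must be taken reduces to a small arithmetic check rather than a full solver call; the sketch below only illustrates this computation (the function name and signature are ours):

```python
def loop_iterations(reg_init, increment, target):
    """How many times must a loop executing reg := reg + increment be taken
    before a guard of the form reg == target becomes satisfiable?"""
    if increment == 0:
        return None            # reg never changes: not a counter in the intended sense
    k, remainder = divmod(target - reg_init, increment)
    if k < 0 or remainder != 0:
        return None            # the guard can never be satisfied by this loop alone
    return k

# Example matching the text: reg initialised to 0, incremented by 1, guard reg == 5.
assert loop_iterations(0, 1, 5) == 5
```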

5.4 Multi-process scheduling

A scheduling algorithm is required to sort the S2EFSMs of a multi-process DUT. In particular, two reasons induce us to introduce a scheduling algorithm. A first motivation is that the same PIs may be involved in the enabling functions of the transitions of two or more S2EFSMs. For example, let us consider a DUT composed of two S2EFSMs, M and N. Moreover, let us suppose that, at a certain simulation cycle, the ATPG selects the transition $t_M$ from M and the transition $t_N$ from N. The enabling functions of $t_M$ and $t_N$ can be compatible or conflicting. In the first case, there exists a value assignment to the PIs that satisfies the enabling functions of both $t_M$ and $t_N$. Thus, the PIs can be deterministically fixed to traverse both $t_M$ and $t_N$. In the second case, such an assignment does not exist. Thus, $t_M$ and $t_N$ cannot be traversed concurrently, and one of them must be discarded and substituted with a different transition to remove the conflict. In this case, the scheduling algorithm is used to decide the priority of each S2EFSM for fixing the PIs. When a conflict happens, the transitions to be substituted are selected starting from the S2EFSMs with the lowest priority.

A second reason that motivates the use of a scheduling algorithm is that not all S2EFSMs must be triggered at each simulation cycle. Consider, for example, an S2EFSM corresponding to a process whose sensitivity-list signals remain unchanged.

According to the previous considerations, a priority-based scheduling algorithm has been implemented. In this way, at each simulation cycle, the ATPG generates the test vector by deterministically fixing the PIs in order to traverse the transition of the highest-priority S2EFSM. Then, the PIs not involved in such a transition are deterministically assigned according to the transition of the next-priority S2EFSM, and so on, until all PIs are fixed.

To implement such a policy, the scheduler relies on a two-level-queue mechanism without feedback. The queue with the highest priority permanently includes the S2EFSMs extracted from clock-sensitive processes. Such S2EFSMs are simulated at each clock cycle. The second queue permanently includes the S2EFSMs extracted from asynchronous processes that are triggered by signals modified by the processes of the first queue. Within each queue, the S2EFSMs are sorted by assigning a dynamic priority computed as the sum of a constant value F and an ageing factor A.

The value F assigned to each S2EFSM is inversely proportional to the number of PIs included in its enabling functions: the larger the number of PIs involved in the enabling functions, the lower the value F assigned to the S2EFSM, and the later the S2EFSM is navigated. In this way, the DUT exploration is more uniformly distributed. In fact, if an S2EFSM which involves many PIs in the transition to be traversed is scheduled first, its decision on the PI values strongly constrains the behaviour of the remaining S2EFSMs. This may cause an incomplete exploration of the DUT. Analogously, the ageing factor has been introduced to avoid the behaviour of low-priority S2EFSMs always being constrained by decisions taken by high-priority S2EFSMs. Initially, the value A is 0 for all S2EFSMs. Then, each time an S2EFSM is forced to discard and substitute the transition to be traversed (because the values assigned to the PIs by a higher-priority S2EFSM do not satisfy its enabling function), the ageing factor of that S2EFSM is incremented by a constant quantity. On the contrary, the ageing factor is reset to 0 after the S2EFSM reaches the highest priority. Thus, sooner or later, every S2EFSM becomes the highest-priority one.
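A minimal sketch of the dynamic priority described above (a constant value F inversely proportional to the number of PIs in the enabling functions, plus an ageing factor A) could look as follows; the base value and ageing step are arbitrary illustrative constants:

```python
class ScheduledS2EFSM:
    """Scheduling state attached to one S2EFSM."""
    def __init__(self, efsm, n_pis_in_guards, base=100.0, ageing_step=1.0):
        self.efsm = efsm
        self.F = base / (1 + n_pis_in_guards)   # fewer PIs in the guards -> higher F
        self.A = 0.0                            # ageing factor, initially 0
        self.ageing_step = ageing_step

    def priority(self):
        return self.F + self.A

def schedule(queue):
    """Sort one queue (clocked or asynchronous) by decreasing dynamic priority."""
    ordered = sorted(queue, key=lambda s: s.priority(), reverse=True)
    if ordered:
        ordered[0].A = 0.0      # the S2EFSM that reached the highest priority is reset
    return ordered

def on_conflict(loser):
    """Called when an S2EFSM must discard its transition because of a PI conflict."""
    loser.A += loser.ageing_step   # aged S2EFSMs eventually reach the highest priority
```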

6 Experimental confirmation

The efficiency of FATE has been evaluated by using the benchmarks described in Table 1, where the columns report the number of primary inputs (PIs), primary outputs (POs), flip-flops (FFs) and gates (Gates). Column Trns. shows the number of transitions of the S2EFSM modelling the DUT and column GT (sec.) the time required to automatically generate the S2EFSM. Finally, column BC reports the number of bit coverage faults injected into the designs to check the fault coverage.

Table 1: Benchmark properties

DUT      PIs   POs   FFs   Gates    Trns.   GT (sec.)   BC
ex1      66    32    130   10 754   7       0.1         907
b00      66    64    99    1692     7       0.1         1182
b04      13    8     66    650      20      0.3         408
b10      13    6     17    264      35      0.3         216
b11m     9     6     31    715      20      0.2         725
b00z     66    64    99    11 874   9       0.2         1439
fr       34    32    100   1475     10      0.2         1041
dlx      29    31    25    232      28      0.3         1167
diffeq   161   96    289   33 510   4       0.9         3017
am2910   23    16    145   1598     543     3.1         5236
prawn    11    23    84    1996     191     1.5         3716

These benchmarks have been selected because they present different characteristics which allow us to analyse and confirm the effectiveness of FATE. b04 and b10 have been selected from the well-known ITC-99 benchmark suite [2]. b11m is a modified version of b11, included in the same suite, created by introducing a delay on some paths to make it harder to traverse. The original HDL descriptions of b04, b10 and b11m contain a large number of nested conditions on signals and registers of different sizes. ex1, b00, b00z and fr contain conditional statements where one branch has probability $1 - 1/2^{32}$ of being satisfied, while the other has probability $1/2^{32}$. Thus, they are very hard to test with a random ATPG. In particular, ex1, b00 and b00z are internal benchmarks, while fr is a real industrial case, i.e. a module of a face recognition system. diffeq is a data-dominated benchmark for solving differential equations. Finally, dlx is the controller of a RISC processor, am2910 is a microprogram address sequencer and prawn is an 8-bit microprocessor.

At functional level, the effectiveness of FATE has been evaluated by comparing it to a genetic algorithm-based high-level ATPG, as shown below. Then, in Section 6.2 we present the comparison of FATE with a commercial gate-level ATPG.

6.1 Genetic algorithm against EFSM-based ATPG

FATE has been compared with a genetic algorithm-based high-level ATPG [28], which outperforms a pure random-based ATPG but is not aware of the EFSM structure, and with a pseudo-deterministic ATPG [26], which uses only the random walk mode to traverse the DUT state space. Table 2 reports the transition coverage (TC%), the statement coverage (SC%), the fault coverage (FC%) and the test generation time (T (sec.)) obtained by using, respectively, the genetic algorithm-based ATPG (GA-ATPG), the pseudo-deterministic ATPG (PD-ATPG) and the proposed ATPG (FATE). It can be observed that FATE outperforms both the GA-ATPG and the PD-ATPG. The very low transition coverage achieved by the GA-ATPG for some benchmarks is due to the presence of a transition, outgoing from the initial state, whose enabling function has an infinitesimal probability of being satisfied by randomly fixing the values of the PIs. Such a problem is partially solved by the PD-ATPG, which is aware of the enabling functions of the S2EFSM, and definitely solved by the learning/backjumping-based ATPG, which reaches 100% transition and statement coverage for nearly all benchmarks. The achieved fault coverage is also considerably increased for all benchmarks.

Table 2: Comparison between a GA-based ATPG, a pseudo-deterministic ATPG and FATE

         GA-ATPG                          PD-ATPG                          FATE
DUT      TC%    SC%    FC%    T (sec.)    TC%    SC%    FC%     T (sec.)   TC%     SC%     FC%    T (sec.)
ex1      71.4   85.7   78.2   3.3         85.7   92.9   80.3    2.9        100.0   100.0   96.0   3.1
b00      28.6   26.7   1.1    3.0         85.7   87.0   48.7    2.6        100.0   100.0   52.5   2.9
b04      80.0   90.2   94.9   23.2        85.0   95.0   99.0    8.7        100.0   100.0   99.0   9.1
b10      37.1   66.7   87.0   5.7         40.0   69.7   93.0    5.7        100.0   100.0   94.0   6.8
b11m     90.0   80.0   37.0   5.7         95.0   82.2   39.0    5.1        100.0   100.0   54.6   5.1
b00z     22.2   31.0   13.7   4.1         66.6   75.9   44.3    5.0        100.0   100.0   51.8   5.4
fr       20.0   13.3   0.86   10.3        80.0   86.7   70.4    4.9        100.0   100.0   84.0   5.2
dlx      50.0   50.7   35.1   3.3         60.7   63.9   46.7    3.2        100.0   100.0   59.5   3.4
diffeq   100.0  100.0  95.4   50.0        100.0  100.0  98.6    59.9       100.0   100.0   98.7   61.7
am2910   95.1   87.3   84.1   99.1        98.2   90.3   88.7    88.1       100.0   100.0   93.1   87.0
prawn    87.2   70.6   63.9   144.2       91.3   73.5   168.9   131.8      96.0    77.1    72.8   183.3


Table 3: Comparison between FATE and a state-of-the-art gate-level ATPG

                   GATE-ATPG            FATE               FATE+GATE-ATPG
DUT      F#        FC%    T (sec.)      FC%    T (sec.)    FC%    T (sec.)
ex1      36 536    98.6   321           97.5   20          98.6   175
b00      7030      50.8   2051          41.5   6           50.8   1876
b04      4136      85.5   84            86.3   2           86.6   44
b10      1980      90.3   35            92.4   1           92.4   23
b11m     3736      91.0   305           87.9   2           93.0   81
b00z     72 080    54.7   21 002        54.7   146         55.4   5805
fr       9060      69.4   2278          59.0   17          69.4   1269
dlx      1694      89.3   4             84.3   1           89.3   4
diffeq   185 620   98.8   408           98.6   70          98.8   99
am2910   10 946    85.1   992           82.5   18          85.5   33
prawn    12 234    94.6   390           91.6   69          94.9   131.2

6.2 Gate-level against EFSM-based ATPG

Table 3 reports the comparison between FATE and a state-of-the-art commercial gate-level ATPG. The columns report the number of stuck-at faults (F#), the stuck-at fault coverage (FC%) and the execution time (T (sec.)) obtained by using the gate-level ATPG, respectively, in ATPG mode (GATE-ATPG), simulation mode (FATE) and incremental ATPG mode (FATE+GATE-ATPG). In ATPG mode, test vectors are automatically generated by exploiting the internal engines of the gate-level ATPG. In simulation mode, we force the gate-level ATPG to simulate only the test vectors generated by FATE to cover bit coverage faults. Finally, in incremental mode, the gate-level ATPG simulates the FATE test vectors and then tries to increase the fault coverage by exploiting its internal ATPG engines. The table shows that the stuck-at fault coverage achieved by reusing the test vectors generated by FATE is comparable with the one achieved by the gate-level ATPG in ATPG mode. Moreover, when the gate-level ATPG is used in incremental mode, it benefits greatly from simulating the test sequences generated by FATE. In particular, the final fault coverage is always greater than or equal to the one achieved in ATPG mode. Furthermore, comparing the execution times, we observe that the incremental mode clearly provides better performance than the ATPG mode: we save 42% of the time on average.

6.3 Scalability of the backjumping-based approach

The experiments reported in the previous paragraphs show that FATE provides very high-quality results for different kinds of benchmarks, i.e. microprocessors and controllers (dlx, am2910, prawn), academic benchmarks developed ad hoc to be hard to test (ex1, b00, b00z, b04, b10, b11m, diffeq) and an industrial design (fr). The execution times reported in Table 3 show that FATE is, on average, two orders of magnitude faster than a commercial gate-level ATPG that is used for testing million-gate designs. Relying on the S2EFSM paradigm, FATE can be applied to any DUT that can be represented by using HDLs like VHDL or SystemC. However, from a qualitative point of view, the efficiency of the backjumping-based engine implemented in FATE depends on the following aspects:


† the number of internal registers involved in the enabling functions and
† the sequential depth of the DUT.

According to the previous considerations, the larger the number of registers involved in the target enabling function (and the larger the distance between a register R and the PIs on which R depends), the larger the number of backjumping steps required for finding a test sequence that allows FATE to fire the target transition.

Finally, it is worth noting that the proposed ATPG is more suited to control-dominated DUTs than to data-dominated DUTs, since:

† the backjumping-based engine exploits the conditions involved in the enabling functions of the EFSM for traversing the DUT state space, and such conditions are generally more related to control signals than to data signals;
† the test generation procedure is transition oriented, thus when 100% transition coverage is achieved the ATPG stops, without considering whether data signals not involved in the enabling functions of the transitions have been exercised enough.

Future work will deal with the definition of a fault-oriented engine to better address data-dominated DUTs.

7 Concluding remarks

The paper presented FATE: a deterministic functional ATPG which relies on (1) the EFSM formalism to model the DUT; (2) learning, CLP and backjumping; (3) a scheduling algorithm to allow a fair exploration of the DUT, by giving each S2EFSM the possibility of deterministically fixing the PIs to reach the desired destination state. The adoption of the EFSM, combined with the learning/backjumping-based mechanism, allows FATE to accurately address HTT transitions, whose enabling functions depend on registers, without requiring EFSM stabilisation. In fact, the ATPG engine directly backjumps to the transition that updates the register involved in the HTT transition in order to appropriately set the register value. This is an effective advantage with respect to the use of backtracking in gate-level ATPGs, which blindly roll back to a decision point before proceeding in a different direction.

The quality of the generated test vectors has been compared, at high level, with a genetic algorithm-based ATPG and, at gate level, with a commercial ATPG. FATE outperforms the genetic ATPG, while it provides comparable


stuck-at fault coverage with respect to the commercial ATPG. However, the gate-level ATPG performance benefits greatly from reusing the test vectors generated by FATE. In particular, stuck-at fault coverage increases for half of the benchmarks and execution time decreases by 42% on average.

8 Acknowledgment

This work has been partially supported by the VERTIGO European project FP6-2005-IST-5-033709.

9 References

1 Kapur, R.: 'High level ATPG is important and is on its way!'. Proc. IEEE ITC, 1999, pp. 1115–1116
2 'High time for high-level test generation'. Panel at IEEE ITC, 1999
3 Ferrandi, F., Fummi, F., and Sciuto, D.: 'Implicit test generation for behavioral VHDL models'. Proc. IEEE ITC, 1998, pp. 436–441
4 Ghosh, I., and Fujita, M.: 'Automatic test pattern generation for functional register-transfer level circuits using assignment decision diagrams', IEEE Trans. Computer-Aided Design Integr. Circuits Syst., 2001, 20, (3), pp. 402–415
5 Corno, F., Cumani, G., Sonza Reorda, M., and Squillero, G.: 'Effective techniques for high-level ATPG'. Proc. IEEE ATS, 2001, pp. 225–230
6 Fin, A., and Fummi, F.: 'Laerte++: an object oriented high-level TPG for SystemC design'. Proc. ECSI FDL, 2003, pp. 658–667
7 Zhang, L., Ghosh, I., and Hsiao, M.: 'Efficient sequential ATPG for functional RTL circuits'. Proc. IEEE ITC, 2003, pp. 290–298
8 Lingappan, L., Ravi, S., and Jha, N.: 'Test generation for non-separable RTL controller-datapath circuits using a satisfiability-based approach'. Proc. IEEE ICCD, 2003, pp. 187–193
9 Xin, F., Ciesielski, M., and Harris, I.: 'Design validation of behavioral VHDL descriptions for arbitrary fault models'. Proc. IEEE ETS, 2005, pp. 156–161
10 Wang, C., Reddy, S., Pomeranz, I., Lin, X., and Rajski, J.: 'Conflict driven techniques for improving deterministic test pattern generation'. Proc. ACM/IEEE ICCAD, 2002, pp. 87–93
11 Li, B., Hsiao, M., and Sheng, S.: 'A novel SAT all-solutions solver for efficient preimage computation'. Proc. IEEE DATE, 2004, pp. 272–277
12 Mentor Graphics: 'Flextest', www.mentor.com
13 Synopsys: 'Tetramax', www.synopsys.com
14 Wu, Q., and Hsiao, M.: 'Efficient ATPG for design validation based on partitioned state exploration histories'. Proc. IEEE VTS, 2004, pp. 389–394
15 Iyer, M., Parthasarathy, G., and Cheng, K.-T.: 'Efficient conflict-based learning in an RTL circuit constraint solver'. Proc. IEEE DATE, 2005, pp. 666–671
16 Padmanabhuni, S.: 'Extended analysis of intelligent backtracking algorithms for the maximal constraint satisfaction problem'. Proc. IEEE CCECE, 1999, pp. 1710–1715
17 Cheng, K., and Krishnakumar, A.: 'Automatic generation of functional vectors using the extended finite state machine model', ACM Trans. Design Automat. Electron. Syst., 1996, 1, (1), pp. 57–79
18 Myers, G.: 'The art of software testing' (Wiley-Interscience, New York, 1979)
19 Di Guglielmo, G., Fummi, F., Marconcini, C., and Pravadelli, G.: 'EFSM manipulation to increase high-level ATPG efficiency'. Proc. IEEE ISQED, 2006
20 Fummi, F., Marconcini, C., and Pravadelli, G.: 'Logic-level mapping of high-level faults', INTEGRATION, The VLSI Journal, 2004, 38, pp. 467–190
21 Gajski, D., Zhu, J., and Domer, R.: 'Essential issues in codesign'. University of California, Irvine, Technical report ICS-97-26, 1997
22 Hierons, R., Kim, T.-H., and Ural, H.: 'Expanding an extended finite state machine to aid testability'. Proc. IEEE COMPSAC, 2002, pp. 334–339
23 Duale, A., and Uyar, U.: 'A method enabling feasible conformance test sequence generation for EFSM models', IEEE Trans. Computers, 2004, 53, (5), pp. 614–627
24 Russell, S., and Norvig, P.: 'Artificial Intelligence: a modern approach' (Prentice-Hall, 2002)
25 Lin, X., Pomeranz, I., and Reddy, S.M.: 'Techniques for improving the efficiency of sequential circuit test generation'. Proc. ACM/IEEE ICCAD, 1999, pp. 147–151
26 Di Guglielmo, G., Fummi, F., Marconcini, C., and Pravadelli, G.: 'Improving gate-level ATPG by traversing concurrent EFSMs'. Proc. IEEE VTS, 2006
27 Dijkstra, E.: 'A note on two problems in connexion with graphs', Numerische Mathematik, 1959, 1, pp. 269–271
28 Fin, A., and Fummi, F.: 'Genetic algorithms: the philosopher's stone or an effective solution for high-level TPG?'. Proc. IEEE HLDVT, 2003, pp. 163–168
