A strategy to improve component testability without source code

Camila Ribeiro Rocha, Eliane Martins

Institute of Computing (IC)

Campinas State University (UNICAMP) P.O. Box 6176 – Campinas – SP ZC: 13083-970

{camila.rocha, eliane}@ic.unicamp.br

Abstract. A software component must be tested every time it is reused, to guarantee the quality of both the component itself and the system in which it is to be integrated. To reduce testing costs, we propose a model to build highly testable components by embedding testing and monitoring mechanisms inside them. The approach is useful to component developers, who can use these built-in test capabilities in the testing phase, as well as to component users, who can verify whether the component fulfills its contract. Our approach proposes the insertion of built-in mechanisms directly into intermediate code, allowing its use even in COTS components.

Keywords: testability – design for testability – OO components – UML

1 Introduction

Component-based software development is a current trend in the construction of new systems. The main motivation is the possibility to significantly reduce development costs and time yet satisfying quality requirements.

A software component can be defined as a “unit of composition with contractually specified interfaces and explicit dependencies. A software component can be deployed independently and is subject to composition by third parties” [Sz98]. A software component may be: in-house software designed for one project and then used in others; software created to be part of an in-house repository; third party software which might be either open-source software or commercial-off-the-shelf (COTS) components.

For component-based development to be successful, verification and validation activities should be carried out during component development and during the development of the systems in which components are used [Ad99]. In particular, components should be tested many times: individually and also each time they are integrated into a new system [We98].

Testing components presents, however, additional difficulties with respect to testing in-house software. These difficulties are mainly due to lack of information [BG03]: component providers (who develop the component) do not know in advance all the different contexts in which the component will be used; component users (who develop a system using the component), on the other hand, typically have access only to the provided interface. This is usually the case with COTS components, whose source code or other information that could help testing is not available. Even when source code is available, as with open-source components, users might have neither the required documentation nor the expertise that would help them test and debug.

Component providers can facilitate matters to the users by enhancing component testability. Design for testability (DFT) techniques have long been used by integrated circuit (IC) engineers to reduce test cost and effort yet achieving the required quality level. The built-in test (BIT) approach, which is a DFT technique, introduces standard test circuits and pins to allow a card or an IC to be put in test mode, to transmit test inputs and to capture the outputs. An extension to that approach consists of integrating test inputs generation mechanisms inside the IC, which renders the use of an external tester unnecessary. This concept is known as built-in self-test, or BIST, for short. Similar approaches have been proposed to enhance software testability. A discussion about component testing and the difficulties to accomplish this activity, as well as the approaches used to improve software components testability, are the subject of Section 2.

This text presents an approach to improve component testability by making components testable. A testable component contains, in addition to its implementation, information that supports testing activities. Our concern in this work is OO components, defined as sets of tightly coupled classes. This work extends the one in [MTY01], where a component is considered a single class. It is also an extension of the work in [UM01], in which built-in capabilities are inserted into intermediate code, thus allowing their use when no source code is available. A testable component, in our approach, contains a test and a tracking interface that allow users to access test facilities embedded into the component. These facilities include self-checking capabilities, state and error tracking functions, as well as a test specification from which test cases can be automatically derived. Section 3 shows design aspects and Section 4 presents the architecture of a testable component.

Our aim is unit testing: to determine whether a component conforms to its usage contract. This is the first step users should perform in order to guarantee that the component fulfills its contract in the new context of execution. Possible misbehavior can be detected earlier and countermeasures can be taken by users, such as creating a wrapper or a protector to prevent the identified misbehavior from jeopardizing the application's quality.

Section 5 contains implementation issues, showing the steps performed in building a testable component as well as implementation examples. Finally, Section 6 concludes this paper, presenting the current status and future work.

2 Component testing

Reusable components are increasingly being used nowadays, especially in critical applications. Their use is expected to shorten development time and ease maintenance, while guaranteeing a required quality level. To achieve good quality, systems must be built upon high-quality components that interoperate safely [We98]. Since testing is the most commonly used quality assurance technique, components should be extensively tested, both individually and once integrated.

Component-based systems, as well as any other system, undergo at least three test phases: unit (to check one individual component), integration (to check whether the components fit together) and system testing (to check whether the system fulfills its requirements). Furthermore, regression testing is needed each time there is a modification.

2.1. Difficulties in testing components

There are several technical difficulties that remain to be solved in component-based development. These difficulties can be attributed to two main causes [BG03]: heterogeneity and lack of information. Heterogeneity stems from various sources: (i) the same specification can have different implementations of the same component; (ii) different components can be implemented in different programming languages and for different platforms (database technology, operating system, runtime environment, hardware); (iii) different components can be provided by different suppliers, possibly competing with each other.

A consequence of this heterogeneity for testing is that it is hard to build test suites or test tools that can operate on a large variety of programming languages and platforms. Test drivers and stubs cannot be built in an ad-hoc, project-specific way, as they could not effectively cope with different components and their customizable functions [Bo01].

Lack of information, on the other hand, is due to the limited exchange of information among component providers and component users [BG03]. Component providers do not know in advance the different contexts in which the component will be used, so they need to test it in a context-independent way. Nevertheless, explicit or implicit assumptions are made by the providers to circumvent this lack of knowledge about usage contexts. These assumptions, especially the implicit ones, might lead the users to the architectural mismatch problem [GAO95]. Mismatches can occur due to incompatibility of the data exchanged among components or due to problems in the dynamic interaction among them. Consequently, users must retest a component each time it is used in a new context, as it may function well in some contexts but not in others. Integration testing by component users is absolutely necessary to reveal dynamic interaction conflicts.

Component users, on the other hand, generally do not have the information needed (input domain, specification, source code) to adequately test a component. Lack of source code makes it difficult to statically analyze the code to obtain data-flow or control-flow dependencies, or to employ white-box coverage criteria for testing. After testing, debugging is also complicated without source code information. Even when it is available, users may have neither the time nor the expertise necessary to understand it well enough to perform testing and debugging tasks.

2.2. Component’s testability

Given the need for extensive testing, building components with good testability is very important: test tasks are simplified and test costs are reduced without jeopardizing system quality.

As pointed out by different authors [Bi94, Pr97, Ga00], testable software possesses a set of characteristics such as controllability (how easy is it to control the software's inputs, outputs and behavior?), observability (how easy is it to observe the software during testing?), traceability (how easy is it to track the status of component attributes and component behavior?), and understandability (how much information is provided and how well is it presented?), among others.

In the context of component-based development there are various ways to enhance testability. Two alternatives can be considered [Ga02]: creating new test suite technologies and embedding test facilities inside components. The first category consists of enabling component users to build test suites for any component and perform tests. Such techniques are called "plug-in-and-test". Examples of such approaches can be found in [Vo97] and [MK96].

The second category refers to approaches that augment a component with test facilities such as test suites (or capabilities to generate them), runtime checkers and monitoring capabilities. These capabilities are known as built-in tests (BIT), and a standard interface is needed for the user to access such extra information. In this text we focus on these approaches, considering a testable component as one that contains built-in testing capabilities.

2.3. Building testable components

Design for testability techniques and the self-testing concept have been used in hardware for a long time; however, only recently has this subject gained more attention from the software community.

Some authors have proposed the use of these hardware approaches to improve software testability. A pioneering work is the one presented in [Ho89], which augments a module's interface with testing features (access programs), corresponding to the test access ports (TAP) used in BIT approaches in hardware. Other features are the use of assertions, automated test driver generation from a test specification, and the insertion of testing features through compiler directives. Binder adapted this approach to the OO context, proposing the construction of self-testing classes [Bi94]. A self-testing class is composed of the class (or component) under test augmented with BIT capabilities and a test specification used for test generation and as a test oracle.

Since then, various approaches have been proposed to construct testable components. For the sake of conciseness we present here only those that are closer to our work and relevant in the paper's context. A more detailed overview of several approaches can be found in [BG03] and [Bo01].

Voas et al. present a tool, ASSERT++ [VSS98], to help users place assertions into programs to improve OO software testability. The tool supports assertion insertion and suggests points where the assertions might be inserted, according to the testability scores obtained. Their work is complementary to ours, in that we are not concerned with the best place to introduce assertions. Moreover, they are only concerned with improving controllability and observability; test case generation and other built-in facilities, such as reporter methods, were not considered.

Le Traon et al. present self-testable OO components, which embed their specification (documentation, method signatures and invariant properties) and test cases [TDJ97]. Test cases are generated manually and embedded into the component. Assertions are used as an oracle, but manually generated oracles can also be used. An interesting point is that a test quality estimate can be associated with each self-test, either to guide the choice of a component or to help reach a test adequacy criterion when generating test cases. Similar to their approach, we also use assertions to build self-checking capabilities inside a component.

The component test bench approach (CTB) [Bu00] also enhances a component with a test specification. This specification contains a description of the component implementation (programming language, platform), its implemented interfaces and the test sets appropriate for each interface. A test specification is composed of test operations, which define the necessary steps to test a specific method. It is described in XML and stored in descriptor files, which are updated with test results as CTB runs so they may also be used for regression testing. Test operations can be executed either in a specific runtime environment, IRTS (Instrumented Runtime System), provided with the CTB tool, or in a standard runtime environment. IRTS has symbolic execution capabilities to help users parameterize test cases or to obtain conditions not yet exercised. A test pattern verifier module is also available to allow the generation of test drivers from the test specification. This module, written in Java, can be packed with the component, together with the test specification.

In the Component+ approach [EH03], the component under test is enhanced with built-in tests and test interfaces to access them, becoming a BIT component. The built-in tests are restricted to error detection mechanisms and reporters, which give access to the internals of the component. The test interfaces are connected to Tester components, which contain the test cases to exercise the built-in tests and evaluate the information provided by the test interfaces. There are two kinds of Testers: one to test the component's functionality (contract testing) and one to test system quality issues (quality-of-service testing). There are also Handler components for exception handling purposes. The test specification is not included, and a testable component can be produced from COTS as well as in-house components.

In the three previous approaches, test cases generated during development are delivered with the component, while our approach embeds the component specification from which test cases are generated. An advantage of these approaches is that users' test effort is reduced. However, as pointed out in Section 2.1, component providers' test suites might not be adequate for users.

The Self-Testing COTS Component (STECC) strategy aims both at reducing users' testing effort and at allowing them to create tests that are more suitable to their requirements [BG03a]. Instead of test suites, components are augmented with analysis and testing functions. Component providers do not need to disclose detailed information to users: the test tool may access the necessary information from inside the component. Component users can use the embedded analysis and test functions to test a component according to the adequacy criteria that are most interesting to them. Test drivers are automatically generated by embedded test tools. Test cases do not necessarily include expected results: this depends on whether the component specification is available in a form that can be automatically processed.

Gao et al. [Ga02] propose the testable bean concept, a component architecture with well-defined built-in test interfaces to enhance the component testability and support external interactions for software testing. There are also built-in tracking interfaces which provide tracking mechanisms for monitoring various component behaviors in a systematic manner. Testing and tracking code are separated from normal functional parts of the component as in our work, but there is no automatic generation of test cases.

Our work extends previous ones developed by our group. In Martins et al. [MTY01] we presented an approach for building self-testable classes. This work was extended to consider components that can contain more than one class [UM01]. In these works the main focus was test case generation from a specification embedded in the component. Here our main concern is to define what resources a testable component must provide to support testing, resources that can be easily used by component providers and component users with minimum programming overhead. We still consider specification-based test case generation, but the embedded resources should also be usable with other types of testing.

3 Design issues

Attaching additional information to components is not new. Besides the approaches to enhance component testability presented in Section 2, we can also mention approaches that are not specifically for testing purposes. For example, a JavaBean component can be associated with a BeanInfo object, which contains information such as the component name and textual descriptions of its functions and properties [Su03]. This information is aimed at helping to customize the component within an application.

In our approach, the additional information provides resources to support testing activities. The design of the architecture aims at achieving the following properties:

- ease of use: a testable component can be provided and used with minimum programming effort;
- separation of concerns: the component under test (CUT) is independent of the instrumentation features necessary for testing and tracking, and the instrumentation mechanisms are independent of each other;
- extensibility: new capabilities can be added without further difficulties;
- portability: among various applications and platforms;
- reduced intrusiveness: the instrumentation inserted for testing purposes does not impact the component's storage and behavior too much;
- source code independence: the instrumentation code can be added or removed without recompiling the component, which also makes the approach applicable to COTS components.

A testable component is augmented with built-in testing and tracking capabilities, which allow control of the pre-test state and observation of the post-test state. Test and tracking interfaces are implemented to give access to the embedded test facilities.

There are different approaches to develop built-in testing and tracking capabilities, according to Gao et al. [GZS00]. One is the framework-based approach, where a trace library is used to add tracking and testing code into components. Besides the high programming overhead, this approach requires that the component source code be available. Another approach is based on automatic code insertion, in which the instrumentation is introduced into the component source code. This can be accomplished by an automatic tool (e.g., by extending a parser, as in OpenC++ [Ch97]). Although this approach reduces the programming overhead, it still assumes component source code availability. The third is the automatic component wrapping approach, where a component is wrapped with testing and tracking code as a black box. This approach has low programming overhead and does not require the component source code. However, it is not suitable to support error and state tracking (see Section 4.1 for details), as both highly depend on detailed information about a component's internal logic as well as on the business rules of the application.

We used the second approach, based on code insertion, since error and state tracking is an important feature for us. To avoid source code dependency, built-in testing and tracking code is inserted by manipulating the intermediate code (e.g., bytecode for the Java language).

BIT capabilities also include runtime checking mechanisms, which are responsible for: (i) checking conditions on the inputs provided by the component's clients (preconditions); (ii) checking conditions on the outputs delivered by the component to its clients (postconditions); (iii) checking conditions that must be true at all times (invariants); and (iv) producing an action when such conditions are violated. In other words, they implement the design-by-contract approach developed in the OO domain [Me92]. A contract is the specification of rights and obligations between a component and its clients. The process by which this contract is obtained from requirements is not our concern in this text.

A component specification is also part of our approach. It comprises the contract specification and a behavior model. The contract specification can use the Object Constraint Language (OCL), that is part of the UML standard. The behavior model is represented by an Activity Diagram, which is also part of UML. This specification is not part of built-in capabilities; it can be deployed with the component or can be made available at the component provider site. The contract specification can be used to generate the executable assertions that comprise the built-in testing capabilities mentioned above. The behavior model, on the other hand, can serve for test case generation purposes. These aspects are not considered in this text, which addresses only the testable component architecture.
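As an illustration of the form such a contract specification might take, a hypothetical OCL sketch for the Stack component used later in this paper could be (the attribute name index and the bounds simply mirror the example of Section 5):

    context Stack
    inv: self.index >= 0 and self.index <= 9        -- capacity bounds

    context Stack::pop(): Object
    pre:  self.index >= 1                           -- the stack must not be empty
    post: self.index = self.index@pre - 1           -- exactly one element was removed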

4 Testable component architecture

A testable component, as stated in Section 3, offers two additional interfaces: one for tracking and the other for testing purposes. Its architecture is presented in Figure 1. The testable component is composed of three subcomponents: the component under test (CUT), the Tracker component, which implements the tracking interface (ITracking), and the Tester component, which implements the testing interface (ITesting). In this way we achieve the separation of concerns and extensibility properties, since the testing and tracking codes are separated from each other and also from the CUT's code.

The ITesting and ITracking interfaces are public and intended to be used by a test driver1 to set up the CUT for testing and to observe the post-test state.

The Tester and Tracker components will be detailed in the following subsections.

[Figure 1 shows the testable component architecture: the Testable Component packages the Component (CUT), with its original interfaces InterfaceA and InterfaceB, together with the Tester and Tracker subcomponents; Tester and Tracker expose the public ITesting and ITracking interfaces and provide the internal ILogAssert and ILogTrace interfaces used by the CUT.]

Figure 1: Testable component architecture

4.1. Tracker Component

The Tracker component provides methods to include tracking mechanisms into the CUT. The kinds of traces available are [GZS00]: operational trace, which records the interactions involving the component's public operations; error trace, which records exceptions generated by the component; and state trace, which traces the state of public attributes. Other kinds of traces, such as event traces for user interfaces, performance or reliability traces, were not considered in this preliminary architecture, but can be incorporated later, according to the interest of component providers or users. The architecture is extensible to ease such changes.

The Tracker component provides two interfaces: ITracking, used by the testable component clients, and ILogTrace, used by the CUT.

The ITracking interface provides services to include built-in tracking code into the CUT. The following services are available:

- operationalTrace(method: String, type: char): includes built-in code into the CUT for operational tracking of method. The parameter type specifies which calls will be registered: the calls to the method, the calls from the method, or both.

- stateTrace(attribute: String[]): includes built-in code for state tracking of public attributes whose names are passed in attribute.

- errorTrace(exception: String): includes built-in code for error tracking of exception in the public methods where it is raised.

- setLog(file: File): sets the log file.

1 A test driver is a program that automates test case execution, providing test inputs (data), control and execution monitoring.
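Rendered in Java, the ITracking services listed above might be declared as follows (a minimal transcription of the signatures given in the text; the encoding of the type codes is left open, as it is not specified here):

    import java.io.File;

    public interface ITracking {
        // type selects which calls are registered: calls to the method,
        // calls from the method, or both
        void operationalTrace(String method, char type);

        // names of the public attributes whose states are to be tracked
        void stateTrace(String[] attribute);

        // name of the exception whose occurrences are to be tracked
        void errorTrace(String exception);

        // log file where the traces are registered
        void setLog(File file);
    }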

The ILogTrace interface is only visible inside the testable component. It is used by the CUT to register the traces into a log file. It is worth noting that only public elements of a component are traceable, so users have no access to undisclosed information.

4.2. Tester Component

The Tester component provides two interfaces: ITesting, which offers services to testable component clients, and ILogAssert, which is used by the CUT.

The ITesting interface is responsible for embedding built-in testing capabilities into the CUT. It also offers specification retrieval services to the users.

The built-in testing capabilities comprise the runtime checking functions mentioned in Section 3. They are included into the CUT’s bytecode by the following operations: insertPrecondition(method: String, continue: boolean), insertPostcondition(method: String, continue: boolean) and insertInvariant(method: String, continue: boolean).
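In Java, these operations might be declared as below (a sketch only: the continue parameter is renamed cont because continue is a reserved word in Java, and the specification retrieval methods described later in this section are included for completeness):

    import java.io.File;

    public interface ITesting {
        // cont indicates whether CUT execution continues after a violation
        void insertPrecondition(String method, boolean cont);
        void insertPostcondition(String method, boolean cont);
        void insertInvariant(String method, boolean cont);

        // log file where assertion violations are notified
        void setLog(File file);

        // specification retrieval: behavior model and contract files
        File getModel();
        File getAssertions();
    }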

When an assertion is violated, an action must be produced which has two responsibilities [Bi00, c.17.4]: notification and continuation. The continuation part is controlled by the continue parameter, which indicates whether the CUT execution should continue after an assertion violation or not. This is why the executable assertions could not be implemented with the Java assert facility: assert throws an Error2 if the assertion fails, stopping the application and thus giving no chance to continue execution.

The notification consists of writing into a log file, indicated by the ITesting.setLog(file: File) operation. Notification services are provided to the CUT by the ILogAssert interface, which is only visible inside the testable component.
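A minimal sketch of how a checking method can combine both responsibilities (assumed names: the ILogAssert operation follows Figure 2 but its signature is an assumption, and throwing a RuntimeException is just one possible continuation mechanism; the real checking code is derived from the contract):

    // minimal stand-in for the internal notification interface (see Figure 2)
    interface ILogAssert {
        void writePreViolation(String method);   // assumed signature
    }

    public class PopPreconditionCheck {
        private final ILogAssert log;
        private final boolean cont;   // plays the role of the 'continue' parameter

        public PopPreconditionCheck(ILogAssert log, boolean cont) {
            this.log = log;
            this.cont = cont;
        }

        public void checkPrecondition(int index) {
            if (index < 1) {                          // precondition: stack not empty
                log.writePreViolation("Stack.pop");   // notification into the log
                if (!cont)                            // continuation decision
                    throw new RuntimeException("precondition violated in Stack.pop");
            }
        }
    }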

It is worth noting that the component provider must develop the runtime checking code that will be inserted into the component. This code may be complex, since to express all relationships of interest it must be able to manipulate sets, sequences, bags, relations and universal quantifiers ("for all" and "there exists"). Therefore, this code must be designed, implemented and tested like any other piece of code, especially because many programming languages, such as Java and C++, have limited support for implementing executable assertions. Nevertheless, there are some toolkits that support assertions, such as JASS [Co01] for the Java language.

The component specification is retrieved through the methods getModel() and getAssertions(). These methods return files that contain the component's behavior model and the component's contract (cf. Section 3). These can be used to automatically generate test cases and built-in test code. Tools for that purpose are being developed.

2 An Error is a Java class (a Throwable subclass) that indicates serious problems that the application should not try to catch [Su03].

5 Implementation issues

The testable component implementation comprises the creation of the built-in code and the implementation of the Tester and Tracker components. During the built-in code creation, the component provider builds the testing and tracking libraries. The testing library contains the self-checking code, implementing the CUT's contract, while the tracking library contains the tracking code, implementing the monitoring capabilities. To achieve source code independence, the built-in code is inserted into the CUT's bytecode by the Tester and Tracker components. The way this code is inserted influences the Tracker and Tester component implementations.

Our first studies used Java as the implementation language, both for the CUT and for the testing and tracking facilities. CUT instrumentation was achieved in two ways: using a bytecode manipulation library, BCEL (Byte Code Engineering Library) [Da01], and using aspect-oriented programming [Ki01] with AspectJ [La03], which supports load-time weaving.

To illustrate our approach we use the Stack component as an example. It has two methods: push, to include an Object in the stack, and pop, to remove the Object at the top of the stack. The Stack component contains only one class and is used for illustration purposes; our approach, of course, is intended to support components with more than one class. To differentiate it from the original component, we call the resulting testable component TStack.
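A minimal version of the Stack class, consistent with the code fragments shown in the remainder of this section (which refer to an index attribute and a data array; the fixed capacity of ten is an assumption), might be:

    public class Stack {
        private Object[] data = new Object[10];  // illustrative fixed capacity
        private int index = 0;                   // number of stacked elements

        public void push(Object obj) {
            data[index] = obj;
            index++;
        }

        public Object pop() {
            index--;
            return data[index];  // throws ArrayIndexOutOfBoundsException when empty
        }
    }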

5.1. BCEL API

The BCEL API is a toolkit for static analysis and dynamic creation or transformation of Java class files [Da01]. The API offers services to retrieve method instructions and allows their manipulation. The main problem in using BCEL is implementation complexity, because the developer has to understand the structure of Java bytecode. It was used in the first implementation of the testable component architecture because, at that time, it was one of the few available toolkits that allowed the necessary instrumentation to be performed by bytecode transformation, avoiding source code dependencies.
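For illustration, the kind of transformation performed with BCEL might look as follows. This is a simplified sketch against the BCEL 5.x API; the static hook Checks.popPrecondition() is hypothetical and stands in for the IAStack_pop_check call, which in the real implementation also loads the checked attributes:

    import org.apache.bcel.Constants;
    import org.apache.bcel.Repository;
    import org.apache.bcel.classfile.JavaClass;
    import org.apache.bcel.classfile.Method;
    import org.apache.bcel.generic.*;

    public class PreconditionInserter {
        public static void main(String[] args) throws Exception {
            JavaClass clazz = Repository.lookupClass("Stack");
            ClassGen cg = new ClassGen(clazz);
            ConstantPoolGen cp = cg.getConstantPool();
            InstructionFactory factory = new InstructionFactory(cg);

            for (Method m : cg.getMethods()) {
                if (!m.getName().equals("pop")) continue;
                MethodGen mg = new MethodGen(m, cg.getClassName(), cp);
                InstructionList il = mg.getInstructionList();
                // prepend a call to the (hypothetical) checking hook
                il.insert(factory.createInvoke("Checks", "popPrecondition",
                        Type.VOID, Type.NO_ARGS, Constants.INVOKESTATIC));
                mg.setMaxStack();
                cg.replaceMethod(m, mg.getMethod());
            }
            cg.getJavaClass().dump("Stack.class");  // write the instrumented class file
        }
    }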

Components Design

The Tracker component is simpler, so only the Tester component will be described in detail. In addition to the ITesting and ILogAssert interfaces presented in Section 4.2, two new interfaces were necessary. The Tester architecture for the TStack component is shown in Figure 2.

[Figure 2 shows the Tester architecture for the TStack component: the Testing class realizes the ITesting interface (insertPrecondition(), insertPostcondition(), insertInvariant(), getModel(), setLog()); the Contract library contains the AStack class (checkInvariant(), insertInvariant()) and the AStack_pop class (checkPrecondition(), checkPostcondition(), insertPrecondition(), insertPostcondition()), accessed through the IAStack_pop_insert and IAStack_pop_check interfaces; the LogAssert class realizes the ILogAssert interface (writePreViolation(), writePostViolation(), writeInvariantViolation(), setLog()).]

Figure 2: TStack Tester component architecture for the BCEL implementation

The built-in code is inserted through the ITesting interface, implemented by the Testing class. This class delegates the insertion to the Contract library, which is composed of A<method name> classes. There is one A<method name> class for each public method of the CUT interface (only AStack_pop is shown; AStack_push was omitted for the sake of brevity), and each one offers two interfaces: IA<method name>_insert and IA<method name>_check (in our example, IAStack_pop_insert and IAStack_pop_check), the former visible only inside the Tester and the latter visible inside the whole testable component. IAStack_pop_insert is used by the Testing class to insert calls to the assertion checking methods at specific points of the code (beginning and end of methods, before returning from a method call, among others) using the BCEL API. The second one, the IA<method name>_check interface, gives the CUT access to the assertion checking code at runtime.

The A<method name> classes inherit from the A<interface name> class, which is responsible for inserting a call for invariant checking during instrumentation and for checking the invariant at runtime. Since the invariant is the same for all methods, one class is enough to represent it. In our example, both AStack_pop and AStack_push (omitted) inherit from the AStack class.

Finally, the LogAssert class implements the ILogAssert interface (cf. Section 4.2), which is used by the A<method name> objects.

Instrumentation Code

Figure 3 presents the code included in the Stack.pop method for testing purposes (in Java syntax). The precondition code is simply a method call to IAStack_pop_check.checkPrecondition() at the beginning of the method code (Figure 3 (a), lines 1-2). For the postcondition code, it was necessary to create the variable size (line 3) to store the stack length before the method execution, in order to compare it with the final length after execution. This check is made by the method IAStack_pop_check.checkPostcondition() (lines 5-6), based on the parameter values. Each checkPrecondition() and checkPostcondition() method will have different parameters, according to the contract specification.

Postcondition checking occurs right before the end of the method, but only if the method returns normally. The invariant is checked at the start and at the end of the method execution, even if an exception is raised, in order to check the component's consistency. That is why its implementation needs a try-finally block (Figure 3 (b)).

    1  Tester.IAStack_pop_check tester = new Tester.AStack_pop(); //precondition
    2  tester.checkPrecondition(index);                           //precondition
    3  int size = index;                                          //postcondition
    4  ...                                                        //pop method code
    5  Tester.IAStack_pop_check tester = new Tester.AStack_pop(); //postcondition
    6  tester.checkPostcondition(index, size);                    //postcondition
    7  return data;                                               //pop method code

(a) Precondition and postcondition code

    1  Tester.IAStack_pop_check tester = new Tester.AStack_pop();
    2  tester.checkInvariant(index);
    3  try {
    4      ...                                                    //pop method code
    5  } finally {
    6      tester.checkInvariant(index);
    7  }

(b) Invariant code

Figure 3: Code necessary for assertion checking included in the pop method.

Test Driver Example

Figure 4 presents a possible usage scenario for the proposed test facilities. The scenario describes the interactions between a test driver, a client of the testable component TStack, and the component itself. First, there is a setup phase, in which the Driver instruments the Stack component by calling ITesting.insertPostcondition(). The Testing object receives the message and uses the IAStack_pop_insert interface to include a call to postcondition verification into Stack.pop().

[Figure 4 is a sequence diagram with participants Driver, Tester.ITesting, Tester.Testing, Tester.IAStack_pop_insert, Stack, Tester.IAStack_pop_check, Tester.AStack_pop and Tester.ILogAssert. The Driver's insertPostcondition() call is forwarded by Testing through IAStack_pop_insert, which includes the postcondition in the CUT's Stack.pop bytecode. The Driver then calls pop(); at the end of its execution checkPostcondition() is invoked, which calls writePostViolation() on ILogAssert.]

Figure 4: Example of use of the TStack component in the BCEL implementation.

Afterwards, the driver calls the Stack.pop method. The test case purpose is to pop an element from the stack when it is empty. At the end of the method execution, the postcondition is checked by calling IAStack_pop_check.checkPostcondition(). Since there is an assertion violation, a call to ILogAssert.writePostViolation() is made to write a notification into the log file.

5.2. Aspect-Oriented Programming

Our approach was also implemented using aspect-oriented programming (AOP) [Ki01]. AOP enables the modularization of crosscutting concerns, i.e., concerns that are common to many modules of an application, such as assertion checking and tracking capabilities.

According to Laddad [La03], AOP is a "new methodology that provides separation of crosscutting concerns by introducing a new unit of modularization – an aspect – that crosscuts other modules". Crosscutting concerns are implemented as aspects, and the final system composition is performed by the aspect weaver, which combines the core component with the crosscutting modules (weaving).

The AOP tool used was AspectJ, an aspect-oriented extension to the Java programming language [La03]. AspectJ introduces the join point concept: an identifiable point in the execution flow. The new constructs AspectJ adds to Java are: pointcuts, to select join points and collect their contexts; advices, which contain the code executed when a join point is reached; inter-type declarations, which allow the modification of a program's static structure; and aspects, structures similar to Java classes which encapsulate the weaving rules built from the other constructs.
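As a small illustration of the first two constructs (a hypothetical aspect; any real tracing in our architecture goes through the log facilities described in Section 4 rather than standard output):

    // a minimal AspectJ aspect: one named pointcut and one advice
    public aspect MiniTrace {
        // join points: calls to the public Stack.pop() method
        pointcut popCall(): call(public Object Stack.pop());

        // runs before each selected join point
        before(): popCall() {
            System.out.println("about to execute " + thisJoinPoint.getSignature());
        }
    }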

For more information about AOP and AspectJ, see [La03].

Components Design

With AOP, both the Tester and the Tracker become simpler, and both share the same internal structure.

The Tester structure becomes as shown in Figure 5. The Testing and LogAssert classes keep the same operations as shown in Figure 2. The testing library is no longer composed of the A<method name> classes of the BCEL implementation; it contains a set of aspects instead. There is one aspect for each public method of the CUT, and each aspect contains advices that implement the component's usage contract.

The testable component setup is no longer performed through the IA<method name>_insert interfaces, as in the previous implementation, but directly by the Testing class. A single auxiliary class is needed – the Setup class, which is responsible for keeping track of the aspects to be woven with the component code at load time. This class is created by the Testing class when the ITesting interface is accessed to insert the pre- and postconditions for the selected methods. It is then queried by the aspects during the weaving phase at load time.
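A possible shape for this class, sketched under the assumption that assertions are keyed by method name plus an assertion index, as in the isChecked(method, assertion) query used in Figure 7:

    import java.util.HashSet;
    import java.util.Set;

    public class Setup {
        private static final Setup INSTANCE = new Setup();  // assumed shared access point
        private final Set<String> enabled = new HashSet<String>();

        public static Setup getInstance() { return INSTANCE; }

        // called by Testing when ITesting.insertPrecondition() etc. are invoked
        public void enable(String method, int assertion) {
            enabled.add(method + "#" + assertion);
        }

        // queried by the advices' if() guards during load-time weaving
        public boolean isChecked(String method, int assertion) {
            return enabled.contains(method + "#" + assertion);
        }
    }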

[Figure 5 shows the Tester component in the AOP implementation: the Testing class realizes the ITesting interface and the LogAssert class realizes ILogAssert; an auxiliary Setup class and the aspect library, containing PopAspect and PushAspect, complete the component.]

Figure 5: Tester component architecture in the implementation using AOP

The tracking library is also composed of aspects, which are divided into the following categories: OperationalAspect, which contains the advices for operational tracking (one OperationalAspect for each CUT interface); StateAspect, related to public attribute tracking; and ErrorAspect, related to error tracking. In these aspect categories, the join points correspond to accesses to methods, attributes and exception handlers.
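A hypothetical sketch of such an operational tracking aspect (the real advices write through the ILogTrace interface rather than to standard output):

    public aspect OperationalAspect {
        // join points: calls to any public operation of the Stack interface
        pointcut publicOps(): call(public * Stack.*(..));

        // operational trace advice
        before(): publicOps() {
            System.out.println("operational trace: " + thisJoinPoint.getSignature());
        }
    }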

Instrumentation Code

To instrument the Stack component using AOP, four aspects were built: PopAspect and PushAspect, in the Tester aspect library, and OperationalAspect and ErrorAspect, in the Tracker library. The code to be included in the CUT is written as advices, and pointcuts determine where the advices must be woven.

In AspectJ, pointcuts pick out join points, which can be one or more of the following: method or constructor invocations and executions, exception handling, field assignments and accesses, among others [La03]. To define a pointcut, one should give it a name. For example, Figure 6 (lines 1-2) shows the definition of the popMethod pointcut, which intercepts the calls (indicated by the keyword call) to the public method Stack.pop(). The advice to be executed at this pointcut is shown in Figure 6 (lines 4-7). A piece of advice brings together a pointcut and the code that must run at the join points the pointcut picks out. For example, the aforementioned advice specifies that the piece of code which implements the operational trace for the Stack.pop method is executed after (indicated by the keyword after) the join points indicated by the pointcut popMethod.

    1   public privileged aspect PopAspect {
    2       //variable declarations
    3       //define the join point to insert the advices
    4       private pointcut popExecution():
    5           execution(public Object Stack.pop());
    6       //advice 1: precondition
    7       before(): popExecution() && if(setup.isChecked("pop", 0)) {
    8           //&& if: guard condition to enable precondition verification
    9           pre_size = ((Stack)thisJoinPoint.getTarget()).index;
    10          if (pre_size < 1)                       //precondition checking
    11              log.writePreViolation(...);
    12      }
    13      //advice 2: invariant in the beginning
    14      before(): popExecution() && if(setup.isChecked("pop", 1)) {
    15          if ((pre_size < 0) || (pre_size > 9))   //invariant
    16              log.writeInvariantViolation(...);
    17      }
    18      //advice 3: invariant at the end
    19      after(): popExecution() && if(setup.isChecked("pop", 2)) {
    20          post_size = ((Stack)thisJoinPoint.getTarget()).index;
    21          if ((post_size < 0) || (post_size > 9)) //invariant
    22              log.writeInvariantViolation(...);
    23      }
    24      //advice 4: postcondition
    25      after() returning: popExecution() && if(setup.isChecked("pop", 1)) {
    26          if (post_size != pre_size - 1)          //postcondition checking
    27              log.writePostViolation(...);
    28      }
    29  }

Figure 7: PopAspect code.

For assertion checking, the code included differs from the BCEL implementation (Figure 3), because the checks are implemented directly in the advices, so there is no need to explicitly call a checking method. Figure 7 shows the PopAspect for the Stack.pop method. In order to access the Stack's private attributes, which is necessary for assertion checking, it is implemented as a privileged [La03] aspect (regular aspects cannot access a class's private information). In the figure, lines 4-5 define the pointcut popExecution to capture the Stack.pop method execution. The precondition advice, in line 7, corresponding to the popExecution pointcut, specifies that the precondition is checked before the method execution starts. Lines 14 and 19 show the specification of the invariant advices, one that runs before the method execution starts (line 14) and the other after the method execution ends (line 19). The postcondition advice (line 25) runs only after normal method execution (after() returning), differently from the invariant advice (after), which runs regardless of whether the method returns normally or throws an exception. Assertion violations are registered in the log by the LogAssert class (Figure 5).

The instrumentation consists in weaving the aspects into the CUT at load time. AspectJ weaves into the CUT all the advices available in the library. To implement assertion enabling options (only assertions enabled through the ITesting interface are woven, although all assertions are implemented as advices by the component provider), advice inclusion is controlled by the Setup class. This class keeps track of which assertions have to be included and is queried at weaving time through the setup.isChecked(method: String, assertion: int) call (see, e.g., line 7 of Figure 7). If it returns false, the advice is not woven.

Test Driver Example

A test driver example, similar to the one presented in Figure 4, illustrates the use of TStack instrumented using AOP. The instrumentation consists in including the Stack.pop method's postcondition (the stack size must decrease by one element) and invariant (the stack index must stay between zero and ten elements). The code necessary to apply this test case is shown in Figure 8. In the AOP implementation, the weaving has to be started after the setup phase (lines 4 to 10).

    1   public class Driver {
    2       public static void main(String args[]) {
    3           //Tester component creation
    4           Tester.ITesting tester = new Testing();
    5           //Stack.pop postcondition enabling
    6           tester.insertPostcondition("pop", false);
    7           //Stack.pop invariant enabling
    8           tester.insertInvariant("pop", false);
    9           tester.setLog("Empty_stack.log");
    10          //In the AOP implementation, the weaving happens here
    11          //Stack component creation
    12          Component.Stack stack = new Component.Stack();
    13          stack.pop();  //Calls pop method
    14      }
    15  }

Figure 8: Driver class source code

Figure 9 shows the generated log file. Since tracking was not enabled, only the invariant violation was registered. If there were other test cases, their assertion violation messages would be registered in the same file.

    Invariant violation: Method Stack.pop
    Vector index = -1

Figure 9: Generated log file

5.3. Preliminary Evaluation

Built-in mechanisms increase the executable size as well as the runtime. Here we present preliminary results concerning the size overhead.

Table 1 presents source code size variations for the testable component, which is composed of the Tester, the Tracker and the CUT. The latter shows no variation between the BCEL and AspectJ implementations, since its source code is not affected by the instrumentation.

Table 1: Testable component source code size variation

             BCEL (lines of code)           AOP (lines of code)
    CUT      31                             31
    Tester   470 total (220 in library)     215 total (94 in library)
    Tracker  180 total (100 in library)     220 total (80 in library)

The Tester and Tracker sizes also include the testing and tracking libraries, respectively. The numbers inside parentheses indicate the amount of source code relative to the library implementations. For example, 220 lines of the Tester source code are concerned with the BCEL API.

The testable component executable code size, before CUT instrumentation, is approximately 30 Kbytes in the BCEL implementation and 20 Kbytes in the AOP implementation.

To measure the size overhead we considered two Stack component versions: with (Stack-E) and without (Stack) exception handling. In Stack-E, the pop method has a try-catch block to handle the ArrayIndexOutOfBoundsException [Su03]. The reason for considering a version with exception handling was to measure the impact on the CUT size when error tracking is used.
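Under these assumptions, the pop method of Stack-E might look like the following sketch (the recovery action is hypothetical):

    public Object pop() {
        try {
            index--;
            return data[index];
        } catch (ArrayIndexOutOfBoundsException e) {
            index = 0;      // hypothetical recovery: restore the empty state
            return null;
        }
    }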

For operational and error trace observation the Stack-E version was used, whereas for assertion measurements the Stack version was used. Table 2 shows the variation in both components' sizes due to the instrumentation.

Table 2: Variations in CUT executable size.

               Without instrumentation   BCEL instrumentation   AOP instrumentation
    Stack      660 bytes                 1190 bytes             1010 bytes
    Stack-E    798 bytes                 927 bytes              1580 bytes

The Stack-E version was instrumented with operational traces in the pop and push methods, as well as an error trace in the pop method. For the BCEL implementation, the testable component size (Stack-E + Tracker) was 8.36 Kbytes; for the AOP implementation, it was 11.05 Kbytes.

The Stack version, on the other hand, was instrumented with assertion checking for both the pop and push methods. The testable component size (Stack + Tester) was 13.83 Kbytes for the BCEL implementation and 9.77 Kbytes for AspectJ.

These preliminary results show that a testable component should be carefully designed. Monitoring all the methods, exception handlers and state alterations, as well as checking the usage contract of each operation of the component interface, is not feasible due to the space and runtime overhead. For that reason, the Tester and Tracker interfaces allow component users to tune the instrumentation according to their needs.

6 Conclusions and future work

This article proposed a testable component architecture to facilitate testing. Testability is increased by the inclusion of self-checking capabilities and tracking mechanisms into the component under test, which can be accomplished even when the source code is not available. Also, the component specification is made available to allow the automation of test case generation. As a proof of concept we implemented the proposed architecture for a simple Java component in two ways: using BCEL, a bytecode manipulation library, and using AspectJ, a toolkit for aspect-oriented programming.

The major advantage is helping component users test a component in an efficient way, guaranteeing the component's quality in the new environment. An advantage for component providers is that they can offer a testable component without revealing its source code. Also, the use of built-in mechanisms becomes easier because the same testing and tracking interfaces can be used for all the components in a system.

The major disadvantage is the amount of extra information, which increases the executable code size. To cope with this limitation, the testable component interface provides means to select the amount of instrumentation to use.

In spite of this practical limitation, the approach is useful for both component users and component providers. The former will have enough information to help in component understanding, testing and maintenance. The latter will be able to produce components with better quality, since elaborating the information needed to build a testable component helps quality improvement. Of course, both component users and providers have to balance the savings obtained by reducing the amount of embedded information against the quality required for the component.

Use in a real application is also under way, as are runtime impact measurements.

Acknowledgments

Camila Ribeiro Rocha is supported by CAPES/Brazil.

References

[Ad99] Addy, E.: Verification and Validation in Software Product Line Engineering. PhD Dissertation, College of Engineering and Mineral Resources, West Virginia University, 1999.

[BG03] Beydeda, S.; Gruhn, V.: State of the Art in Testing Components. In: 3rd International Conference on Quality Software, Dallas, 2003.

[BG03a] Beydeda, S.; Gruhn, V.: Merging components and testing tools – The self-testing COTS components (STECC) strategy. In: Proc. EUROMICRO Conference Component-based Software Engineering Track, 2003.

[Bi94] Binder, R.: Design for Testability in Object-Oriented Systems. In: Communications of the ACM, 37(9), Sept/Oct 1994; pp. 87-101.

[Bi00] Binder, R.: Testing Object-Oriented Systems – Models, Patterns and Tools. 5th edition. New York: McGraw-Hill, 2001.

[Bo01] Bhor, A.: Software Component Testing Strategies. Technical Report UCI-ICS-02-06, University of California, Irvine, 2001.

[Bu00] Bundell, G. et. al.: A Software Component Verification Tool. In: Proc. International Conference on Software Methods and Tools, Wollongong, 2000.

[Co01] Correct System Design Group. The Jass page. University of Oldenburg, 2001. URL: http://csd.informatik.uni-oldenburg.de/~jass/index.html Accessed in July 28, 2004.

[Ch97] Chiba, S.: OpenC++ 2.5 Reference Manual. Institute of Information Science and Electronics, University of Tsukuba, 1997-99.

[Da01] Dahm, M.: Byte Code Engineering with the BCEL API. Technical Report B-17-98, Freie Universität at Berlin, Institut für Informatik, 2001.

[EH03] Håkan, E.; Hörnstein, J.: Component+: Final Report. 2003. URL: http://www.component-plus.org/pdf/reports/Final%20report%201.1.pdf. Accessed in July 14, 2004.

[Ga00] Gao, J.: Component Testability and Component Testing Challenges. URL: www.sei.cmu.edu/cbs/cbse2000/papers/18/18.pdf. Accessed in May 20, 2004.

[Ga02] Gao, J. et. al.: On building testable software components. In: Lecture Notes in Computer Science, V. 2255, Springer Verlag, 2002; pp. 108-121.

[GAO95] Garlan, D.; Allen, R.; Ockerbloom, J.: Architectural Mismatch – Why Reuse is so Hard. In: IEEE Software, 12(6), Nov 1995; pp. 17-26.

[GZS00] Gao, J.; Zhu, E.; Shim, S.: Monitoring Software Components and Component-Based Software. In: IEEE Computer Software and Applications Conference (COMPSAC), Taipei, 2000.

[HMF92] Harrold, M.; McGregor, J.; Fitzpatrick, K. Incremental Testing of Object-Oriented Class Structures. In Proc. 14th. IEEE International Conference on Software Engineering (ICSE-14), 1992, pp 68-80.

[Ho89] Hoffman, D.: Hardware Testing and Software ICs. In: Proc. Northwest Software Quality Conference, Portland, 1989; pp. 234-244.

[Ki01] Kiczales, G. et al.: An Overview of AspectJ. In: Lecture Notes in Computer Science, V. 2072, Springer Verlag, 2001; pp. 327-353.

[La03] Laddad, R. AspectJ in Action: Practical Aspect-Oriented Programming. Manning Publications Co, 2003.

[Me92] Meyer, B.: Applying Design by Contract. In: IEEE Computer, 25(10), Oct 1992; pp. 40-51.

[MK96] McGregor, J.; Kare, A.: Parallel Architecture for Component Testing. In: Proc. 9th International Quality Week, 1996.

[MTY01] Martins, E.; Toyota, C.; Yanagawa, R.: Constructing Self-Testable Software Components. In: Proc. IEEE/IFIP Dependable Systems and Networks (DSN) Conference, Gothenburg, 2001.

[Pr97] Pressman, R.: Software Engineering – a Practitioner’s Approach. McGraw-Hill, 4th Edition, 1997.

[Su03] Sun Microsystems, Inc.: Java 2 Platform Standard Edition v1.4.2 – API Specification. 2003. URL: http://java.sun.com/j2se/1.4.2/docs/api/. Accessed in May 28, 2004.

[Sz98] Szyperski, C.: Component Software – Beyond OO Programming. Addison-Wesley, 1998.

[TDJ97] Traon, Y.; Deveaux, D.; Jézéquel, J.: Self-testable Components – from Pragmatic Tests to Design-for-Testability Methodology. In: Proc. 1st. Workshop on Component-Based Systems, Switzerland, 1997.

[UM01] Ukuma, L.; Martins, E.: Development of Self-Testing Software Components. In: Proc. 2nd IEEE Latin-American Test Workshop (LATW), Cancun, 2001; pp. 52-55.

[Vo97] Voas, J.: A Defensive Approach to Certifying COTS Software. Technical Report, Reliable Software Technologies Corporation, 1997.

[VSS98] Voas, J.; Schmid, M.; Schatz, M.: A Testability based Assertion Placement Tool for Object-Oriented Software. Technical Report NIST CGR 98-735, Information Technology Laboratory, 1998. URL: http://www.rstcorp.com

[Wa97] Wang Y. et. al..: On Testable Object-Oriented Programming. In: ACM Software Engineering Notes, 22(4), July 1997; pp. 84-90.

[We98] Weyuker, E.: Testing Component-Based Software – a Cautionary Tale. In: IEEE Software, 15(5), Sept/Oct 1998; pp. 54-59.

[WKW99] Wang, Y.; King, G.; Wickburg, H.: A Method for Built-in Tests in Component-based Software Maintenance. In: Proc. 3rd European Conference on Software Maintenance and Reengineering (CSMR), Holland, 1999; pp. 186-189.