Implementation-Oriented Secure Architectures

Daniel Conte de Leon, Jim Alves-Foss, and Paul W. Oman

Center for Secure and Dependable Systems, University of Idaho

[danielc, jimaf, oman] at cs.uidaho.edu

Abstract

We propose a framework for constructing secure systems at the architectural level. This framework is composed of an implementation-oriented formalization of a system's architecture, which we call the formal implementation model, along with a method for the construction of a system based on elementary analysis, implementation, and synthesis steps. Using this framework, security vulnerabilities can be avoided by constraining the architecture of a system to those architectures that can be rigorously argued to implement all corresponding functional and security requirements, and no other. Furthermore, the framework enables the verification and validation of system correctness by enforcing traceability of final system components to their corresponding design, architecture, and requirement work products.

1. Introduction

High assurance and critical computing systems require compelling evidence that the final developed system satisfies a defined set of critical properties as well as a defined set of functional requirements. Both nonfunctional requirements, also called system constraints, and functional requirements contribute to system correctness; a system is correct if it correctly, completely, and uniquely implements every functional requirement while at the same time it correctly, completely, and uniquely implements every nonfunctional requirement [5].

Nonfunctional requirements are referred to as dependability properties [1]. Dependability properties of a high assurance and critical computing system, such as security and safety, are emergent properties of the system [17]. The Stanford Encyclopedia of Philosophy offers a good introduction to the concepts of emergence and emergent properties [25]. "Emergent properties are systemic features of complex systems which could not be predicted (practically speaking; or for any finite knower; or for even an ideal knower) from the standpoint of a pre-emergent stage, despite a thorough knowledge of the features of, and laws governing, their parts" [25]. Emergence is intrinsically related to the notion of levels of abstraction: an emergent property "emerges" at a higher level of abstraction but it cannot be observed at any of the previous lower levels of abstraction [6].

Complex systems are built as a set of interacting components. A system model describes and/or specifies those interacting components and their relationships at different levels of abstraction. In order to anticipate and verify the emergent properties of a system by using a system model, it is necessary to position ourselves at a higher level of abstraction than that of the components being analyzed in the system model. Hinton exemplified how undesired or unexpected emergent properties can arise at higher levels of abstraction (i.e., at the system level) by showing how an insecure system can be composed of two Generalized Non-Interference (GNI) secure components [9], and Leveson showed how unexpected emergent behaviors resulted in accidents in safety-critical systems [18].

Current state-of-the-practice approaches to systems and software requirements, architecture, and implementation do not facilitate the analysis of emergent security properties. This is because most current approaches do not enforce the creation and maintenance of essential traceability links, which establish the relationship between system components and their design, architecture, and functional and nonfunctional requirement counterparts. For example, most current approaches are focused on a divide-and-conquer strategy, which is necessary for complex systems development, but they do not facilitate the analysis of emergent security properties that appear at higher levels of abstraction after system components are built, integrated, and deployed.

To solve this persistent problem, we argue that hybrid approaches to system specification and development are necessary. We propose a framework for the incremental development of secure systems that is formal but, at the same time, allows the level of formalization to evolve along with system analysis, design, and implementation. In addition, this framework enforces a strong connection, through the use of formal traceability relations, between all work products and at all levels of abstraction. By using the proposed framework, security vulnerabilities can be avoided by constraining the architecture of a system to those architectures that can be shown to completely and uniquely implement all corresponding functional and nonfunctional security requirements, including environmental constraints.

The proposed framework is composed of:

I. A Formal and Implementation-Oriented Architectural Model of a System: In this framework the architecture of a system is represented using a formal implementation model that takes into account all deliverables resulting from system analysis, development, and deployment processes.

II. A Method for Systems Development: A method for the incremental development of systems that comply with a set of implementation constraints over a formal implementation model (implementation-oriented architectural model). In a formal implementation model the level of formalization may incrementally evolve along with the analysis, design, and implementation processes.

The remainder of this paper is organized as follows: Section 2 justifies our hybrid (formal and informal) approach to systems development. Section 3 briefly describes the formal foundation of this framework, which is based on previous work by the authors, which focused on the discovery of hidden dependencies in order to anticipate safety hazards in safety-critical systems [4, 5]. Sections 4 and 5 describe the framework for the development of secure system architectures. To demonstrate how this proposed framework would help build more secure systems, in Section 6 we show how two security vulnerabilities could have been avoided by using the proposed framework. Our conclusions, along with some open questions, are presented in Section 7. Lastly, Section 8 briefly reviews related work.

2. Merging Formal and Informal System Development Approaches

A set of well-known engineering and design methodologies is used today for the analysis of high assurance and critical computing systems and for their separation into constructible or realizable components. Furthermore, there exists a set of methods for the creation and later synthesis of those developed components. For example, requirements engineering methods are used to elicit system requirements and to further divide those requirements into derived (i.e., child) requirements [11], and architectural analysis and specification methods are used to develop and specify the architecture of a system, including hardware and software [2]. In addition, build utilities, compilers, linkers, and assemblers are used to construct the software of a system from its source code, and hardware component assembly and connection techniques are used to assemble a system's hardware.

However, methods for the formal analysis and verification of security properties of a final assembled or in-production system are scarce and not commonly used in industry. While it has been demonstrated that formal approaches to developing secure systems can help to foresee major flaws in the design of a system, most formalizations are disconnected from the initial informal requirements and, typically, from the final implemented system as well. In other words, except for formal development systems such as Kestrel's Specware® approach [12, 13] (Specware® is a registered trademark of the Kestrel Institute), the Galois Haskell-based approach [7, 14], and the Praxis High Integrity Systems SPARK Ada approach [8, 19], there is rarely a formal connection between the formal models and the final system implementation. Furthermore, there have been several cases where formalizations of security protocols have been found semantically flawed, mainly because assumptions made during the formalization process did not hold true in the final implemented system or because the system was underspecified in its formal representation (i.e., missing assumptions) [9].

Purely formal approaches have been shown to be very useful for developing dependable systems, but they also carry burdens, such as a lack of trained personnel and steep learning curves, that have precluded widespread adoption. For most formal methodologies there is also a lack of a strongly structured (or formal) connection with both ends of the system development process: informal requirement descriptions and final (in-production) system implementations.

On the other hand, standard programming methods as currently used by industry, including object-oriented programming languages, have also failed to create secure systems, mainly because they do not enable the verification of systemic (emergent) properties or nonfunctional requirements and because they are disconnected from system requirements and other formal and informal system descriptions or specifications.

We do not argue in this paper for or against either formal or informal system and software development methods; we recognize the value of both. However, we point out that when these methods are used separately, without strongly structured connections between them, they have failed to achieve the promised goals. New dependable system development methods that combine formal and informal approaches are necessary.

The proposed framework reuses some aspects of Leveson's Intent Specifications and SpecTRM methodologies, which have proved successful for the specification of safe air traffic control and airborne systems [15, 16] (see the Related Work section). It also extends our previous work, which is briefly described in the next section.

3. Formal Implementation Models

A formalization of traceability relationships based on set theory and predicate calculus was initially introduced in Conte de Leon [3]. An extended analysis of this formalization for the specific case of the partial implementation relationship, and an improved description of the formal system, appears in Conte de Leon and Alves-Foss [5].

In our work, we divide system components, architectural and design artifacts, and requirements into what we call work product sections, which we define as uniquely identifiable, addressable, and meaningful sections of work products [22]. We abbreviate work product sections as wps and wps(s) for singular and plural, respectively. We call S the set of all wps(s) of a system. We call implements the partial order established by the partial implementation traceability relation between wps(s) in S. Further, we denote the top and bottom of our abstract formal model as ⊤ and ⊥, respectively.

The formal system Γ described by Conte de Leon and Alves-Foss is based on set theory and a binary predicate calculus. In Γ, each traceability link between any two wps(s) is formalized as an axiom or theorem. In the same article, we analyzed and formalized the structural properties of the partial implementation traceability relationship between wps(s). We formalized the partial implementation traceability relationship as a strict partial order (irreflexive, asymmetric, and transitive). This strict partial order between wps(s), along with a new technique named conceptual completeness, was applied in order to discover hidden dependencies between wps(s) that led to safety hazards which developed into catastrophic accidents. A prototype expert system (SyModEx) was constructed to verify and enforce these properties, among others, as inference rules of Γ.
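To make the structure of Γ concrete, the following sketch represents a formal implementation model as a set of wps identifiers plus a set of direct implements links, and checks the strict-partial-order properties over the transitive closure. It is only a minimal illustration of the formalization described above; the Model class and its method names are ours, not identifiers from the paper or from the SyModEx tool.

```python
from itertools import product

class Model:
    """Minimal formal implementation model: wps identifiers plus direct
    'implements' (partial implementation) links between them."""

    def __init__(self):
        self.wps = set()    # work product sections, by name
        self.links = set()  # direct links as (implementor, implementee) pairs

    def add_wps(self, name):
        self.wps.add(name)

    def add_implements(self, implementor, implementee):
        self.wps.update((implementor, implementee))
        self.links.add((implementor, implementee))

    def implements_closure(self):
        """Transitive closure of the direct links."""
        closure = set(self.links)
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(closure, repeat=2):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
        return closure

    def is_strict_partial_order(self):
        """Irreflexivity and asymmetry of the closed relation; transitivity
        holds by construction of the closure."""
        closure = self.implements_closure()
        irreflexive = all((x, x) not in closure for x in self.wps)
        asymmetric = all((b, a) not in closure for (a, b) in closure)
        return irreflexive and asymmetric
```

A cycle of implements links, for instance, would make the closure reflexive and the check would fail, signaling an invalid model.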

In this paper, we extend the Γ formal system introduced in Conte de Leon and Alves-Foss [5] with a different objective: we create new inference rules that constrain the evolution of a formal implementation model of a system in order to avoid security vulnerabilities. These rules enforce the method for secure systems development explained in Section 5 of this paper. However, these rules do not specify the processes, methodologies, or languages used to create these architectures; they only constrain the architecture of a system to a set of "safe" architectural states.

Figure 1. Complete and Unique Implementation of Work Product Sections.

4. Framework Definitions: Partial, Complete, and Total Implementation of Work Product Sections

In this paper, we augment the formal model Γ with three traceability relationship types and three inference rules. The three new relationship types are: completelyImplements, uniquelyImplements, and totallyImplements. We define and describe these newly introduced concepts below.

Definition 1. Complete implementation between work product sections: A wps b (implementor) completely implements a wps t (implementee) if and only if every chain ⟨C, implements⟩ in Γ whose minimum element is ⊥ and whose maximum element is t contains b.

This definition can also be stated as: a wps b (implementor) completely implements a wps t (implementee) if and only if every intermediate wps that is partially implemented by b also partially implements t. This is equivalent to saying that every directed path from ⊥ to t contains b. Figure 1 illustrates this definition with a diagram.
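As an illustration of the path-based reading of Definition 1, the sketch below enumerates the directed chains of implements links from ⊥ up to an implementee t and tests whether an implementor b lies on all of them. The (implementor, implementee) pair representation follows the earlier sketch; all_paths, completely_implements, and the toy wps names are hypothetical helpers of ours, not functions defined in the paper.

```python
def all_paths(links, start, goal):
    """All directed chains of implements links from start to goal,
    assuming an acyclic set of (implementor, implementee) pairs."""
    if start == goal:
        return [[goal]]
    paths = []
    for (a, b) in links:
        if a == start:
            for rest in all_paths(links, b, goal):
                paths.append([start] + rest)
    return paths

def completely_implements(links, b, t, bottom="BOTTOM"):
    """Definition 1: every chain from the bottom meta-artifact to t contains b."""
    chains = all_paths(links, bottom, t)
    return bool(chains) and all(b in chain for chain in chains)

# Hypothetical toy model: BOTTOM implements c1 and c2, and both c1 and c2
# partially implement requirement r.
toy_links = {("BOTTOM", "c1"), ("BOTTOM", "c2"), ("c1", "r"), ("c2", "r")}
print(completely_implements(toy_links, "c1", "r"))  # False: the chain via c2 misses c1
```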

Informally, a wps b uniquely implements another wps t if and only if b does not directly or indirectly partially implement a third wps without going through t. In other words, there are no partial implementation traceability links going out of b that do not go through t. In this case, we could also say that t is uniquely implemented by b.


Definition 2. Unique implementation between work product sections: An implementor wps b uniquely implements an implementee wps t if and only if every chain ⟨C, implements⟩ in Γ whose maximum element is ⊤ and whose minimum element is b contains t.

Definition 3. Total implementation between two work product sections: A wps b (implementor) totally implements a wps t (implementee) if and only if b completely and uniquely implements t.

If b totally implements t, then there is a lattice-like algebraic structure embedded in Γ, with respect to the implements traceability relation, such that b is the ⊥ and t is the ⊤. In this structure, given any pair (x, y) of wps(s), the set of lower bounds and the set of upper bounds of (x, y) are nonempty. Using this algebraic structure we can mathematically verify that a given formal implementation model, and therefore the architecture of a given system, complies with the set of rules defined in Section 5. As we will see in the next section, however, the formal model is not a proper lattice; it is only lattice-like.
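Definitions 2 and 3 admit the same path-based reading, looking upward from the implementor toward the top meta-artifact instead of downward from the implementee. The sketch below builds on the hypothetical all_paths and completely_implements helpers from the previous sketch; the function names are again ours, not the paper's.

```python
def uniquely_implements(links, b, t, top="TOP"):
    """Definition 2: every chain whose minimum element is b and whose
    maximum element is the top meta-artifact contains t."""
    chains = all_paths(links, b, top)  # all_paths from the previous sketch
    return bool(chains) and all(t in chain for chain in chains)

def totally_implements(links, b, t, bottom="BOTTOM", top="TOP"):
    """Definition 3: b completely and uniquely implements t."""
    return (completely_implements(links, b, t, bottom)  # previous sketch
            and uniquely_implements(links, b, t, top))
```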

5. System Development Method

Our proposed system development method is composed of six steps:

1. basic system definition.
2. successive refinement through analysis.
3. atomic implementation of work product sections.
4. successive implementation of higher level wps(s) through synthesis.
5. a hybrid (analysis and synthesis combined) step used to create two or more lower abstraction level wps(s) to implement two or more higher abstraction level wps(s).
6. finalization test.

Steps 2, 3, 4, and 5 do not necessarily need to be performed in sequence but can be interleaved. Step 5 can be used as a substitute for steps 3 and 4 when more than one implementor wps and more than one implementee wps are used. Obviously, there are similarities between our proposed framework and previous methods for system development, but there are also several differences. We elaborate on those similarities and differences at the end of this section, after describing each of the steps in our method.

1. Basic System Definition: In this step we create two system wps(s) to match two meta-artifacts, which are the top (⊤) and bottom (⊥) of our formal implementation model. These two wps(s) will represent the main requirement of the system and the system implementation itself. For example, in Figure 3 the wps(s) DBMSMainReq and DBMSImpl correspond to ⊤ and ⊥, respectively. Figure 2, Subfigure (1) depicts this step. Our final goals are: i) that the implemented system (represented by ⊥) completely implements every functional and nonfunctional requirement and ii) that the system goal (represented by ⊤) is uniquely implemented by every deployed system component. The definition of a ⊤ requirement as a first step in developing a system is similar to Leveson's Intent Specifications method, where the goals of the system are stated as the very first step [16, 17].
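A minimal sketch of this first step, assuming the hypothetical Model class from the Section 3 sketch: the two meta-artifacts are simply the first two wps(s) added to an otherwise empty model, named here after the DBMSMainReq and DBMSImpl wps(s) of Figure 3.

```python
# Step 1: basic system definition, using the wps names of Figure 3.
model = Model()
TOP, BOTTOM = "DBMSMainReq", "DBMSImpl"
model.add_wps(TOP)     # the main system requirement (top meta-artifact)
model.add_wps(BOTTOM)  # the system implementation itself (bottom meta-artifact)
```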

2. Analysis: In an analysis step we divide a wps into two or more wps(s) that will reside in a lower level of abstraction than the original wps. By construction, each of the newly created wps(s) will partially implement the original higher abstraction level wps. For example, we can divide a requirement into two or more newly derived requirements, or a system into two or more subsystems. Figure 2, Subfigure (2) illustrates this step.

An analysis step can be seen, in part, as stepwise refinement, in which case the verification of the higher level wps (implementee artifact) is the question of whether the lower abstraction level wps(s) (implementor artifacts) completely provide all the behavior required or specified by the corresponding higher abstraction level wps. However, we need more than just complete implementation; we also need to assure that the lower level wps(s) do not implement any other, unspecified behavior. Therefore, the additional restriction we apply to an analysis step is that the newly created wps(s) are implementation independent and completely specified. In other words, these newly created components collaborate in order to implement the higher level wps, but they do not implement each other, nor do they collaborate to implement a behavior that is not explicitly stated by the higher level wps. Notice that we are asking that the newly created wps(s) be completely specified, not necessarily completely formally specified.
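The bookkeeping implied by an analysis step can be sketched as follows, again assuming the hypothetical Model class and continuing the Step 1 example; the analyze helper and the derived requirement SQLExecution are illustrative assumptions of ours, not artifacts from the paper.

```python
def analyze(model, original_wps, derived_wps_list):
    """Analysis step: divide original_wps into derived wps(s); by
    construction each derived wps partially implements the original."""
    for derived in derived_wps_list:
        model.add_implements(derived, original_wps)

# Hypothetical example, continuing the Step 1 sketch: divide the main
# requirement into two derived requirements (SQLExecution is invented).
analyze(model, "DBMSMainReq", ["AuthPolicy", "SQLExecution"])
```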

3. Atomic Implementation: In this step the atomic implementation of a work product section (wps) is performed. This is the basic implementation step; in it, an analyst or system engineer indicates that a given wps totally implements a given requirement or higher abstraction level wps. Figure 2, Subfigure (3) illustrates this step.

By atomic implementation we mean that the implementee wps has been sufficiently analyzed and specified that its implementor wps is a simple and direct implementation of its corresponding higher abstraction level implementee, and no further analysis and separation is necessary. In other words, a rigorous argument of correct and total implementation can be made by a group of trained engineers without further analysis and separation.

4. Synthesis: In a synthesis step we unify two or more higher abstraction level wps(s) (atomic or synthesized) into one lower abstraction level wps. By construction, the new wps partially implements those higher abstraction level wps(s) synthesized into it.

Figure 2. System Development through Formalized Analysis and Synthesis Implementation Steps.

The restriction on this synthesis step is that the new lower abstraction level wps does not add any new functionality. In other words, the new component partially implements only those wps(s) used in its composition. Figure 2, Subfigure (4) illustrates this step.

5. Hybrid Step (Analysis/Synthesis): Since analysis and synthesis steps can be performed together, we add an additional step, called a hybrid step, which corresponds to the combination of analysis and synthesis steps. Figure 2, Subfigure (5) illustrates this step.

The rationale behind this step is twofold. First, it enables the modeling of wps reuse, that is, cases where one lower level wps is used to partially implement more than one higher level wps. Notice that this includes the reuse of any artifact, not only source code. Second, the hybrid step enforces the following rule, which is a derivation of the conceptual completeness principle [5]: if wps bi partially implements wps(s) ti and tj, and there exists a wps bj that partially implements ti, then bj must partially implement tj as well.
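The rule derived from conceptual completeness can be phrased as a direct check over the partial implementation links. The sketch below reports every wps bj that violates it; hybrid_step_violations is a hypothetical name of ours and the example links are invented.

```python
def hybrid_step_violations(links):
    """Rule derived from conceptual completeness: if bi partially implements
    both ti and tj, then any bj that partially implements ti must also
    partially implement tj. Returns the (bj, tj) pairs that violate this."""
    implementees = {}
    for (b, t) in links:
        implementees.setdefault(b, set()).add(t)
    violations = []
    for bi, targets_i in implementees.items():
        for ti in targets_i:
            for tj in targets_i:
                if ti == tj:
                    continue
                for bj, targets_j in implementees.items():
                    if bj != bi and ti in targets_j and tj not in targets_j:
                        violations.append((bj, tj))
    return violations

# Invented example: b1 implements t1 and t2, b2 implements only t1.
print(hybrid_step_violations({("b1", "t1"), ("b1", "t2"), ("b2", "t1")}))
# -> [('b2', 't2')]: b2 must also partially implement t2
```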

6. Finalization Test: The development of the system is finalized when the system has been completely implemented and we can prove our goals by analyzing the formal implementation model of the system; that is, the ⊤ wps is totally implemented by every wps in the model and the ⊥ wps totally implements every wps in the formal implementation model. This is with the exception of those wps(s) that belong to internal lattice-like substructures created by the use of a hybrid step.

This finalization step is verified by testing that, given any two wps(s) x and y in the formal implementation model (Γ), the set of lower bounds of (x, y) and the set of upper bounds in Γ are nonempty. As a result, a complete formal implementation model would contain a nonempty set of chains from ⊥ to ⊤ where every wps belongs to at least one chain. The model is neither complete nor valid until this condition is satisfied.
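The finalization test can be sketched directly over the transitive closure of the implements links: the lower bounds of a pair (x, y) are the wps(s) that transitively implement both x and y, and the upper bounds are those that both x and y transitively implement. The code assumes the hypothetical Model class (with implements_closure) from the Section 3 sketch; finalization_test is our name for it.

```python
def finalization_test(model):
    """Check that, for every pair (x, y) of wps(s), the sets of lower and
    upper bounds with respect to the implements relation are nonempty."""
    closure = model.implements_closure()

    def below(z):  # z plus everything that transitively implements z
        return {z} | {a for (a, b) in closure if b == z}

    def above(z):  # z plus everything that z transitively implements
        return {z} | {b for (a, b) in closure if a == z}

    for x in model.wps:
        for y in model.wps:
            if not (below(x) & below(y)) or not (above(x) & above(y)):
                return False, (x, y)  # offending pair: model not yet complete
    return True, None
```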

5.1. Comparison with Current Methods

Clearly, there are similarities between our proposed implementation-oriented secure architectures framework and current methods for system development.

Similarity (a): Similar Development Process. This framework strongly resembles the current state-of-the-practice development process of requirements analysis, architectural design, implementation, and integration [11]. This similarity is in fact one of the advantages of our framework with respect to purely formal specification approaches.

Similarity (b): Similar Component Implementation Techniques. This framework does not specify which techniques, specification languages, or programming languages need to be used for developing system artifacts. It only constrains the way the resulting wps(s) are combined with each other in order to implement a higher level artifact.

Nevertheless, there are also differences.

Difference (a): Constrained Analysis and Synthesis Steps. In contrast with current state-of-the-practice development methods, in this framework analysis and synthesis steps are constrained by architectural constraint rules. For example, if an analysis step is performed and a higher abstraction level wps is divided into two lower abstraction level wps(s), then there must be no remaining behavior in the higher abstraction level wps that has not been moved into the lower abstraction level. For example, if a module t that partially implements a requirement r is divided into two modules bi and bj, then module t must not partially implement any of the behavior specified by r; only bi and bj must, and the responsibilities of t will be reduced to the control of bi and bj.

Difference (b): Traceability Links Are Kept at All Stages of the Development Process. In this framework, and in contrast with both current state-of-the-practice development methods and purely formal specification methods, we can and do trace which artifacts are implementing each other and how components or artifacts collaborate in order to implement a higher abstraction level artifact such as a requirement. This formal and complete traceability approach facilitates the verification and validation of system correctness with respect to the system's informal requirements and environmental constraints.

Difference (c): Formal Finalization Step. In this framework there is a clear and formal finalization step, which is oriented toward the objective of proving that the final implemented system completely and uniquely implements the stated system goal. This step is a formal test on the architecture of the system and must be proved for a given formal implementation model of a system.

6. Vulnerability Case Studies

In order to describe how our implementation-oriented approach to secure architectures could help to build more secure systems, and to enable discussion about this framework, we introduce two vulnerabilities along with their associated formal implementation models. We also show how the application of our set of rules would have helped to anticipate these vulnerabilities.

6.1. Vulnerability Note VU#180147: Oracle DBMS Authentication Bypass

The Vulnerability Note VU#180147 was published by the US-CERT in February 2002 [23]. This vulnerability affects multiple versions (9i, 8i, 8) and multiple releases (from 8.0.1 to 9.0.1) of the Oracle® Database Management Server (DBMS). (Oracle® is a registered trademark of the Oracle Corporation.)

The Oracle DBMS, specifically the Database Server Application (DBServerApp), allows authorized users to remotely connect to the database and request the DBServerApp to execute a PL/SQL statement. (PL/SQL, Procedural Language/Structured Query Language, is a proprietary extension to the SQL language used in Oracle® Database Management Systems; it allows procedural and object-oriented constructs to be used along with embedded SQL statements or queries for the access and management of data in an Oracle® DBMS [20].) The process in charge of receiving PL/SQL queries or statements is called the Listener (Figure 3). Whenever the Listener receives a PL/SQL query or statement, it forks a new process (ChildSQLProcess), which is in charge of verifying that the requesting user has appropriate access rights to execute such a query/statement and is also in charge of executing the query/statement. However, if the user request contains a call to an external library, the ChildSQLProcess requests the Listener to execute the requested external library. In order to execute the external library, the Listener calls an external library executor process, ExtLibraryExec. The vulnerability resides in the fact that neither the Listener nor the ExtLibraryExec process verifies that the user is authorized to perform such a query/statement; only the child processes (ChildSQLProcesses) do. Therefore, a malicious process masquerading as a ChildSQLProcess can request the Listener to call an external library and completely bypass the authentication procedure [21, 23].

Figure 3 shows a formal implementation model we have developed to represent the requirements and components of a DBMS application. In Figure 3 the DBMSMainReq and DBMSImpl components are the ⊤ and ⊥ meta-artifacts of our formal model, respectively. Also, the partial implementation traceability links from the AuthLibs component to every ChildSQLProc[1/2/3] component, from every ChildSQLProc[1/2/3] component to the Listener, and from the Listener to the DBServerApp can be derived from the architectural structure of the source code (i.e., a call tree). The traceability link from the AuthLibs component to the AuthPolicy requirement can be argued true by construction of the system; in other words, it can be added by an atomic implementation step. The traceability links from Listener to AuthPolicy, as well as the traceability link from DBServerApp to AuthPolicy, cannot be argued by construction, since non-elementary analysis, implementation, and synthesis steps have been used for the development of the lower level components; they need to be proved valid in the formal implementation model.

Observe the dotted arrow from the AuthLibs component, which represents the implementation of the authorization policy, into the ExtLibraryExec component. This dotted arrow indicates that the ExtLibraryExec component is not partially implemented by AuthLibs. In this implementation-oriented secure architectures framework, the absence of this traceability link represents a security vulnerability. This is because the Listener component does not completely implement the requirement AuthPolicy (due to the missing traceability link represented by the dotted arrow). As a result, we cannot infer that DBServerApp totally implements the corresponding DBMSMainReq, which is our final goal.

Figure 3. Partial Formal Implementation Model and Associated Security Vulnerability (Example 1).

In previous work [5], which focused on the discovery of hidden implementation dependencies in order to anticipate safety risks, the missing traceability link from AuthLibs to ExtLibraryExec, among others, is called a hidden implementation dependency and was discovered and indicated as an impDependency traceability link by our SyModEx tool.
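To illustrate how such a missing link would surface under the earlier checks, the sketch below encodes one possible reading of the Figure 3 model as (implementor, implementee) links and asks whether AuthLibs lies on every chain from the bottom meta-artifact up to the Listener. The link set is our simplified interpretation only (as the authors stress, these models are not the actual Oracle® DBMS architecture), and it reuses the hypothetical completely_implements helper sketched in Section 4.

```python
# Our simplified reading of Figure 3 as (implementor, implementee) links;
# the dotted link ("AuthLibs", "ExtLibraryExec") is deliberately absent.
figure3_links = {
    ("DBMSImpl", "AuthLibs"), ("DBMSImpl", "ExtLibraryExec"),
    ("AuthLibs", "ChildSQLProc1"), ("AuthLibs", "ChildSQLProc2"),
    ("AuthLibs", "ChildSQLProc3"), ("AuthLibs", "AuthPolicy"),
    ("ChildSQLProc1", "Listener"), ("ChildSQLProc2", "Listener"),
    ("ChildSQLProc3", "Listener"), ("ExtLibraryExec", "Listener"),
    ("Listener", "DBServerApp"),
    ("DBServerApp", "DBMSMainReq"), ("AuthPolicy", "DBMSMainReq"),
}

# Is AuthLibs on every chain from the bottom meta-artifact up to Listener?
print(completely_implements(figure3_links, "AuthLibs", "Listener",
                            bottom="DBMSImpl"))
# -> False: the chain DBMSImpl -> ExtLibraryExec -> Listener bypasses the
#    authorization libraries, so the argument that DBServerApp totally
#    implements DBMSMainReq cannot be completed.
```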

6.2. Vulnerability Note VU#871756: Oracle DBMS Escalation of Privileges during Initial Authentication

The Vulnerability Note VU#871756 was published by the US-CERT on 17 January 2006 [24]. This vulnerability affects multiple versions (10g, 9i, and 8i) of the Oracle DBMS [10].

The user authentication protocol used to connect to an Oracle DBMS is composed of two steps. In the first step the client process sends a username to the server process, and the server validates the username and responds to the client. In the second step the client sends the username, an obfuscated password, and a set of attribute/value pairs to configure the user session. One of these attribute/value pairs is the "AUTH-ALTER-SESSION" attribute, which is associated with an SQL statement. The security vulnerability VU#871756 resided in the fact that this SQL statement was executed by the DBMSServerApp without any verification whatsoever of user rights, allowing any valid user to execute an arbitrary SQL statement with system privileges.

Stated in the terms of our architectural implementation model (Figure 4), the process in charge of executing the SQL statement (SetUpSessionAttributes), or one or more of its lower abstraction level implementor wps(s), is (are) not partially implemented by the authentication libraries AuthLibs. Therefore, the LogIn process does not totally implement the authentication policy AuthPolicy. Figure 4 shows a formal implementation model developed to represent this vulnerability.

Figure 4. Partial Formal Implementation Model and Associated Security Vulnerability (Example 2).

We remark that the models shown in Figures 3 and 4 are only simplified interpretations of an architecture corresponding to the Oracle® DBMS at the time of the vulnerabilities. We do not imply in any way that these models represent the actual architecture of the Oracle® DBMS. The models were developed only for the purpose of exemplifying how our framework could help minimize the occurrence of security vulnerabilities.

7. Conclusions

We introduced a framework composed of i) a formal implementation model of a system, which is based on set theory and predicate calculus [4, 5], and ii) a method for the successive refinement of a formal implementation model of a system through analysis and synthesis steps. These formalized development steps constrain the formal implementation model, and therefore the architecture, of a system to a lattice-like algebraic structure. This algebraic structure ensures that all requirements and constraints that a system component must implement are in fact considered when developing the system. Furthermore, this framework enables rigorous verification and validation of system correctness while at the same time allowing informal approaches to be used in a more systematic way.

We believe that our framework is a step toward the effective use of mathematically based approaches in state-of-the-practice software engineering. Its main strength resides in the fact that it enforces an algebraic structure on a system architecture while, at the same time, allowing the formalization level to be adjusted during system development and to vary depending on the portion of the system in question. However, we also recognize that the framework is not without problems.

In order to further enable a proactive discussion of our implementation-oriented approach to secure system architectures, we now introduce some topics and questions which we believe need to be considered and addressed as a prerequisite for a successful implementation of our framework.

One difficulty we anticipate for the future implementation of our approach is the transformation step from existing development methods to the framework introduced in this paper. How do object-oriented design and programming methods interact with our newly proposed framework? Is it possible to implement a system created using our implementation-oriented secure architectures using object-oriented design and programming techniques? What changes may our framework need to undergo in order to coexist with the object-oriented methodology?

In a similar way, what changes will current programming languages (structured and object-oriented) need to undergo in order to accommodate the framework? For example, when implementing an artifact we require that all functionality must reside in the lower abstraction level (implementor) wps(s) and that the higher abstraction level artifact (implementee) must act as a manager of the lower level wps(s). What changes would be needed to programming languages in order to accommodate these restrictions? As one of many options we could, for example, require code modules to be segregated into either control modules or functionality modules.

This framework has been purposely designed to allow the modeling of a main system goal and of the system requirements (including functional and nonfunctional requirements) from the start. Furthermore, it has been designed to allow a formal representation of the partial implementation traceability links between the system requirements and all other artifacts. For this reason, we believe in the adequacy of the framework to model all security features of a system. This is because, in the end, all security features must necessarily derive from, or map to, a security requirement or constraint.

For example, we believe that most current buffer overflow vulnerabilities could be seen, at a high level of abstraction, as the absence of a partial implementation link between a wps representing a compiler and a protection constraint wps stating that variables must not be referenced outside their memory boundaries. Of course, the same constraint could be totally implemented by other wps(s) more internal to the lattice-like formal implementation model; for example, by using a safe library that ensures (completely implements) such a constraint, or even by ensuring that every access to a variable partially implements such a constraint.

Although we believe that by using our framework most security features could be modeled and verified at a workable level of abstraction, we also recognize that more work is needed in order to address the scalability of the process.

8. Related Work

The idea of stating system goals up front has been borrowed from the work of Leveson and colleagues at the Massachusetts Institute of Technology. Leveson et al. have developed a safety and human-centered approach to the design and specification of safe systems (air traffic control and airborne systems) [15, 16]. The basic techniques behind Leveson's approach are twofold: Intent Specifications and the SpecTRM-RL requirements specification language. The first step in an Intent Specifications-based specification is the statement of system goals (intent). Our framework goes a step further in this approach and states that there is one system goal that encompasses all other system objectives or subgoals. We call this goal the Main System Requirement, and it forms the ⊤ (top) meta-artifact of our architecture. In a similar way, the highest refinement level in an Intent Specification is the system implementation level, which we unified into our System Implementation component, corresponding to our ⊥ (bottom) meta-artifact.

With respect to formal approaches that allow the development of software implementations while keeping full traceability to the formal structures, the Kestrel Institute offers a full-blown development application based on the use of their Metaslang formal specification language and the use of categorical operators for the derivation of program refinements. Kestrel's formalization approach defines what is called the Specware development process and commences with an initial formal specification written in Metaslang. Successive refinements are applied to the specification until it reaches a final state; all stages of the specification are executable [12]. However, this approach necessarily requires the use of the Metaslang formal specification language and of proprietary component implementation technologies. In contrast, our framework allows the use of any specification and programming language as long as the system architecture is constrained by its formal implementation model as described in this paper.

9. Acknowledgments

The material presented in this paper is based on research sponsored by the Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA) under agreement number F30602-02-1-0178 with the Center for Secure and Dependable Systems (CSDS) at the University of Idaho. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of AFRL and DARPA or the U.S. Government.

10. References

[1] A. Avizienis, J. C. Laprie, B. Randell, and C. Landwehr, "Basic concepts and taxonomy of dependable and secure computing," IEEE Trans. Dependable and Secure Comput., vol. 1, no. 1, pp. 11–33, 2004.

[2] P. Clements, F. Bachmann, L. Bass, D. Garlan, J. Ivers, R. Little, R. Nord, and J. Stafford, Documenting Software Architectures: Views and Beyond, ser. SEI Series in Soft. Eng. New York, NY, U.S.A.: Addison-Wesley, 2002.

[3] D. Conte de Leon, "Formalizing traceability among software work products," Thesis, Univ. of Idaho, Moscow, ID, U.S.A., Dec. 2002.

[4] D. Conte de Leon, "Completeness of implementation traceability for the development of high assurance and critical computing systems," Dissertation, Univ. of Idaho, Moscow, ID, U.S.A., 2006.

[5] D. Conte de Leon and J. Alves-Foss, "Hidden implementation dependencies in high assurance and critical computing systems," IEEE Trans. Softw. Eng., in press, 2006.

[6] R. I. Damper, "Emergence and levels of abstraction," Int'l Journal of Systems Science, vol. 31, no. 7, pp. 811–818, Jul. 2000.

[7] "Galois Haskell-based approach to high assurance systems," Galois Inc., Beaverton, OR, U.S.A., 2006. [Online]. Available: http://www.galois.com/

[8] A. Hall and R. Chapman, "Correctness by construction: Developing a commercial secure system," IEEE Software, vol. 19, no. 1, pp. 18–25, Jan./Feb. 2002.

[9] H. M. Hinton, "Under-specification, composition and emergent properties," in Proc. of the 1997 Workshop on New Security Paradigms (NSPW '97). New York, NY, U.S.A.: ACM Press, 1997, pp. 83–93.

[10] "Oracle® DBMS - critical access control bypass in login bug," IMPERVA®, Foster City, CA, U.S.A., Jan. 2006. [Online]. Available: http://www.securityfocus.com/bid/4033

[11] Institute of Electrical and Electronics Engineers, "Guide to the software engineering body of knowledge," IEEE, Tech. Rep. Ironman Version, Jun. 2004.

[12] Kestrel, "Specware 4.1 tutorial," Kestrel Development Corporation and Kestrel Technology LLC, Tech. Rep., 2004.

[13] "Specware®," Kestrel Institute, Palo Alto, CA, U.S.A., 2004. [Online]. Available: http://www.specware.org/

[14] J. Launchbury, "Galois: High assurance software," in Proc. of the 9th ACM SIGPLAN International Conference on Functional Programming. New York, NY, U.S.A.: ACM Press, 2004, p. 3.

[15] N. G. Leveson, M. de Villepin, M. Daouk, J. Bellingham, J. Srinivasan, N. Neogi, E. Bachelder, N. Pilon, and G. Flynn, "A safety and human-centered approach to developing new air traffic management tools," Aeronautics and Astronautics Department, MIT, and Eurocontrol Experimental Centre, Tech. Rep., Dec. 2001.

[16] N. G. Leveson, "Intent specifications: An approach to building human-centered specifications," IEEE Trans. Softw. Eng., vol. 26, no. 1, pp. 15–35, Jan. 2000.

[17] N. G. Leveson, "A systems-theoretic approach to safety in software intensive systems," IEEE Trans. Dependable and Secure Comput., vol. 1, no. 1, pp. 66–86, Jan. 2004.

[18] N. G. Leveson, System Safety Engineering: Back to the Future. Draft of book, Jun. 2006. [Online]. Available: http://sunnyday.mit.edu/book2.pdf

[19] "Praxis High Integrity Systems," Praxis High Integrity Systems Limited, Bath, U.K., 2006. [Online]. Available: http://www.praxis-his.com/

[20] J. Price, Oracle Database 10g SQL. Hightstown, NJ, U.S.A.: McGraw-Hill, 2004.

[21] "Oracle TNS listener arbitrary library call execution vulnerability," Security Focus™, Calgary, AB, Canada, Feb. 2002. [Online]. Available: http://www.securityfocus.com/bid/4033

[22] Software Engineering Institute, "Capability maturity model integration for systems engineering, software engineering, integrated product and process development, and supplier sourcing," Carnegie Mellon Univ., Pittsburgh, PA, U.S.A., Tech. Rep. CMU/SEI-2002-TR-012, Mar. 2002.

[23] "Vulnerability note VU#180147," United States Computer Emergency Readiness Team (US-CERT®), Arlington, VA, U.S.A., Feb. 2002. [Online]. Available: http://www.kb.cert.org/vuls/id/180147

[24] "Vulnerability note VU#871756," United States Computer Emergency Readiness Team (US-CERT®), Arlington, VA, U.S.A., Jan. 2006. [Online]. Available: http://www.kb.cert.org/vuls/id/871756

[25] E. N. Zalta, Ed., The Stanford Encyclopedia of Philosophy. Stanford, CA, U.S.A.: The Metaphysics Research Lab, Center for the Study of Language and Information, Mar. 2006, ch. Emergent Properties. [Online]. Available: http://plato.stanford.edu/entries/properties-emergent/
