
Critical Reviews™ in Therapeutic Drug Carrier Systems, 22(1):27–105 (2005)

Optimizing Drug Delivery Systems Using Systematic “Design of Experiments.” Part I: Fundamental Aspects

Bhupinder Singh, Rajiv Kumar, & Naveen Ahuja

Pharmaceutics Division, University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh, India

Address all correspondence to Bhupinder Singh, University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh 160 014, India; [email protected]

Referee: Dr. Gurvinder Singh Rekhi, Elan Holdings Inc., Gainesville, GA 30504, USA

ABSTRACT: Design of an impeccable drug delivery product normally encompasses multiple objectives. For decades, this task has been attempted through trial and error, supplemented with the previous experience, knowledge, and wisdom of the formulator. Optimization of a pharmaceutical formulation or process using this traditional approach involves changing one variable at a time. Using this methodology, the solution of a specific problematic formulation characteristic can certainly be achieved, but attainment of the true optimal composition is never guaranteed, and improvement in one characteristic has to be traded off against degeneration in another. This customary approach of developing a drug product or process has proved to be not only uneconomical in terms of time, money, and effort, but also unpredictable, ill-suited to fixing errors, and at times even unsuccessful.

On the other hand, the modern formulation optimization approaches, employing systematic Design of Experiments (DoE), are extensively practiced in the development of diverse kinds of drug delivery devices to improve such irregularities. Such systematic approaches are far more advantageous, because they require fewer experiments to achieve an optimum formulation, make problem tracing and rectification quite easier, reveal drug/polymer interactions, simulate the product performance, and comprehend the process to assist in better formulation development and subsequent scale-up. Optimization techniques using DoE represent effective and cost-effective analytical tools to yield the "best solution" to a particular "problem." Through quantification of drug delivery systems, these approaches provide a depth of understanding as well as an ability to explore and defend ranges for formulation factors, where experimentation is completed before optimization is attempted. The key elements of a DoE optimization methodology encompass planning the study objectives, screening of influential variables, experimental designs, postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology.

The broad topic of DoE optimization methodology is covered in two parts. Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and judicious selection of designs and models.

0743-4863/05$20.00 © 2005 by Begell House, Inc., www.begellhouse.com

KEY WORDS: artificial neural networks, computer software, drug product development, experimental design, factor screening, response surface methodology

I. INTRODUCTION

The domain of drug delivery has enabled a newer look toward drug formulation development and subsequent patient therapy. Lately, pharmaceutical scientists have made remarkable strides in the development of diverse types of newer drug delivery systems (DDS).¹-³ Development of such DDS invariably involves handling a plethora of drugs, polymers, excipients, and processes. The traditional approach of optimizing a formulation or process essentially entails studying the influence of the corresponding composition and process variables by Changing One Single (or Separate) variable or factor at a Time (COST), while keeping the others constant.⁴-⁹ The technique, at times, is also referred to as OVAT (i.e., One Variable at a Time), OFAT (i.e., One Factor at a Time), or the "shotgun" approach.⁶,¹⁰,¹¹ During these COST studies, the first variable is fixed at a favorable value, and the next is examined until no further improvement is attained in the response variable.

For decades, drug formulations have been developed by this process of trial and error.¹¹,¹² The COST approach can somehow achieve the solution of a specific problematic property, but attainment of the true optimum composition or process is never guaranteed.⁹,¹¹,¹³ This may be ascribed to the presence of interactions—i.e., the influence of one or more variable(s) on others.⁷,¹⁴ In the presence of such interactions among variables, the COST approach gets stuck, usually far from the optimum. Because there is no further improvement in the response, the experimenter may erroneously assume attainment of the optimum. The final product may be thought satisfactory but will really be suboptimal, because a better formulation still exists, although unperceived under the studied conditions.¹⁰,¹⁴-¹⁶

The prior experience, knowledge, and wisdom of the formulator have been the key factors in formulating new or customized dosage forms. Sometimes, when the developer is instinctive, skilled, and fortunate, such an unsystematic approach may yield surprisingly successful outcomes. Invariably, however, when skill, acumen, or chance are not in the developer's favor, it leads to squandering remarkable amounts of time, energy, and resources.⁴,¹⁵,¹⁶ Accordingly, the intuitive COST approach requires many experiments for little gain in information about the system under investigation. Figure 1 illustrates the case of an arbitrary DDS, depicting that the "arrived" COST optimum is quite distant from the "missed" true optimum.

A drug delivery product and process design problem is normally characterized by multiple objectives.¹¹,¹²,¹⁷ In an attempt to accomplish such objectives, a pharmaceutical scientist has to fulfill various control limits for a formulation. For a controlled release bioadhesive tablet, for instance, the dissolution rate profile and bioadhesion would be most appropriate to control.¹⁸ Because the multiple objectives of a formulation often differ, accepting a suitable trade-off or compromise between one or more properties—e.g., dissolution rate at the expense of bioadhesion—usually becomes unavoidable.⁶,¹¹,¹⁷ Thus, the primary aim of the traditional formulator has been to find that suitable trade-off under the given set of constraints rather than to design the best formulation. The imposed pressures of time, cost, resources, aesthetics, and performance benchmarks further exacerbate the situation. Therefore, the conventional COST approach of drug formulation development suffers from several pitfalls.⁴,⁶-⁸,¹²,¹⁵,¹⁹,²⁰ The most important of these are enumerated in Box 1. These drug product inconsistencies are generally due to inadequate knowledge of the underlying cause-and-effect relationship(s).⁶,⁹,¹⁹

FIGURE 1. Pictorial representation of the COST approach to designing an archetypical transdermal gel employing the optimal values of gelling polymer and penetration enhancer.

Systematic optimization techniques, on the other hand, have widely been practiced to alleviate such inconsistencies.⁷,¹¹,¹⁹,²¹-²⁶ Development of the principles behind such optimization techniques, now known as design of experiments (DoE), dates back to 1925, with its discovery by the British statistician Sir Ronald Fisher.²⁷ The implementation of DoE optimization techniques invariably encompasses the use of experimental designs and the generation of mathematical equations and graphic outcomes, thus depicting a complete picture of the variation of the product/process response(s) as a function of the input variable(s).¹⁵,²⁶,²⁸,²⁹ Employing various rational combinations of formulation variables, DoE fits experimental data into statistical equations, uses these as models to predict formulation performance, and optimizes the critical responses. In direct contrast to the COST approach, DoE optimization offers an organized methodology that connects various experiments in a rational manner, giving more precise information from fewer experiments.⁷,³⁰ Considering all the multiple variables at once, DoE demonstrates how the system works as a whole. It enables the experimenter to optimize all the critical responses and find the "triumphant" combination. DoE undertakes a simultaneous testing approach in parallel studies, which has proved to be far more effective, efficient, economical, and expedient than the "sequential" COST scheme.⁷,¹⁵ In a nutshell, the optimization techniques possess much greater benefits, because they surmount several pitfalls inherent to the traditional approaches.⁶,⁷,¹²,¹⁶,¹⁹,²²,²⁸,²⁹,³¹-³⁵ Several meritorious features of DoE vis-à-vis COST optimization are summarized in Box 2.

BOX 1. Various Limitations of the Changing One Variable at a Time (COST) Approach

Shortcomings of the COST approach:
Strenuous.
Uneconomical.
Time consuming.
Unsuitable for plugging errors.
Inapt to reveal interactions.
Isolated and unconnected studies.
Pseudo-convergent to an untrue optimum.
Results only in "just satisfactory" solutions.
Detailed study of all variables is prohibitive.
Prone to misinterpretation or faking of results.
Futile when all variables change simultaneously.
Unable to establish "cause and effect" relationships.
Ineffectual, as it leads to unnecessary runs and batches.
New product may retain defects inherent in the old one.
Irreproducible, as it infers randomly on the basis of origin.

Of late, DoE optimization techniques are becoming a regular practice globally, not only in the design and development of an assortment of new dosage forms, but also for modifying existing ones.⁸,¹⁰ Putting such rational approaches into practice, however, usually involves a great deal of mathematical and statistical intricacy. Despite its discovery in the 1920s, DoE optimization lay virtually dormant because the manual calculations it required were extremely cumbersome. It often called for the pivotal help of an apt computer interface.²⁵,³⁵-³⁷ Software that automates "designed-experiment" studies was invented in the early days of mainframe computers.³⁸ Mainframes no doubt chugged through complicated DoE equations, but they required programming skills beyond the scope of most experimenters. Nonetheless, it wasn't until those room-sized computers became desktop PCs that affordable DoE software first appeared to cater to nonstatistical experts. Today, with the availability of comprehensive DoE software, coupled with powerful and economical hardware, the erstwhile computational hiccups have been greatly simplified and streamlined.⁶,²⁸ Hence, computer use is considered almost indispensable in DoE optimization methods to take care of the numeric calculations entailed in their realization. Accordingly, the onerous task of systematic optimization of a DDS can be accomplished using a three-pronged strategy encompassing the vistas of drug delivery, DoE, and computer-aided computation. Figure 2 illustrates the synergy between them.

BOX 2. Various Meritorious Features of Systematic DoE Optimization Techniques

Advantages of systematic optimization techniques:
Require fewer experiments to achieve an optimum formulation.
Can trace and rectify a "problem" in a remarkably easier manner.
Lead to comprehensive understanding of the formulation system.
Yield the "best solution" in the presence of competing objectives.
Help in finding the "important" and "unimportant" input variables.
Test and improve "robustness" across the experimental studies.
Can change the formulation ingredients or processes independently.
Aid in determining experimental error and detecting "bad data points."
Can simulate the product or process behavior using model equation(s).
Save a significant amount of resources, viz. time, effort, materials, and cost.
Evaluate and improve the statistical significance of the proposed model(s).
Can predict the performance of formulations even without preparing them.
Detect and estimate the possible interactions and synergies among variables.
Facilitate decision-making before the next experimentation by response mapping.
Provide reasonable flexibility in experimentation to assess the product system.
Can decouple signal from background noise, enabling inherent error estimation.
Comprehend a process to aid in formulation development and ensuing scale-up.
Furnish ample information on formula behavior from a single simultaneous study.

The conduct of systematic DoE studies using computers undeniably obviates the need for an in-depth knowledge of statistical and mathematical precepts. However, comprehension of the varied concepts behind these methodologies is certainly a must for the successful conduct of optimization studies. The information on such rational techniques, however, lies scattered across different books and journals. A complete and lucid description of the variegated facets of DoE optimization is not available from a single textual source. The current article is an earnest attempt to furnish such unambiguous and illustrated information.

The vast topic of DoE optimization of drug delivery is discussed in two parts. Part I, herein, acquaints the reader with the DoE fundamentals by presenting a concise and cogent account of the vital principles and precepts of these systematic methodologies, absolutely needed to comprehend and execute the approach. Part II, appearing in a subsequent issue, will thrash out the subtler features of DoE application in designing wide-ranging products and processes, leading to the successful development of variegated DDS.

FIGURE 2. Pivotal elements for successful endeavor in optimization of drug delivery systems.


II. OPTIMIZATION: FUNDAMENTAL DoE CONCEPTS AND TERMINOLOGY

The word optimize simply means to make something as perfect, effective, or functional as possible.⁴,¹⁶ The term optimized has been used in the past to suggest that a product has been improved to accomplish the objectives of a development scientist. Today, however, the term implies that DoE and computers have been used to achieve the objective(s). With respect to drug formulations or pharmaceutical processes, optimization is the process of finding the best possible composition or operating conditions.⁴,⁶ Accordingly, optimization has been defined as the implementation of systematic approaches to achieve the best combination of product and/or process characteristics under a given set of conditions.¹⁹

II.A. Variables

Design and development of any drug formulation or pharmaceutical process invariably involves several variables.⁴,²⁵,³⁹ The input variables, which are directly under the control of the product development scientist, are known as independent variables—e.g., drug content, polymer composition, compression force, percentage of penetration enhancer, hydration volume, and agitation speed. Such variables can be either quantitative or qualitative.²⁸,⁴⁰ Quantitative variables are those that can take numeric values (e.g., time, temperature, amount of polymer, osmogent, plasticizer, or superdisintegrant) and are continuous. Instances of qualitative variables, on the other hand, include the type of polymer, lipid, excipient, or tableting machine. These are also known as categorical variables.⁶,⁴¹ Their influence can be evaluated by assigning discrete dummy values to them. The independent variables that influence the formulation characteristics or the output of the process are labeled factors.⁶,³⁴,⁴⁰ The values assigned to the factors are termed levels—e.g., 100 mg and 200 mg are the levels for the factor "release-rate-controlling polymer" in compressed matrices. Restrictions imposed on the factor levels are known as constraints.¹⁶,⁴⁰

The characteristics of the finished drug product or the in-process material are known as dependent variables—e.g., drug release profile, percent drug entrapment, pellet size distribution, and moisture uptake.⁶,²⁸,⁴² Popularly termed response variables, these are the measured properties of the system used to estimate the outcome of the experiment. Usually, these are direct function(s) of any change(s) in the independent variables.

Accordingly, a drug formulation (product), with respect to optimization techniques, can be considered as a system whose output (Y) is influenced by a set of input variables via a transfer function (T).⁷,³¹ These input variables may be either controllable (X; signal factors) or uncontrollable (U; noise factors).²⁸,⁴³ Figure 3 depicts this graphically.


The nomenclature of T depends upon the predictability of the output as an effect of the change in input variables. If the output is totally unpredictable from previous studies, T is termed a black box. The term white box is used for a system with absolutely true predictability, while the term gray box is used for moderate predictability. Using optimization methods, the formulator attempts to attain a white box, or nearly white box, status from the erstwhile black or gray box status observed in traditional studies.¹⁹ The greater the number of variables in a given system, the more complicated the job of DoE optimization becomes.³¹ Nevertheless, regardless of the number of variables, a distinct relationship exists between a given response and the factors studied.⁶,³¹

FIGURE 3. System with controllable input variables (X), uncontrollable input variables (U), transfer function (T), and output variables (Y).

II.B. Effect, Interaction, and Confounding

The magnitude of the change in response caused by varying the factor level(s) is termed an effect.³⁴,⁴⁰ The main effect is the effect of a factor averaged over all the levels of the other factors.

However, an interaction is said to occur when there is a "lack of additivity of factor effects." This implies that the effect is not directly proportional to the change in the factor levels.⁴⁰ In other words, the influence of a factor on the response is nonlinear.⁴,⁶,⁷,⁴⁴ In addition, an interaction may be said to take place when the effects of two or more factors are dependent on each other—e.g., the effect of factor A changes on changing factor B by one unit. The measured property of the interacting variables depends not only on their fundamental levels, but also on the degree of interaction between them. Depending upon whether the change in the response is desired (positive) or undesired (negative), the phenomenon of interaction may be described as synergism or antagonism, respectively.⁶,⁴⁰ Figure 4 illustrates the concept of interaction graphically.

An effects plot graphs the magnitudes of the various coefficients for the effects and/or interactions for a given response variable.⁶,³¹ The plot is drawn during the initial stages of DoE analysis to determine the influence of each term.

The term orthogonality is used if the estimated effects are due to the main factor of interest and are independent of interactions.²⁹,⁴⁰,⁴⁵,⁴⁶ Conversely, lack of orthogonality (or independence) is termed confounding or aliasing.⁴⁰,⁴⁴ When an effect is confounded (or aliased, mixed up, or equalled), one cannot assess how much of the observed effect is due to the factor under consideration, because the effect is influenced by other factors in a manner that cannot easily be explored. The measure of the degree of confounding is known as resolution.⁷,⁴⁵ At times, there is confusion between confounding and interaction. Confounding, in fact, is a bias that must be controlled by suitable selection of the design and data analysis. Interaction, on the other hand, is an inherent quality of the data, which must be explored. Confounding must be assessed qualitatively, while interaction may be tested more quantitatively.⁴⁴

FIGURE 4. Diagrammatic depiction of interaction, plotting drug dissolution against drug level (low to high) at low and high polymer levels. Parallel lines in panel (a) indicate no interaction; unparallel lines in panel (b) describe the phenomenon of interaction between the levels of drug and polymer amount affecting drug dissolution. (—): linear response–factor relationship; (·····): nonlinear response–factor relationship.
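To make these notions concrete, the minimal Python sketch below computes the main effects and the interaction contrast from a hypothetical 2² study of drug and polymer levels; all dissolution values are invented purely for illustration. A nonzero AB contrast corresponds to the nonparallel lines of Figure 4b.

```python
# Main effects and interaction from a hypothetical 2^2 factorial study.
# Factors (coded -1/+1): A = drug level, B = polymer level.
# Responses: percent drug dissolved (invented values, for illustration only).
runs = [
    # (A, B, dissolution %)
    (-1, -1, 52.0),
    (+1, -1, 78.0),
    (-1, +1, 45.0),
    (+1, +1, 49.0),
]

def effect(contrast):
    """Average response where the contrast is +1 minus where it is -1."""
    hi = [y for c, y in contrast if c > 0]
    lo = [y for c, y in contrast if c < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

main_A = effect([(a, y) for a, b, y in runs])
main_B = effect([(b, y) for a, b, y in runs])
inter_AB = effect([(a * b, y) for a, b, y in runs])  # contrast of the AB column

print(f"Main effect A:  {main_A:+.1f}")    # +15.0
print(f"Main effect B:  {main_B:+.1f}")    # -18.0
print(f"Interaction AB: {inter_AB:+.1f}")  # -11.0 -> effect of A depends on B
```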


II.C. Coding

The process of transforming a natural variable into a nondimensional coded variable, Xᵢ, such that the central value of the experimental domain is zero, is known as coding (or normalization).³⁴,⁴⁰,⁴⁷

Generally, the various levels of a factor are designated as –1, 0, and +1, representing the lowest, intermediate (central), and highest factor levels investigated, respectively.⁶,³¹,⁴⁰ For instance, if sodium carboxymethyl cellulose, a hydrophilic polymer, is studied as a factor in the range of 120–240 mg, then the codes –1 and +1 signify 120 mg and 240 mg, respectively. The code 0 represents the central point at the arithmetic mean of the two extremes—i.e., 180 mg. Alternatively, for convenience, the factors and their levels are denoted by alphabetic notation (symbols) to express the various combinations investigated in the study. For example, a factor is denoted by a capital letter (say, factor A), its high level by a, and its low level by (1). Table 1 illustrates the alphabetic denotations used in the pharmaceutical literature for coding two factors and their combinations at the respective levels.

Although the terminology for factors as A and B and their levels as (1), a, b, etc. is comprehensible in text format, its translation into mathematical equation(s) is neither practical nor easy to comprehend.¹⁹ Therefore, the symbol Xₖ is normally used to represent a factor, where the subscript k indexes the factors.²⁸,³¹ Analogously, subscripted β values are employed to denote the coefficient values in the mathematical equations.

Coding preserves the orthogonality of effects and depicts effects and interaction(s) using (+) or (–) signs.¹⁶,⁴⁰ It assigns equal significance to each axis and allows not only easier calculation of coefficients and coefficient variances, but easier depiction of response surfaces as well.
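As a minimal sketch of the coding transformation described above, using the sodium CMC example from this section (the function names are ours, not the authors'):

```python
# Coding a natural variable onto the dimensionless -1..+1 scale, as described
# above for sodium CMC studied over 120-240 mg.
def code(value, low, high):
    """Map a natural level to the coded scale: center -> 0, extremes -> +/-1."""
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (value - center) / half_range

def decode(coded, low, high):
    """Inverse transform: a coded level back to natural units."""
    return (low + high) / 2.0 + coded * (high - low) / 2.0

print(code(120, 120, 240))    # -1.0
print(code(180, 120, 240))    #  0.0 (center point)
print(code(240, 120, 240))    # +1.0
print(decode(0.5, 120, 240))  # 210.0 mg
```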

TABLE 1. Denotation of Various Levels of Two Factors

Factor combination    Low level    High level
A                     (1)          a
B                     (1)          b
AB                    (1)          ab

To circumvent any anomaly in factor sensitivity with a change in levels, it is recommended that factor coding be carried out judiciously.²⁸,⁴⁰ For instance, in the case of microsphere production, if one factor is stirring speed (say, within the range of 1500–3000 rpm) and the other is pH (say, within the range of 1–5), a change of 1 pH unit is far more significant than a change of 1 rpm.

II.D. Experimental Domain

The dimensional space defined by the coded variables is known as the factor space.⁶,²² Figure 5 illustrates the factor space for two factors on a bidimensional (2-D) plane during the formulation of controlled release microspheres.⁴⁸ The part of the factor space investigated experimentally for optimization is the experimental domain.⁶,⁴⁷ Also known as the region of interest, it is enclosed by the upper and lower levels of the variables. The factor space covers the entire figure area and extends even beyond it, whereas the experimental domain (design space) is the square enclosed by X₁ = ±1, X₂ = ±1.

FIGURE 5. Quantitative factors and factor space. The axes for the natural variables, ethyl cellulose:drug ratio and Span 80, are labeled U1 and U2, and those of the corresponding coded variables, X1 and X2.

II.E. Experimental Design

The conduct of an experiment and the subsequent interpretation of its experimental outcome are the twin essential features of the general scientific methodology.⁴,²² This can be accomplished only if the experiments are carried out in a systematic way and the inferences are drawn accordingly. An experimental design is the statistical strategy for organizing the experiments in such a manner that the required information is obtained as efficiently and precisely as possible.²⁶,²⁹,³⁴,⁴⁹ Runs or trials are the experiments conducted according to the selected experimental design.⁶,²⁸ Such DoE trials are arranged in the design space so that reliable and consistent information is attainable with minimum experimentation. The layout of the experimental runs in matrix form, according to the experimental design, is known as the design matrix.⁶,³¹ The choice of design depends upon the proposed model, the shape of the domain, and the objective of the study. Primarily, the experimental (or statistical) designs are based on the principles of randomization (i.e., the manner of allocation of treatments to the experimental units), replication (i.e., the number of units employed for each treatment), and error control or local control (i.e., the grouping of specific types of experiments to increase precision).⁷,³¹,³⁴,⁴⁷

For deriving maximal benefits from DoE, an experimenter invariably has to know, comprehend, and apply some or all of the following aspects.

1. Blocking in Experimental Designs

Often, the estimation of "effects" and "interactions" becomes complicated as a result of variability in the results caused by uncontrollable factors, commonly termed nuisance factors or extraneous factors.⁷ Although these nuisance factors may affect the measured result, they are not of primary interest. In such situations, blocks are generated in the experimental domain. Each block is a set of relatively homogeneous experimental conditions, wherein every level of the primary factor occurs the same number of times with each level of the nuisance factor.⁷,³¹,⁴⁶ These uncontrollable factors, therefore, are usually taken as the blocking factors. The technique of blocking is used to reduce or eliminate the variability transmitted by the nuisance factors. Accordingly, the analysis of the experiment focuses on the effect of varying levels of the primary factor "within each block" of the experiment. Runs are distributed over the blocks in such a way that any difference between the blocks does not bias the results for the factors of interest. This is accomplished by treating the blocking factor as another factor in the design. The inclusion of blocking factors as additional factors in the design results in the loss of estimation of some interaction terms, eventually lowering the resolution of the design. Nonetheless, the technique of blocking makes the design statistically more powerful.³¹ It allows simultaneous estimation and control of the variability stemming from the difference(s) between the blocks during optimization of a process or formulation. Blocking considerably improves the precision with which comparisons are made among the factors of interest.
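As an illustrative sketch (a generic textbook device, not a procedure from this article), one standard way to split a 2³ factorial into two blocks, say two manufacturing days, is to confound the highest-order interaction ABC with the blocks, which leaves all main effects and two-factor interactions clear of the block effect:

```python
# Blocking a 2^3 factorial into two blocks (e.g., two manufacturing days)
# by confounding the three-factor interaction ABC with blocks.
from itertools import product

for a, b, c in product((-1, +1), repeat=3):
    block = 1 if a * b * c > 0 else 2  # sign of the ABC column picks the block
    print(f"A={a:+d} B={b:+d} C={c:+d} -> block {block}")
```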


2. Resolution of Experimental Designs

One of the important features of experimental designs is their resolution—i.e., the degree to which the estimated main effects are aliased (or confounded) with the estimated two-, three-, or higher-order interactions.⁶,⁷,¹⁵,⁴⁵ In general, the resolution of a design is one more than the smallest order of interaction with which some main effect is confounded.⁴¹ For instance, if some main effects are confounded with some two-factor interactions, the resolution is III. The most prevalent design resolutions in the pharmaceutical arena are III, IV, and V.⁶ These designs imply the following:

a. Resolution III designs: the main effects are confounded (aliased) with two-factor interactions.

b. Resolution IV designs: no main effects are aliased with two-factor interactions, but two-factor interactions are aliased with each other.

c. Resolution V designs: no main effect or two-factor interaction is aliased with any other main effect or two-factor interaction, but two-factor interactions are aliased with three-factor interactions.

Orthogonal designs, in which the estimates of main effects and interactions are independent of each other, are said to possess "infinite resolution."³¹ For most practical purposes, when the number of factors is quite large in pharmaceutical product development, a resolution IV design may be adequate, while a resolution V design is an excellent choice. Resolution III designs, on the other hand, are useful in conditions where the number of factors is large and interactions among them are assumed to be negligible.

The resolution of an experimental design can be improved upon by the fold-over technique.⁷,³¹,⁴⁶,⁵⁰ The procedure involves the generation or addition of a second block of experiments in which the levels of each factor are reversed from the original block. For a resolution III design, this improves the alias structure for all the factors. Fold-over designs can be either mirror-image fold-over designs (resulting in complete dealiasing of main effects and all interactions) or alternative fold-over designs (involving the break-up of specific alias patterns).
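The sketch below, a generic illustration rather than the authors' procedure, builds a resolution III half-fraction of a 2³ design from the generator C = AB and then constructs its mirror-image fold over; running both blocks together recovers the full 2³ design and de-aliases the main effects from the two-factor interactions:

```python
# A resolution III half-fraction of a 2^3 design, built with the generator
# C = AB, followed by its mirror-image fold over (all signs reversed).
from itertools import product

base = [(a, b, a * b) for a, b in product((-1, +1), repeat=2)]  # C = AB
foldover = [tuple(-x for x in run) for run in base]             # reversed levels

print("Original block (main effects aliased with two-factor interactions):")
for run in base:
    print(run)
print("Fold-over block (de-aliases the main effects when combined):")
for run in foldover:
    print(run)
```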

3. Design Augmentation

In the whole DoE endeavor, a situation sometimes arises in which a study, conducted at some stage, is found to be inadequate and needs to be investigated further, or in which the study carried out during the initial stages needs to be "reused."¹⁵ In either situation, more design points can be added systematically to the erstwhile design. Thus, the erstwhile primitive design can be enhanced to a more advanced design furnishing more information, better reliability, and higher resolution. This process of extending a statistical design by adding rational design points is known as design augmentation.³¹,⁴¹ For instance, a design involving study at two levels can be augmented to a three-level design by adding more design points. A design can be augmented in a number of ways, such as by replicating, adding center points to two-level designs, adding axial points (i.e., design points along the axes of the experimental domain), or folding over.
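For illustration (an assumed, generic construction, not one prescribed in the text), the following sketch augments a 2² factorial with replicated center points and axial points; choosing the axial distance α as the fourth root of the number of factorial points, here √2, makes the resulting central composite design rotatable (rotatability is discussed in Section III.C):

```python
# Augmenting a two-level 2^2 factorial into a central composite design by
# adding replicated center points and axial (star) points.
from itertools import product

factorial_pts = list(product((-1.0, +1.0), repeat=2))
center_pts = [(0.0, 0.0)] * 3                 # replicated center points
alpha = len(factorial_pts) ** 0.25            # 4**0.25 = sqrt(2) -> rotatable
axial_pts = [(+alpha, 0.0), (-alpha, 0.0), (0.0, +alpha), (0.0, -alpha)]

design = factorial_pts + center_pts + axial_pts
for x1, x2 in design:
    print(f"X1 = {x1:+.3f}   X2 = {x2:+.3f}")
```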

II.F. Response Surfaces

Conduct of DoE trials, according to the chosen statistical design, yields a series of data on the response variables explored. Such data can be suitably modeled to generate mathematical relationships between the independent and dependent variables. The graphical depiction of such a mathematical relationship is known as a response surface.¹⁹,⁴⁵,⁴⁹ A response surface plot is a 3-D graphical representation of a response plotted against two independent variables. The use of 3-D response surface plots allows us to understand the behavior of the system by demonstrating the contribution of the independent variables.

The geometric illustration of a response obtained by plotting one independent variable against another, while holding the magnitude of the response and the other variables constant, is known as a contour plot.²⁸ Such contour plots represent 2-D slices of the corresponding 3-D response surfaces. The resulting curves are called contour lines. Figure 6 depicts a typical response surface and contour plot for a diffusional release exponent (proposed by Korsmeyer et al.⁵¹) as the response variable, reported for mucoadhesive compressed matrices of atenolol.⁵² For complete response depiction among k independent variables, a total of ᵏC₂ response surfaces and contour plots may be required. In other words, 1, 3, 6, or 10 3-D and 2-D plots are needed to depict each response for 2, 3, 4, or 5 variables, respectively.¹⁵,³¹
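As a hedged illustration of how such paired plots are produced (the quadratic coefficients below are invented, not those underlying Figure 6), a response surface and its contour slice can be drawn with matplotlib:

```python
# Plotting a 3-D response surface and its 2-D contour "slice" for a
# hypothetical quadratic model Y = f(X1, X2) over the coded -1..+1 domain.
import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
# Invented coefficients, for illustration only:
y = 0.73 + 0.05 * x1 - 0.03 * x2 + 0.02 * x1 * x2 - 0.04 * x1**2 + 0.01 * x2**2

fig = plt.figure(figsize=(10, 4))
ax3d = fig.add_subplot(1, 2, 1, projection="3d")
ax3d.plot_surface(x1, x2, y, cmap="viridis")   # the response surface
ax3d.set(xlabel="X1 (coded)", ylabel="X2 (coded)", zlabel="Response")

ax2d = fig.add_subplot(1, 2, 2)
cs = ax2d.contour(x1, x2, y, levels=8)         # contour lines (2-D slice)
ax2d.clabel(cs)
ax2d.set(xlabel="X1 (coded)", ylabel="X2 (coded)")
plt.tight_layout()
plt.show()
```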

II.G. Mathematical Models

The mathematical model, simply referred to as the model, is an algebraic expression defining the dependence of a response variable on the independent variable(s).⁴⁶,⁵³ Mathematical models can be either empirical or theoretical.²⁸ An empirical model provides a way to describe the factor/response relationship. It is most frequently, but not invariably, a set of polynomial equations of a given order.⁷,⁴⁶ The most commonly used linear models are shown in Eqs. (1)–(3):

Y = β₀ + β₁X₁ + β₂X₂ + … + ε  (1)

Y = β₀ + β₁X₁ + β₂X₂ + β₁₂X₁X₂ + … + ε  (2)

Y = β₀ + β₁X₁ + β₂X₂ + β₁₂X₁X₂ + β₁₁X₁² + β₂₂X₂² + … + ε  (3)

where Y represents the estimated response, sometimes also denoted as E(y). The symbols Xᵢ represent the values of the factors, and β₀, βᵢ, βᵢᵢ, and βᵢⱼ are the constants representing the intercept, the coefficients of the first-order (first-degree) terms, the coefficients of the second-order quadratic terms, and the coefficients of the second-order interaction terms, respectively. The symbol ε denotes pure error. Equations (1) and (2) are linear in their variables, representing a flat surface and a twisted plane in 3-D space, respectively. Equation (3) is a second-order model, still linear in its coefficients, that describes a twisted plane with curvature arising from the quadratic terms.
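As a minimal sketch of fitting the second-order model of Eq. (3) by ordinary least squares (the coded design points and responses below are invented for illustration):

```python
# Least-squares fit of the two-factor second-order model of Eq. (3):
# Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2
import numpy as np

# Hypothetical coded design points and measured responses (invented values):
X1 = np.array([-1, 1, -1, 1, 0, 0, 0, -1, 1])
X2 = np.array([-1, -1, 1, 1, 0, -1, 1, 0, 0])
Y = np.array([0.64, 0.74, 0.70, 0.82, 0.73, 0.69, 0.76, 0.68, 0.78])

# Model matrix: one column per term of Eq. (3).
M = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1**2, X2**2])
coeffs, *_ = np.linalg.lstsq(M, Y, rcond=None)

for name, b in zip(["b0", "b1", "b2", "b12", "b11", "b22"], coeffs):
    print(f"{name} = {b:+.4f}")

# Predict the response at an untried point, e.g., X1 = 0.5, X2 = -0.5:
x1, x2 = 0.5, -0.5
y_hat = coeffs @ np.array([1, x1, x2, x1 * x2, x1**2, x2**2])
print(f"Predicted Y at (0.5, -0.5): {y_hat:.4f}")
```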

A theoretical or mechanistic model may also exist or be proposed. It is most often a nonlinear model, for which transformation to a linear function is not usually possible.²⁸ Such theoretical relationships are, however, rarely employed in pharmaceutical product development.

FIGURE 6. (a) A typical response surface plotted between a response variable, the release exponent, and two factors, HPMC and sodium CMC, in the case of mucoadhesive compressed matrices; (b) the corresponding contour plot.

III. DRUG DELIVERY OPTIMIZATION: DoE METHODOLOGY

An experimental approach to DoE optimization of DDS comprises several phases.⁵,¹⁵,²⁸,⁵⁴ Broadly, these phases can be sequentially summed up in seven salient steps. Figure 7 delineates these steps pictographically.

FIGURE 7. Seven-step ladder for optimizing drug delivery systems.


• The optimization study begins with Step I, where an endeavor is made to ascertain the initial drug delivery objective(s) in an explicit manner. Various main response parameters, which closely and pragmatically epitomize the objective(s), are chosen for the purpose.

• In Step II, the experimenter has several potential independent product and/or process variables to choose from. By executing a set of suitable screening techniques and designs, the formulator selects the "vital few" influential factors from among the possible "so many" input variables. Following selection of these factors, a factor influence study is carried out to quantitatively estimate the main effects and interactions. Before going on to the more detailed study, experimental studies are undertaken to define the broad range of factor levels as well.

• During Step III, an apposite experimental design is worked out on the basis of the study objective(s) and the number and type of factors, factor levels, and responses being explored. Working details on the variegated vistas of the experimental designs, customarily required to implement DoE optimization of drug delivery, are elucidated in the subsequent section. Afterwards, response surface modeling (RSM) is characteristically employed to relate a response variable to the levels of input variables, and a design matrix is generated to guide the drug delivery scientist in choosing optimal formulations.

• In Step IV, the drug delivery formulations are experimentally prepared according to the approved experimental design, and the chosen responses are evaluated.

• Later, in Step V, a suitable mathematical model for the objective(s) under exploration is proposed, the experimental data thus obtained are analyzed accordingly, and the statistical significance of the proposed model is discerned. Optimal formulation compositions are searched for within the experimental domain, employing graphical or numerical techniques. This entire exercise is invariably executed with the help of pertinent computer software.

• Step VI is the penultimate phase of the optimization exercise, involving validation of the response prognostic ability of the model put forward. Drug delivery performance of some studies, taken as checkpoints, is assessed vis-à-vis that predicted using RSM, and the results are critically compared.

• Finally, during Step VII, which is carried out in the industrial milieu, the process is scaled up and set forth ultimately for the production cycle.

The niceties of the significance and execution of each of these seven steps are discussed in greater detail below.

III.A. Step I: Objective

The foremost step in executing systematic DoE methodology is to understand the deliverables of the finished product. This step is not merely confined to understanding the process performance and the product composition; it usually goes beyond them to enfold the concepts of economics, quality control, packaging, market research, etc.³¹ The term objective (also called criterion) has been used to indicate either the goal of an optimization experiment or the property of interest.¹⁶,²⁸ The objectives for an experiment should be clearly determined after discussion among project team members having sound expertise and empiricism in product development, optimization, production, and/or quality control. The group of scientists contemplates the key objectives and identifies the trivial ones. Prioritizing the objectives helps in determining the direction to proceed with regard to the selection of the factors, the responses, and the particular design.⁵,⁵⁴,⁵⁵ This step can be very time consuming and may not furnish rapid results; however, unless the objectives are accurately defined, it may become necessary to repeat all the work that follows. The response variables, selected with dexterity, should be such that they provide maximal information with minimal experimental effort and time. Such response variables are usually the performance objectives, such as the extent and rate of drug release, or are occasionally related to the visual aesthetics, such as chipping, grittiness, or mottling.¹⁵

III.B. Step II: Factor Studies

Subsequent to ascertaining the study objectives and responses, the "several possible" factors are envisioned, and screening of the "few important" ones is done. The influence of the important factors—i.e., the main effects and the possible interactions—is also studied. Collectively, screening and factor influence studies are known as factor studies.⁴ Often carried out as a prelude to finding the optimum, these are sequential stages in the development process. Screening methods are used to identify important and critical effects.⁶,⁵⁴ Factor studies aim at the quantitative determination of the effects resulting from a change in the potentially critical formulation or process parameter(s). Such factor studies usually involve statistical experimental designs, and the results so obtained provide useful leads for further response optimization studies.

1. Screening of Influential Factors

As the term suggests, screening is analogous to separating rice from rice husk, where the rice represents the group of factors with significant influence on the response, and the husk represents the remaining noninfluential factors.¹⁵ A product development scientist normally has numerous possible input variables to investigate for their impact on the response variables. During the initial stages of optimization, such input variables are explored for their influence on the outcome of the finished product to see if they are factors.⁴,⁶,⁵⁶ This process, called screening of influential variables, is a paramount step. An input variable correctly identified as a factor increases the chance of success, while an input variable that is not a factor has no consequence.²⁸ Furthermore, an input variable falsely identified as a factor unduly increases the effort and cost, while an unrecognized factor leads to an erroneous picture, and the true optimum may be missed.

Principally, screening banks upon the phenomenon of effect sparsity—i.e., only a few of the factors among the numerous envisioned ones truly explain a larger proportion of the experimental variation.⁷,³¹,⁵⁷ The factors responsible for the variability are the active or influential variables, while the others are termed inactive or less influential variables. The entire exercise aims solely at selecting the active factors and excluding the redundant variables, not at obtaining complete and exact numerical data on the system properties. Such a reduction in the number of factors becomes necessary before the pharmaceutical scientist invests the human, financial, and industrial resources in more elaborate studies.⁴,⁵⁴ This phase may be omitted if the process is known well enough from analogous studies. Even after elimination of the noninfluential variables, the number of factors may, at times, still be too large to optimize with the available resources of time, money, manpower, equipment, etc.⁴ In such cases, the more influential variables are optimized, keeping the less influential ones constant at their best levels. The number of experiments is kept as small as possible to limit the volume of work carried out during the initial stages.

a. Screening Designs

The experimental designs employed for this purpose are commonly termed screening designs.⁵⁴,⁵⁶ Screening presumes considerable approximation of the additivity of the different factors and the absence of interaction. Therefore, the primary purpose of a screening design is to identify significant main effects, rather than interaction effects. Thus, these are usually first-order designs with low resolution.²²,³¹ These designs are also sometimes termed main effects designs, orthogonal main effect plans, or simply orthogonal arrays.⁶ The number of experiments in the screening process is kept small, but it must at least equal the number of independent coefficients (P) required to be calculated, as in Eq. (4):

P = 1 + Σᵢ₌₁ᵏ (Sᵢ − 1)  (4)


where Sᵢ is the number of levels of the ith factor, when there are k factors in all.⁶ The estimators of the coefficients should be orthogonal and be estimated with the minimum possible error. In general, in order to determine the main effects independently, the number of runs should be four times the number of factors to be estimated.³¹ Experimental designs are said to be saturated if the number of runs equals the number of model terms to be estimated.³¹,⁵⁷ In cases where a larger number of factors needs to be screened, the number of runs becomes exorbitantly high. In such circumstances, supersaturated designs, which possess fewer runs than factors, are used. Supersaturated designs can be attractive for factor screening, especially when there are many factors and/or the experimental runs are expensive. A supersaturated design can examine dozens of factors using fewer than half the number of runs, though this is usually at the expense of the precision and accuracy of the information. The mathematical models normally considered for screening include the linear and interaction models already described by Eqs. (1) and (2).⁴,⁵⁴,⁵⁶ A two-level screening design can be augmented to a higher-level design by adding axial points along with center points.
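A quick worked check of Eq. (4), with hypothetical factor setups and a function name of our own choosing:

```python
# Minimum number of independent coefficients P from Eq. (4):
# P = 1 + sum over the k factors of (S_i - 1).
def min_coefficients(levels):
    """levels: number of levels S_i for each of the k factors."""
    return 1 + sum(s - 1 for s in levels)

# Hypothetical screening of five factors, each at two levels:
print(min_coefficients([2, 2, 2, 2, 2]))  # 6 -> at least 6 runs needed

# A mixed-level case: three factors at 2 levels, one at 3 levels:
print(min_coefficients([2, 2, 2, 3]))     # 6
```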

2. Factor Influence Study

Having screened the influential variables, a more comprehensive study is subsequently undertaken, with the main aim of quantifying the effects of the factors and determining the interactions, if any.⁴,⁶,⁴⁵ Herein, the studied experimental domain is less extensive, as far fewer (only the active) factors are studied. The models used for this study are neither predictive nor capable of generating a response surface. The number of levels is usually limited to two (i.e., the factors are investigated at their extreme values). However, sufficient experimentation is carried out to allow for the detection of interactions among factors.⁶,⁵³ The experimental designs used are generally of the same kind as those used for screening. The experiments conducted at this step may often be "reused" during the optimization or response modeling phase by augmenting the experimental designs with additional design points at the center or along the axes.³¹ Central points (i.e., at the intermediate level), if added at this stage, are not included in the calculation of the model equations.⁴ Nevertheless, they may prove useful in identifying curvature in the response, in allowing the reuse of the experiments at various stages, and, if replicated, in validating the reproducibility of the experimental study.

III.C. Step III: Response Surface Modeling and Experimental Designs

During this crucial stage of DoE, one or more selected experimental responses are recorded for a set of experiments carried out in a systematic way to develop a mathematical model.⁸,²¹,²⁶,³³,⁴⁷,⁵⁸,⁵⁹ These approaches comprise the postulation of an empirical mathematical model for each response that adequately represents the change in the response within the zone of interest. Rather than estimating the effects of each variable directly, response surface modeling (RSM) involves fitting the coefficients into the model equation of a particular response variable and mapping the response over the whole of the experimental domain in the form of a surface.⁶,¹⁹,²³,⁴⁶,⁵⁴

Principally, RSM is a group of statistical techniques for empirical model building and model exploitation.⁴⁶,⁵⁴ By careful design and analysis of experiments, it seeks to relate a response to a number of predictors affecting it by generating a response surface, which is the area of space defined within the upper and lower limits of the independent variables, depicting the relationship of these variables to the measured response.

Experimental designs that allow the estimation of main effects, interaction effects, and even quadratic effects, and hence provide an idea of the (local) shape of the response surface being investigated, are termed response surface designs.⁶,²⁸,⁴⁵,⁵⁹ Under some circumstances, a model involving only main effects and interactions may be appropriate to describe a response surface. Such circumstances arise when analysis of the results reveals no evidence of "pure quadratic" curvature in the response of interest—i.e., the response at the center approximately equals the average of the responses at the two extreme levels, +1 and –1.

In each part of Figure 8 (a, b, and c), the value of the response increases from the bottom of the figure to the top, and the factor settings increase from left to right.³¹ If a response behaves as in Figure 8a, the design matrix needed to quantify that behavior only has to contain factors at two levels—low and high. This model is a basic assumption of simple two-level screening or factor-influence designs. If a response behaves as in Figure 8b, the minimum number of levels required for a factor to quantify that behavior is three. Adding center points to a two-level design appears to be a logical step at this point, but the arrangement of the treatments in such a matrix may confound all the quadratic effects with each other.³¹,⁴⁵,⁴⁶ A two-level design with center points can only detect the quadratic nature of the response; it cannot estimate the individual pure quadratic effects. Generally, quadratic models are proposed for the optimization of drug delivery devices.⁴,⁶,²² Therefore, response surface designs involving studies at three or more levels are employed for DoE optimization purposes. These response surface designs are used to find improved or optimal process settings, troubleshoot process problems and weak points, and make a formulation or process more robust (i.e., less variable) against external and noncontrollable influences.³¹,⁴⁵ Relatively more complicated cubic responses (Fig. 8c) are quite infrequent in pharmaceutical practice.⁶,²²

FIGURE 8. Different types of responses as functions of factor settings: (a) linear; (b) quadratic; (c) cubic.

The prediction ability of response surface designs can be determined by the prediction variance, which is a function of the experimental variance (σ²) and the variance function (d), as described by Eq. (5):⁶,⁷,⁴⁵

var(ŷ) = dσ²  (5)

where var(ŷ) is the prediction variance. The variance function (d) further depends upon the levels of the factors and the experimental design. When the prediction variance of a response is constant in all directions at a given distance from the center point of the domain, the design is termed rotatable.⁷,⁸,³¹ Ideally, all response surface designs should possess the characteristic of rotatability—i.e., the ability of a design to be run in any direction without any change in the response prediction variance.
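Rotatability can be probed numerically. In the sketch below (a generic illustration under the standard least-squares assumption that d(x) = x′(X′X)⁻¹x for the model row x, so that var(ŷ) = dσ² as in Eq. (5)), the d values computed at different points on the unit circle coincide for a rotatable central composite design:

```python
# Checking rotatability: for a rotatable design, the variance function
# d(x) = x' (X'X)^-1 x depends only on the distance of x from the center.
import numpy as np

def model_row(x1, x2):
    """Row of the model matrix for the two-factor quadratic model, Eq. (3)."""
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

# Central composite design: 2^2 factorial + axial points at sqrt(2) + centers.
a = 2**0.5
pts = [(-1, -1), (1, -1), (-1, 1), (1, 1),
       (a, 0), (-a, 0), (0, a), (0, -a),
       (0, 0), (0, 0), (0, 0)]
X = np.array([model_row(*p) for p in pts])
XtX_inv = np.linalg.inv(X.T @ X)

# Probe points at the same radius r = 1 but in different directions:
for x1, x2 in [(1, 0), (0, 1), (0.6, 0.8)]:
    x = model_row(x1, x2)
    print(f"d at ({x1}, {x2}) = {x @ XtX_inv @ x:.4f}")  # identical values
```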

1. Experimental Designs

DoE is an efficient procedure for planning experiments in such a way that the data obtained can be analyzed to yield valid and unbiased conclusions.⁹,³⁰ An experimental design is a strategy for laying out a detailed experimental plan in advance of the conduct of the experimental studies.⁸,¹⁴,²²,²⁶ Before the selection of an experimental design, it is essential to demarcate the experimental domain within the factor space—i.e., the broad range of factor studies. To accomplish this task, a pragmatic range of the experimental domain is first embarked upon, and the levels and their number are selected so that the optimum lies within its realm.¹⁹,³¹ While selecting the levels, one must ensure that the increments between them are realistic. Excessively wide increments may miss the useful information between the levels, while a too narrow range may not yield accurate results.¹⁵


There are numerous types of experimental designs. Various commonly employed experimental designs for RSM, screening, and factor-influence studies in pharmaceutical product development are:

a. factorial designs

b. fractional factorial designs

c. Plackett–Burman designs

d. star designs

e. central composite designs

f. Box–Behnken designs

g. center of gravity designs

h. equiradial designs

i. mixture designs

j. Taguchi designs

k. optimal designs

l. Rechtschaffner designs

m. Cotter designs

For a three-factor study, an experimental design can invariably be envisaged as a "cube," with the possible combinations of the factor levels (low or high) represented at its respective corners.⁹ The cube thus can be the most appropriate representation of the experimental region being explored. Most design types discussed in the current article are, therefore, depicted pictorially using this cubic model, with experimental points at the corners, centers of faces, centers of edges, and so forth. Such depiction facilitates easier comprehension of the various designs and comparisons among them. For designs in which more than three factors are adjusted, the same concept applies, except that a hypercube represents the experimental region. Such cubic designs are popular because they are symmetrical and straightforward for conceptualizing and envisioning the model.

a. Factorial Designs

Factorial designs (FDs) are very frequently used response surface designs.⁸,⁴⁰,⁶⁰ A factorial experiment is one in which all levels of a given factor are combined with all levels of every other factor in the experiment.⁶,⁴⁰,⁶¹ These designs are generally based upon first-degree mathematical models. Full FDs involve studying the effect of all the factors (k) at various levels (x), including the interactions among them, with the total number of experiments being xᵏ. FDs can be investigated at either two levels (2ᵏ FD) or more than two levels. If the number of levels is the same for each factor in the optimization study, the FDs are said to be symmetric, whereas in cases of a different number of levels for different factors, FDs are termed asymmetric.⁶

• 2ᵏ factorial designs. The two-level FDs are the simplest form of orthogonal design, commonly employed for screening and factor-influence studies.³¹,⁴⁰,⁴⁷ They involve the study of k factors at two levels only—i.e., at high (+) and low (–) levels. The simplest FD involves the investigation of two factors at two levels only. Characteristically, these designs represent first-order models with linear response, as demonstrated in Figure 8a. Figure 9 portrays 2² and 2³ FDs, in which each point represents an individual experiment.

The design matrix for a two-level full factorial with k factors in the standard order can be generated in the following manner. The first column (X₁) starts with –1 and alternates in sign for all 2ᵏ runs.³¹ The second column (X₂) starts with two repeats of –1, then alternates in pairs of opposite sign until all 2ᵏ places are filled. The third column (X₃) starts with four repeats of –1, followed by four repeats of +1, and so on. In general, the ith column (Xᵢ) starts with 2ⁱ⁻¹ repeats of –1 followed by 2ⁱ⁻¹ repeats of +1. Table 2 illustrates a simple design matrix layout of a 2³ FD. The + or – signs in the columns AB, AC, BC, and ABC are generated by multiplication of the corresponding levels of the various factors. The design in the given instance has been employed for optimization of pellets, to study the effect of three process factors—the rotor speed (A), the amount of water sprayed (B), and the atomizing air pressure (C)—at two levels each on the geometric mean diameter (dg) and geometric size distribution (σg) of the pellets.⁶²

FIGURE 9. Diagrammatic representation of (a) 2² factorial design; (b) 2³ factorial design.

The mathematical model associated with the design consists of the main effects of each variable plus all the possible interaction effects—i.e., interactions between two variables and, in fact, between as many factors as there are in the model.⁴,⁶,⁴⁶ Equation (6) is the general mathematical relationship for the FDs involving main-effect and interaction terms:

$$Y = \beta_0 + \sum_{i=1}^{n} \beta_i X_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} \beta_{ij} X_i X_j + \sum_{i=1}^{n}\sum_{j=i+1}^{n}\sum_{k=j+1}^{n} \beta_{ijk} X_i X_j X_k \tag{6}$$

where n is the number of factors (3 in the above instance), X is +1 or –1 as per the coding, Y is the measured response, and βᵢ, βᵢⱼ, and βᵢⱼₖ represent the coefficients computed from the responses of the formulations in the design. For a 2³ FD, the above equation can be written as Eq. (7).

TABLE 2. Design Matrix for a 2³ FD in Standard Order, Along with the Corresponding Interactions and Responses*

| Exp. run | A (X₁) | B (X₂) | C (X₃) | AB (X₁X₂) | AC (X₁X₃) | BC (X₂X₃) | ABC (X₁X₂X₃) | dg (µm) | σg (µm) |
|---|---|---|---|---|---|---|---|---|---|
| (1) | –1 | –1 | –1 | + | + | + | – | 1150.7 | 224.2 |
| a | 1 | –1 | –1 | – | – | + | + | 303.0 | 42.4 |
| b | –1 | 1 | –1 | – | + | – | + | 1054.5 | 222.8 |
| ab | 1 | 1 | –1 | + | – | – | – | 507.2 | 116.0 |
| c | –1 | –1 | 1 | + | – | – | + | 326.7 | 56.0 |
| ac | 1 | –1 | 1 | – | + | – | – | 463.5 | 97.6 |
| bc | –1 | 1 | 1 | – | – | + | – | 792.9 | 135.6 |
| abc | 1 | 1 | 1 | + | + | + | + | 1252.7 | 208.4 |

* Data taken from Korakianiti et al.⁶²


$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3 + \beta_{123} X_1 X_2 X_3 \tag{7}$$

Center points can be added to 2ᵏ FDs to allow identification of curvature in the response and, upon replication, to validate the reproducibility of the experimental study.⁶⁰ Figure 10 shows the cubic model for a 2³ FD with an added center point.

• Higher-level factorial designs. FDs at three or more levels are employed mainly for response surface optimization.²⁸,⁴⁵,⁶⁰ Simple to generate, these designs can detect and estimate nonlinear or quadratic effects. The main strength of the design is its orthogonality, which allows independent estimation of the main effects and interactions.³¹,⁴⁰,⁶⁰ On the other hand, the major limitation associated with higher-level FDs is the rapid increase in the number of experiments required as the number of factors (k) rises. Even at a modest number of factors, the number of runs is quite large. For instance, the absolute minimum number of runs required to estimate all the terms present in a four-factor, three-level quadratic model is 15 (the intercept term, four main effects, six two-factor interactions, and four quadratic terms), whereas the corresponding 3ᵏ FD for k = 4 requires 81 runs. Another disadvantage of xᵏ FDs is their lack of rotatability.⁷,³¹ Table 3 illustrates the design matrix for a three-level FD for buccoadhesive compressed matrices of diltiazem hydrochloride, involving two factors—Carbopol (X₁) and HPMC K100 LV (X₂).¹⁸ The response parameters studied encompass bioadhesive strength (F), release up to 10 hours (Rel₁₀ₕ), and the time taken for 50% of drug release (t₅₀%).

FIGURE 10. Diagrammatic representation of a 2³ factorial design with an added center point.


b. Fractional Factorial Designs

In a full FD, as the number of factors or factor levels increases, the number of required experiments exceeds manageable levels. Also, with a large number of factors, it is possible that the highest-order interactions have no significant effect. In such cases, the number of experiments can be reduced in a systematic way, and the resulting designs are called fractional factorial designs (FFDs), or sometimes partial factorial designs.²⁸,³⁴,⁴⁷,⁶⁰ An FFD is a finite fraction (1/xʳ) of a complete or "full" FD, where r is the degree of fractionation and xᵏ⁻ʳ is the total number of experiments required. Although these designs are economical in terms of the number of experiments, the ability to distinguish some of the factor effects is partly sacrificed by the reduction in the number of experiments. In other words, the effects in an FFD can no longer be uniquely estimated.³¹,⁶⁰ The FFDs therefore often possess lower resolution than their full factorial counterparts, because they require fewer experiments and consequently provide fewer data. The degree of fractionation should be chosen appropriately, on the basis of the resources available and the design resolution desired.⁶ It should not be too large, because this may lead to confounding of factor effects not only with the interactions, but also with other factor effects. Properly chosen FFDs for two-level experiments, however, have the desirable properties of being both balanced and orthogonal.³¹,⁶⁰

TABLE 3. A 3² Full Factorial Design Layout Along with the Studied Responses*

| Trial no. | X₁ | X₂ | F (g) | Rel₁₀ₕ (%) | t₅₀% (h) |
|---|---|---|---|---|---|
| 1 | –1 | –1 | 6.66 | 80.66 | 4.99 |
| 2 | –1 | 0 | 11.80 | 72.50 | 7.43 |
| 3 | –1 | 1 | 14.11 | 67.01 | 8.28 |
| 4 | 0 | –1 | 9.09 | 68.21 | 7.38 |
| 5 | 0 | 0 | 15.55 | 59.37 | 8.88 |
| 6 | 0 | 1 | 23.50 | 50.59 | 9.71 |
| 7 | 1 | –1 | 16.32 | 57.10 | 8.35 |
| 8 | 1 | 0 | 19.43 | 47.98 | 10.23 |
| 9 | 1 | 1 | 28.16 | 35.94 | 12.03 |

* Data taken from Singh & Ahuja¹⁸


For a two-level, three-factor design, a full FD requires 2³—i.e., eight—experiments, from which seven effects are determined. Of these seven effects, three are main effects, and the other four are due to the interactions among the three factors. An FFD with r = 1, on the other hand, requires only 2³⁻¹—i.e., four—experiments, and a total of three effects is estimated. However, these three effects are the combined effects of factors and interactions. Table 4 depicts a one-half replicate of a 2³ FD.

From Table 4, the aliases (i.e., the confounded effects) can be defined. An effect is defined by the signs in the corresponding columns—e.g., the effect of A is (a – b – c + abc), which is exactly equal to that of BC. Therefore, BC and A are aliases—i.e., confounded. Also, C = AB and B = AC. Thus, in this design, the main effects are confounded with the interactions of the other two factors.²⁸,⁴⁰

TABLE 4. One-Half Replicate of a 3-Factor, 2-Level Factorial Design

| Experiment | A | B | C = AB | AC | BC |
|---|---|---|---|---|---|
| a | + | – | – | – | + |
| b | – | + | – | + | – |
| c | – | – | + | – | – |
| abc | + | + | + | + | + |

FIGURE 11. Diagrammatic representation of (a) a 2³⁻¹ fractional factorial design with design points as spheres at the corners of the cubic model; (b) a 2³⁻¹ fractional factorial design with an added center point.


This implies that the main effects cannot be clearly interpreted if the interactions present are significant. Figure 11 depicts an FFD graphically as a cube, with its corners represented by spheres depicting the experiments studied.

Low-resolution FFDs (mainly resolution III, or occasionally resolution IV) are routinely employed for screening purposes.⁴,⁵⁶ They are efficient in determining the "main effects," where interactions are assumed to be negligible. On the other hand, high-resolution FFDs, such as resolution V or higher, are used to determine both the main effects and the interactions. Hence, such high-resolution FFDs can be used not only for factor influence studies but also for drug delivery product or process optimization.⁷,⁶³,⁶⁴ The resolution of FFDs can also be improved upon by fold-over methods.³¹ High-resolution designs, such as resolution V FFDs, can also be augmented to second-order response surface designs.

c. Plackett–Burman Designs

Plackett–Burman designs (PBDs) are special two-level FFDs, used generally for screening k (= N – 1) factors, where N, the number of runs, is a multiple of 4.⁶⁵ Also known as Hadamard designs or symmetrically reduced 2ᵏ⁻ʳ FDs, these designs can easily be constructed employing a minimum number of trials.⁴,⁶,⁶⁵⁻⁶⁸ For instance, a 30-factor study can be accomplished using only 32 experimental runs. In most cases, the first line of the design is given, and the remaining lines are obtained by permutation, except for the last line, which consists entirely of minus signs. Table 5 presents the first lines of PBDs for numbers of experiments (N) ranging between 4 and 24.

The second row is generated from the first by moving each element of the row one position to the right and placing the last element in the first position. The third row is produced from the second in an analogous manner, and the process is continued until the kth line is reached. Because these designs cannot be represented as cubes, they are sometimes called nongeometric designs.²⁸,³⁴ Table 6 presents the PBD layout for eight experiments.

TABLE 5. First Lines of Plackett–Burman Designs for N Experiments

| N | First line of the design |
|---|---|
| 4 | + + – |
| 8 | + + + – + – – |
| 12 | + + – + + + – – – + – |
| 16 | + + + + – + – + – – + – – – |
| 20 | + + – – + + – + – + – + – – – – + + – |
| 24 | + + + + + – + – + + – – + + – – + – + – – – – |



In Plackett–Burman designs, the main effects are orthogonal, and two-factor interactions are only partially confounded with the main effects.³¹ This differs from resolution III FFDs, in which two-factor interactions are indistinguishable from main effects. PBDs are quite favorably employed during the screening process.⁵⁴,⁶⁷,⁶⁹

TABLE 6. A Plackett–Burman Design Layout for Eight Experiments and Seven Factors

| Experiment run | X₁ | X₂ | X₃ | X₄ | X₅ | X₆ | X₇ |
|---|---|---|---|---|---|---|---|
| 1 | +1 | +1 | +1 | –1 | +1 | –1 | –1 |
| 2 | –1 | +1 | +1 | +1 | –1 | +1 | –1 |
| 3 | –1 | –1 | +1 | +1 | +1 | –1 | +1 |
| 4 | +1 | –1 | –1 | +1 | +1 | +1 | –1 |
| 5 | –1 | +1 | –1 | –1 | +1 | +1 | +1 |
| 6 | +1 | –1 | +1 | –1 | –1 | +1 | +1 |
| 7 | +1 | +1 | –1 | +1 | –1 | –1 | +1 |
| 8 | –1 | –1 | –1 | –1 | –1 | –1 | –1 |

d. Star Designs

Because FDs do not allow detection of curvature unless more than two levels of a factor are chosen, a star design can be used to alleviate the problem and provide a simple way to fit a quadratic model.⁸,²⁸ The number of required experiments in a star design is given by 2k + 1. A central experimental point is located, from which the other factor combinations are generated by moving the same positive and negative distance (the step size, α). For two factors, the star design is simply a 2² FD rotated over 45° with an additional center point (Fig. 12). The design is invariably orthogonal and rotatable.

e. Central Composite Designs

For nonlinear responses requiring second-order models, central composite designs (CCDs) are the most frequently employed.¹⁶,³³,⁵⁴ Also known as the Box–Wilson design, the "composite design" contains an embedded 2ᵏ FD (or 2ᵏ⁻ʳ FFD), augmented with a group of 2k star points and a central point.⁷⁰ The star points allow estimation of curvature and establish new extremes for the low and high settings of all the factors. Hence, CCDs are second-order designs that effectively combine the advantageous features of both the FDs (or FFDs) and the star design. The total number of factor combinations in a CCD is given by 2ᵏ + 2k + 1.

If the distance from the center of the design space to a factorial point is ±1 unit for each factor, the distance from the center of the design space to a star (axial) point is ±α, with |α| > 1. The precise value of α depends on certain properties desired for the design and on the number of factors involved.⁸,²⁸,³¹,⁷⁰ The axial points for two-factor problems are (±α, 0) and (0, ±α). A two-factor CCD is identical to a 3² FD with a rectangular experimental domain at α = ±1, as shown in Figure 13a. On the other hand, the experimental domain is spherical in shape for α = √2 ≈ 1.414, as shown in Figure 13b. The CCD is quite popular in response surface optimization during pharmaceutical product development.

A face-centered cube design (FCCD) results when both the factorial and star points in a CCD possess the same positive and negative distance from the center.²⁸ A rotatable CCD (RCCD) is identical to the FCCD except that the points defined for the star design are changed to [±(2ᵏ)^(1/4), 0, …, 0], while those generated by the FD remain unchanged. In this way, the design generates information equally well in all directions—i.e., the variance of the estimated response is the same at all points on a sphere centered at the origin.³¹ Furthermore, depending upon the type of domain and the α value, the RCCD can be either circumscribed (CCC) or inscribed (CCI). Table 7 gives an account of the salient aspects of the various types of CCDs. Figure 14 depicts the designed experiments carried out using the various types of CCDs.

FIGURE 12. Diagrammatic representation of a star design with an additional center point, derived from the factorial design by rotation over 45°.

TABLE 7. Various Types of Composite Designs and Their Salient Features

| CCD type | Notation | Salient features |
|---|---|---|
| Face centered | FCCD | The star points are at the center of each face of the factorial space, i.e., α = ±1. It requires three levels of each factor. Augmenting an existing FD or FFD (resolution V) with appropriate star points can also produce this design. |
| Rotatable, circumscribed | CCC | These designs are the original form of the central composite design. The star points are at some distance (α) from the center, based on the properties desired for the design and the number of factors in the design. These designs have circular, spherical, or hyperspherical symmetry and require five levels for each factor. Augmenting an existing FD or FFD (resolution V) with star points can also produce this design. |
| Rotatable, inscribed | CCI | For situations in which the limits specified for the factor settings are truly limits, this design uses the factor settings as the star points and creates an FD or FFD within those limits. In other words, a CCI design is a scaled-down CCC design, with each factor level of the CCC design divided by α. This design also requires five levels of each factor. |

FIGURE 13. Diagrammatic representation of (a) a central composite design (rectangular domain) with α = 1; (b) a central composite design (spherical domain) with α = 1.414.

To maintain rotatability, the value of α depends on the number of experimental runs in the factorial portion (the factorial or FFD design space) of the CCD, as given by Eq. (8):³¹,⁷⁰

$$\alpha = \left[\text{number of factorial runs}\right]^{1/4} \tag{8}$$

If the factorial space is generated from a full factorial, this simplifies to Eq. (9):

$$\alpha = \left(2^{k}\right)^{1/4} \tag{9}$$

Table 8 illustrates some typical values of α, as a function of the number of factors, for maintaining the rotatability of CCDs.

TABLE 8. Determination of the α Value for Maintaining the Rotatability of a Central Composite Design for Different Numbers of Factors

| Number of factors | Factorial portion | Scaled value of α relative to ±1 |
|---|---|---|
| 2 | 2² | 2^(2/4) = 1.414 |
| 3 | 2³ | 2^(3/4) = 1.682 |
| 4 | 2⁴ | 2^(4/4) = 2.000 |
| 5 | 2⁵⁻¹ | 2^(4/4) = 2.000 |
| 5 | 2⁵ | 2^(5/4) = 2.378 |
| 6 | 2⁶⁻¹ | 2^(5/4) = 2.378 |
| 6 | 2⁶ | 2^(6/4) = 2.828 |
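As an added illustration of Eqs. (8) and (9) and Table 8 (not part of the original article), the coded points of a circumscribed rotatable CCD can be assembled as follows; the α printed for k = 3 matches the 1.682 of Table 8:

```python
from itertools import product

def ccd_points(k):
    """Coded points of a circumscribed rotatable CCD: 2**k factorial
    corners, 2k axial (star) points at +/-alpha, and one center point."""
    alpha = (2 ** k) ** 0.25                       # Eq. (9)
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    return corners + axial + [[0.0] * k]

pts = ccd_points(3)
print(len(pts), round((2 ** 3) ** 0.25, 3))        # 15 points, alpha = 1.682
```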

The second-order polynomial generally used for the composite designs is given as Eq. (10):

$$Y = \beta_0 + \sum_{i=1}^{n} \beta_i X_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} \beta_{ij} X_i X_j + \sum_{i=1}^{n} \beta_{ii} X_i^{2} \tag{10}$$

The values of βᵢ, βᵢⱼ, and βᵢᵢ represent the coefficients for the main-effect, interaction, and second-order terms, respectively. The composite designs normally involve the investigation of each factor X at five levels—i.e., one central point (0 level), two factorial points (±1 levels), and two axial star points (±α levels). In the case of an FCCD, however, the number of levels is kept at three for each factor. Upon expansion for a two-factor study, the above second-order equation transforms to Eq. (3).

FIGURE 14. Diagrammatic representation of (a) a face-centered central composite design; (b) a circumscribed rotatable central composite design; (c) an inscribed rotatable central composite design.
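As a worked illustration (added here, not part of the cited studies), the expanded two-factor second-order model can be fitted by ordinary least squares to the 3² layout of Table 3, using the bioadhesive strength F as the response:

```python
import numpy as np

# Fit Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1**2 + b22*X2**2
# to the coded trials and F (g) values reproduced in Table 3.
levels = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1),
          (1, -1), (1, 0), (1, 1)]
F = np.array([6.66, 11.80, 14.11, 9.09, 15.55, 23.50, 16.32, 19.43, 28.16])

X = np.array([[1, x1, x2, x1 * x2, x1 ** 2, x2 ** 2] for x1, x2 in levels])
beta, *_ = np.linalg.lstsq(X, F, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], beta.round(3))))
```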

f. Box–Behnken Designs

A specially made design, the Box–Behnken design (BBD) requires only three levels for each factor—i.e., –1, 0, and +1.⁷¹ A BBD is an economical alternative to the CCD.⁴,⁵⁴,⁷¹⁻⁷³ It overcomes the inherent pitfall of the CCDs, wherein each factor has to be studied at five levels, consequently escalating the number of experiments with the rise in the number of factors. Table 9 gives a comparative account of the number of runs required by CCDs and BBDs for a given number of factors.

TABLE 9. Number of Experimental Runs Required by Central Composite and Box–Behnken Designs

| Number of factors | Central composite design | Box–Behnken design |
|---|---|---|
| 2 | 13 (5 center-point runs) | — |
| 3 | 20 (6 center-point runs) | 15 |
| 4 | 30 (6 center-point runs) | 27 |
| 5 | 33 (fractional factorial) or 52 (full factorial) | 46 |
| 6 | 54 (fractional factorial) or 91 (full factorial) | 54 |

This is an independent quadratic design, in that it does not contain an embedded FD or FFD. Although the BBD is also called an orthogonal balanced incomplete block design, it has limited capability for orthogonal blocking in comparison to the CCDs. The design is rotatable (or nearly rotatable), and the treatment combinations are located at the midpoints of the edges and at the center of the experimental domain, as portrayed in Figure 15.³¹,⁷¹

FIGURE 15. Diagrammatic representation of a Box–Behnken design for three factors.

The BBDs are also popularly used for response surface optimization of drug delivery systems.⁴,²³,⁷²⁻⁷⁷ Table 10 lists a documented instance of a BBD layout for preparing sustained release pellets, employing 15 experiments with three factors at three levels each;⁷⁸ a minimal construction of the three-factor point set is sketched after the table.

TABLE 10. Design Layout According to a Box–Behnken Design*

| Run | X₁ | X₂ | X₃ | Y₁ | Y₂ | Y₃ |
|---|---|---|---|---|---|---|
| 1 | 30 | 6/1 | 500 | 20.0 ± 0.8 | 27.5 ± 1.1 | 38.0 ± 1.2 |
| 2 | 30 | 2/1 | 500 | 33.0 ± 1.0 | 45.4 ± 1.1 | 65.2 ± 1.1 |
| 3 | 10 | 6/1 | 500 | 42.4 ± 0.9 | 58.7 ± 1.5 | 80.5 ± 0.9 |
| 4 | 10 | 2/1 | 500 | 66.1 ± 1.3 | 85.6 ± 1.2 | 94.1 ± 2.0 |
| 5 | 30 | 4/1 | 700 | 15.4 ± 1.1 | 21.1 ± 1.6 | 29.5 ± 1.1 |
| 6 | 30 | 4/1 | 300 | 53.9 ± 1.4 | 71.7 ± 1.8 | 85.1 ± 1.0 |
| 7 | 10 | 4/1 | 700 | 32.9 ± 0.8 | 46.5 ± 1.3 | 68.5 ± 1.2 |
| 8 | 10 | 4/1 | 300 | 82.4 ± 2.0 | 91.0 ± 2.0 | 93.8 ± 2.0 |
| 9 | 20 | 6/1 | 700 | 10.8 ± 1.0 | 14.8 ± 1.1 | 20.3 ± 1.3 |
| 10 | 20 | 6/1 | 300 | 47.4 ± 1.1 | 62.7 ± 1.3 | 80.3 ± 1.5 |
| 11 | 20 | 2/1 | 700 | 22.1 ± 1.2 | 30.4 ± 1.2 | 42.8 ± 1.7 |
| 12 | 20 | 2/1 | 300 | 75.3 ± 0.9 | 87.1 ± 2.0 | 94.0 ± 1.9 |
| 13 | 20 | 4/1 | 500 | 26.5 ± 1.0 | 36.4 ± 1.5 | 50.8 ± 1.3 |
| 14 | 20 | 4/1 | 500 | 24.0 ± 1.5 | 32.9 ± 2.0 | 47.0 ± 2.0 |
| 15 | 20 | 4/1 | 500 | 25.0 ± 1.2 | 34.9 ± 1.5 | 49.3 ± 2.0 |

Factors (levels –1, 0, +1)—X₁: plasticizer concentration (%), 10/20/30; X₂: polymer ratio (Eudragit RS/Eudragit RL), 2/1, 4/1, 6/1; X₃: quantity of coating dispersion (g), 300/500/700. Response variables—Y₁, Y₂, Y₃: cumulative percent drug release after 3, 4, and 6 h, respectively.

* Data taken from Kramar et al.⁷⁸
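The midpoints-of-edges construction can be sketched in a few lines of Python (an added illustration; for k = 3 it yields the 12 edge points plus one center, which Table 10 replicates to give 15 runs):

```python
from itertools import combinations, product

def box_behnken(k):
    """Coded Box-Behnken points: +/-1 combinations for each factor pair
    (midpoints of edges), all other factors at 0, plus a center point."""
    points = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            points.append(tuple(pt))
    points.append(tuple([0] * k))
    return points

print(len(box_behnken(3)))   # 12 edge points + 1 center = 13 distinct points
                             # (the center is replicated in practice)
```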


g. Center of Gravity Designs

Outlined by Podczeck,⁷⁹ these designs are modifications of the CCD. Retaining the advantages of the CCD, they further reduce the total number of experiments to 4k + 1. The experiments start with a midpoint, which usually lies in the factorial region. From this midpoint (i.e., the center of gravity), at least four points are chosen on each coordinate axis in such a way that the resulting geometric space becomes as large as possible. Despite the broader geometric space, the designs include only meaningful experiments. Such designs have been employed quite frequently to optimize various DDS.²¹,⁸⁰,⁸¹

h. Equiradial Designs

Equiradial designs are first-degree response surface designs, consisting of N points on a circle around the center of interest in the form of a regular polygon.⁶,⁴¹ The designs can be rotated by any angle without any loss in their properties.


For six experiments, the design has a pentagonal shape, with five design points on the circumference of a circle and one at the center.


The hexagonal equiradial design for two factors is popularly known as the Doehlert design. Also known as the uniform shell design, it is characterized by a uniform distribution of the experimental points on the surface of a hypersphere, thus providing a good basis for interpolation.⁶,⁸,²⁸,⁸² The total number of experiments is given by k² + k + 1. For two factors, for instance, a minimum of seven experiments is proposed, in a regular hexagonal shape with a central point. Each factor is analyzed at a different number of levels.⁸²,⁸³ The design may be extended in any direction, including the possibility of adding further factors without any adverse effect on the quality of the design. Lately, this design has been recommended by several authors for pharmaceutical formulation development.⁶,⁸,²⁸,⁸³

Figure 16 explicitly illustrates several important cases of equiradial designs.

FIGURE 16. Diagrammatic representation of two-factor equiradial designs: (a) triangular four-run design; (b) square five-run design; (c) pentagonal six-run design; (d) Doehlert hexagonal seven-run design.


i. Mixture Designs

In FDs, CCDs, BBDs, etc., all the factors under consideration can simultaneously be varied and evaluated at all levels. This may not be possible in many situations. Particularly in DDS with multiple excipients, the characteristics of the finished product usually depend not so much on the quantity of each substance present as on their proportions.⁴,¹³ Here, the sum total of the proportions of all the excipients is unity, and none of the fractions can be negative. Therefore, the levels of the different components can be varied only with the restriction that their sum must not exceed one.⁸⁴ Mixture designs are highly recommended in such cases.¹³,⁸⁵⁻⁸⁷ In a two-component mixture, only one factor level can be independently varied, while in a three-component mixture, only two factor levels can be independently varied, and so on; the remaining factor level is chosen to complete the sum to unity. Hence, mixture designs have often been described as the experimental designs of choice for formulation optimization.⁴,¹²,¹³,³² For process optimization, however, designs such as FDs and CCDs are preferentially employed.

The fact that the proportions of the different factors must sum to 100% complicates both the design and the analysis of mixture experiments. There are two types of mixture designs—standard mixture designs and constrained mixture designs.⁶,³¹,⁵⁴ If the experimental region is a simplex, standard mixture designs are used. A simplex is the simplest possible n-sided figure in an (n – 1)-dimensional space:¹⁶,²⁸,³³ it is represented as a straight line for two components, as a 2-D triangle for three components, as a 3-D tetrahedron for four components, and so on. If the mixture components are subject to the constraint that they must sum to one, then standard mixture designs for fitting standard models are used. The most popular standard mixture designs are the simplex mixture designs (SMDs), also known as Scheffé's designs.²⁸,⁸⁸ These can be either centroid or lattice designs; the two are identical for first- and second-order models but differ from the third order onwards. Herein, the design points are uniformly distributed over the factor space and form a lattice. The design point layout for three factors using the various models is shown in Figure 17, where each point refers to an individual experiment.

Scheffé's polynomial equations are used for estimating the effects. General mathematical models for a total of three components, X₁, X₂, and X₃, are given as:

Linear:

$$Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 \tag{11}$$

Quadratic:

$$Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3 \tag{12}$$

Special cubic:

$$Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3 + \beta_{123} X_1 X_2 X_3 \tag{13}$$

FIGURE 17. Diagrammatic representation of simplex mixture designs: (a) linear model; (b) quadratic model; (c) special cubic model.

where the βᵢ in each of Eqs. (11)–(13) represent the coefficients of the respective terms. Because a change in one fraction of a mixture implies a change in another, there are no pure quadratic (squared) terms in Scheffé's polynomial equations. Because there are also no intercept terms in these polynomials, standard linear regression analysis of the data cannot be performed, and special regression algorithms are required for calculating the model equations.¹³,²⁸,⁸⁸ For screening purposes, first-order linear mixture models are used, involving the axial design points in the experimental domain.²⁹ Table 11 shows the design matrix for a simplex lattice design generated for optimizing the dissolution enhancement of an insoluble drug (prednisone) with physical mixtures of superdisintegrants.⁸⁹

TABLE 11. Design Layout for a Simplex Lattice Design*

| Formulation | X₁ | X₂ | X₃ | Percent drug dissolved in 10 min |
|---|---|---|---|---|
| 1 | 1 | 0 | 0 | 15.2 |
| 2 | 0 | 1 | 0 | 2.8 |
| 3 | 0 | 0 | 1 | 23.1 |
| 4 | 0.5 | 0.5 | 0 | 55.3 |
| 5 | 0.5 | 0 | 0.5 | 59.5 |
| 6 | 0 | 0.5 | 0.5 | 20.6 |
| 7 | 0.33 | 0.33 | 0.33 | 82.4 |
| 8 | 0.667 | 0.167 | 0.167 | 44.7 |
| 9 | 0.167 | 0.667 | 0.167 | 45.5 |
| 10 | 0.167 | 0.167 | 0.667 | 71.6 |

X₁: croscarmellose sodium; X₂: dicalcium phosphate dihydrate; X₃: anhydrous β-lactose.

* Data taken from Ferrari et al.⁸⁹
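Because the Scheffé polynomials lack an intercept, the fit reduces to least squares on a model matrix containing only the mixture terms. A minimal sketch (an added illustration) fitting the special cubic model of Eq. (13) to the Table 11 data:

```python
import numpy as np

# Component proportions and responses reproduced from Table 11.
comp = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [0.33, 0.33, 0.33],
    [0.667, 0.167, 0.167], [0.167, 0.667, 0.167], [0.167, 0.167, 0.667],
])
y = np.array([15.2, 2.8, 23.1, 55.3, 59.5, 20.6, 82.4, 44.7, 45.5, 71.6])

x1, x2, x3 = comp.T
# Special cubic model matrix: no intercept column (Eq. 13).
X = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(1))   # b1, b2, b3, b12, b13, b23, b123
```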

When some or all of the mixture components are subject to additional constraints, such as a maximum (upper bound) and/or a minimum (lower bound) value for each component, constrained mixture designs are preferred to standard mixture designs.⁶,⁸⁵,⁹⁰ The extreme vertices design is the most widely used example of a constrained mixture design.⁷,³⁴,⁹¹ It is recommended when the factor space is restricted, usually on both the upper and lower limits of the factor levels. For instance, in a study involving formulation of a controlled release tablet by direct compression, use of lubricant in an amount less than 0.2% w/w is useless, and more than 2% w/w is meaningless.¹⁵ In such designs, the observations are made at the corners of the bounded design space, at the middles of the edges, and at the center of the design space, and the data can be evaluated only by regression.


j. Taguchi Designs

Each industrial development system is subject to natural variability over which one has little or no control. Such variability arises from a number of possible causes, such as materials, operators, processes, suppliers, and environmental changes. To develop products or processes that are robust amidst such natural variability, Genichi Taguchi, a Japanese engineer and quality consultant, proposed several experimental design approaches in the mid-1980s.⁴³,⁹² These Taguchi methods have, of late, become globally popular in industrial experimentation. Taguchi refers to experimental design as "off-line quality control," because it is a method of ensuring good performance in the development of products or processes.⁹² The goal of these robust designs is to partition the system variability according to its various sources and to find the control factor settings that generate acceptable responses.⁴,²⁹,³¹

The unique aspects of this approach are the use of signal (or control, or design) factors and noise (or uncontrollable) factors. Signal factors are the system control inputs; noise factors are typically too difficult or too expensive to control. The design employs two orthogonal arrays—i.e., tabulated designs. The signal (or control) factors, used to fine-tune the process, form the inner array. The noise factors, associated with process or environmental variability, form the outer array.⁶,⁴¹,⁹³ Taguchi's orthogonal arrays are invariably two-level, three-level, or mixed-level FFDs. An inner design constructed over the control factors finds the optimum settings. An outer design over the noise factors examines how the response behaves over a wide range of noise conditions. The experiment is performed at all combinations of the inner and outer design runs.

Actually, a Taguchi experiment is the cross-product of the two orthogonal arrays. Figure 18 illustrates the layout of the arrays as per a two-level, three-factor Taguchi design. Pictorially, it can be seen as a conventional design in the inner-array factors (compare Figure 9b for the classical 2³ FD), with the addition of a "small" outer-array factorial design at each corner of the "inner array" box. Taguchi experimental designs based on orthogonal arrays are usually labeled L8, to indicate an array with eight runs, whereas classical experimental designs are identified with a superscript to indicate the number of variables. Thus, because a 2³ classical experimental design also has eight runs, the designs generated by the two methods are often analogous. Table 12 shows a Taguchi L8 array (in contrast to Table 2, which illustrates a classical 2³ design) to investigate the effects of up to seven factors in eight runs. Use of linear graphs and interaction tables would select columns 1, 2, and 4 to identify the effects of three factors, and this corresponds to the same classical design (Table 2).³¹

FIGURE 18. Diagrammatic representation of the inner 2³ and outer 2² arrays for a Taguchi robust design, with "I" as the inner array and "E" as the outer array.

For Taguchi arrays, each row represents a run of the experiment; here, each design has eight runs. Each column represents the settings of the factor at the top of the column. In the classical design, the levels are (–1, +1), while in the Taguchi design, the levels are (1, 2), implying (low, high) for each factor. At the bottom of each design is the corresponding column number of the alternative design—e.g., column 1 in the Taguchi design (Table 12) corresponds to column C in the classical design (Table 2), and vice versa. The Taguchi design has the same number of components as the classical design, but in a different order. However, the columns for the settings of the factors, chosen according to the interactions assumed by the investigator, may or may not be present in the process. The investigator consults an interaction table and/or linear graphs to determine which columns to choose in the design.

The response variable in Taguchi data analysis is not the usual raw response or quality characteristic, but the signal-to-noise (S/N) ratio.⁴,⁶,⁷,⁴³ The S/N ratio is a performance statistic, calculated across the entire outer array for each inner run, which becomes the response for a fit across the inner design runs. Its formula depends on whether the experimental goal is to maximize, minimize, or match a target value of the quality characteristic of interest.
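For concreteness, the three standard S/N statistics listed in Table 13 can be computed as follows (an added sketch; the replicate values are hypothetical):

```python
import math

def sn_larger_is_better(y):      # maximize the response
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

def sn_smaller_is_better(y):     # minimize the response
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

def sn_nominal_is_best(y):       # target a specific value
    mean = sum(y) / len(y)
    s2 = sum((v - mean) ** 2 for v in y) / (len(y) - 1)  # sample variance
    return -10 * math.log10(s2 / mean ** 2)

replicates = [92.1, 95.4, 90.8]  # hypothetical outer-array measurements
print(round(sn_larger_is_better(replicates), 2))
```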

From the drug delivery perspective, while using Taguchi methods one first needs to determine the control factors that can be set by the product development pharmacist.⁶,⁹³⁻⁹⁵ These are the factors in the experiment for which different levels are investigated. Next, decisions are made on the choice of an appropriate orthogonal array for the experiment and on the methodology for measuring the quality characteristic of interest. It must be remembered that most S/N ratios require that multiple measurements be taken during each run of the experiment—e.g., the variability around the nominal value cannot otherwise be assessed. Finally, the experiment is conducted, the factors that most strongly affect the chosen S/N ratio are identified, and the production process is reset accordingly.

TABLE 12. Taguchi L8 Array for Three Variables

| Experimental run | Col. 1 | Col. 2 | Col. 3 | Col. 4 | Col. 5 | Col. 6 | Col. 7 |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 1 | 1 | 1 | 2 | 2 | 2 | 2 |
| 3 | 1 | 2 | 2 | 1 | 1 | 2 | 2 |
| 4 | 1 | 2 | 2 | 2 | 2 | 1 | 1 |
| 5 | 2 | 1 | 2 | 1 | 2 | 1 | 2 |
| 6 | 2 | 1 | 2 | 2 | 1 | 2 | 1 |
| 7 | 2 | 2 | 1 | 1 | 2 | 2 | 1 |
| 8 | 2 | 2 | 1 | 2 | 1 | 1 | 2 |
| Classical 2³ FD column | C | B | BC | A | AC | AB | ABC |


Depending upon the situation at hand, the S/N ratios are maximized, minimized, or targeted to a specific limit or range. Table 13 lists the recommended performance statistics for Taguchi's S/N ratios; the table also encompasses envisioned drug delivery applications where Taguchi arrays hold high promise and can find plausible application.

Besides robustness, the Taguchi methodology emphasizes minimization of the loss function—i.e., minimizing the economic loss associated with running the experiments at nonoptimal conditions. In fact, Taguchi's analysis begins with an operational definition of quality as a measure of loss: the greater the quality loss, the lower the quality. The ideal point of highest quality is obviously the one representing no quality loss. Taguchi designs allow estimation of the maximum number of main effects in an unbiased (orthogonal) manner with the minimum number of experimental runs. Most analyses of robust design experiments amount to a standard ANOVA of the respective S/N ratios, ignoring two-way or higher-order interactions and sometimes using accumulation analysis.⁷,⁴³ Besides screening of influential variables, Taguchi array designs hold tremendous potential for response surface modeling, especially when the number of factors is quite large.

TABLE 13. Taguchi's Signal-to-Noise Ratios as Performance Statistics in Drug Delivery

| Goal of optimization study | Signal-to-noise ratio | Potential instances of drug delivery relevance |
|---|---|---|
| Maximization of the response | SN = −10 log₁₀[(1/n) Σᵢ (1/Yᵢ²)] | Flux from a transdermal patch through skin; bioadhesive strength of a bioadhesive tablet; MDT of an oral controlled release tablet; entrapment efficiency in nanoparticles; dissolution rate of rapid release tablets; shelf-life of a drug delivery system; floating time of a hydrodynamically balanced system |
| Minimization of the response | SN = −10 log₁₀[(1/n) Σᵢ Yᵢ²] | Drug leakage from liposomal systems; T₈₅% of a fast release solid dispersion |
| Targeting a specific value or range | SN = −10 log₁₀(s²/Ȳ²) | Lag time of release of enteric coated formulations; release exponent value for zero-order kinetics; dispersion time of dispersible tablets; HLB of microemulsions; hardness of oral compressed matrices |


k. Optimal Designs

If the experimental domain is of a definite shape—either cubic or spherical—the standard experimental designs are normally used. However, when the domain is irregular in shape, optimal designs can be used.⁴,²⁸,⁹⁶ These are nonclassic custom designs, generated by computerized exchange algorithms.⁷,⁹⁷ In general, such custom designs are generated on the basis of a specific optimality criterion, such as the D-, A-, G-, I-, or V-optimality criterion.⁶,⁷,³¹ These optimality criteria are based upon the minimization of various parameter and design prediction variances. The variable space in such designs consists of a candidate set of design points, comprising all the possible treatment combinations that the formulator wishes to consider in an experiment. The design points are then selected from this candidate set according to the chosen criterion.

The most popular criterion for custom designs is D-optimality. D-optimal designs are based on the principle of minimizing the variance and covariance of the model parameters. The optimal design method requires that a correct model be postulated, the variable space be defined, and the number of design points be fixed in such a way as to determine the model coefficients with the maximum possible efficiency. These powerful designs can be continued—i.e., more design points can be added subsequently, and the experimentation can be carried out in stages. In particular, when augmenting an experimental design causes the domain to lose its regular shape, a D-optimal design can be employed for the further studies. Many new terms can be added to the original model in any direction, and the corresponding optimal new test runs (with respect to this expanded model) can be determined. Two sets of experiments, carried out in different blocks, can also be grouped together. Depending upon the problem, these designs can be used along with factorial, central composite, and mixture designs. Besides formulation and process optimization, optimal designs are also successfully used for the screening of factors.⁷⁴,⁸³,⁹¹,⁹⁶,⁹⁸
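As a toy illustration of the D-optimality criterion (added here; real DoE software uses exchange algorithms rather than the brute-force enumeration shown), one can select, from a 3 × 3 candidate grid, the six runs that maximize |XᵀX| for a two-factor interaction model:

```python
from itertools import combinations
import numpy as np

# Candidate set: all treatment combinations on a 3 x 3 coded grid.
candidates = [(x1, x2) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]

def model_matrix(points):        # intercept, X1, X2, X1*X2
    return np.array([[1.0, x1, x2, x1 * x2] for x1, x2 in points])

def d_criterion(points):         # determinant of X'X, to be maximized
    X = model_matrix(points)
    return np.linalg.det(X.T @ X)

best = max(combinations(candidates, 6), key=d_criterion)
print(best, round(d_criterion(best), 1))
```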

Apart from these commonly employed experimental designs, there are some relatively less popular designs, described below.

l. Rechtschaffner Designs

These designs are of importance in situations where the model involves main effects and first-order interactions.⁶,⁹⁹ Although these designs are saturated, they are neither balanced nor orthogonal, except for the five-factor design, where the main effects can be independently estimated. Notwithstanding the fact that use of these designs has seldom been reported for factor influence studies, they hold sufficient promise for the pharmaceutical formulator.⁶,¹⁰⁰


m. Cotter Designs

This design is generally used for screening purposes; it is advantageously employed when a large number of factors is to be screened with lesser resources and there is a likelihood of interactions among the factors.⁵⁷

n. Other Designs

Some scientists have also employed the Latin square design¹⁰¹ and other orthogonal arrays¹⁰² for optimizing pharmaceutical formulations. Experimental designs such as distance-based designs, hybrid designs, etc., can also be employed for systematic DoE studies, though with limited fruition.

2. Choosing Experimental Designs

The choice of an experimental design is customarily a compromise between the information required and the number of experimental studies to be conducted.⁹ It depends largely upon the objectives of the study and the number of factors to be investigated. If the primary purpose of the experiment is to screen out the few important main effects from the many less important ones, screening designs are used.⁸,¹⁵,¹⁹,³¹ By and large, low-resolution designs suffice for the simple screening of a large number of experimental parameters; these are usually FDs (full or fractional), PBDs, or Taguchi designs. Screening designs support only linear responses. Thus, if a nonlinear response is detected, or a more accurate picture of the response surface is required, a more complex design type is necessary. Hence, when the investigator is interested in estimating interaction and even quadratic effects, or intends to have an idea of the local shape of the response surface, response surface designs are used.

For interaction models, resolution IV or V designs are usually preferred. However, some interaction terms in the model may be confounded with others, and further experimentation might be required to decouple these terms at a later stage.⁹,³¹ Designs such as the BBD or CCD, which support nonlinear responses, are commonly used for RSM optimization applications. When the formulator has several factors that are proportions of a mixture formulation, mixture designs are specifically favored.¹⁵

On the whole, the first-order experimental designs must enable estimation of the first-order effects, preferably free from interference by the interactions among factors and other variables.⁷,⁵⁴ These designs also allow testing for the goodness of fit of the proposed model. Even though they are able to detect the existence of curvature in the response surface, they should normally be employed only in the absence of such curvature.

If there are only a small number of factors to be studied at extreme levels, then 2ᵏ FDs are acceptable. If there are more factors and levels, then perhaps an FFD or BBD is better.

TABLE 14. Application of Important Experimental Designs Depending Upon the Nature of the Factors, Models, and Strategies

| Study objective | 2ᵏ FD | xᵏ FD | FFD | PBD | CCD | BBD | EQD | SMD | EVD | TGD | DOD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Screening (effect study) | ✓ | ✓ | ✓ | ✓ | — | — | — | ✓ | — | ✓ | ✓ |
| Factor influence study | ✓ | ✓ | ✓ | ✓ | — | — | — | ✓ | — | ✓ | ✓ |
| Response surface mapping | — | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

BBD: Box–Behnken design; CCD: central composite design; DOD: D-optimal design; EQD: equiradial design; EVD: extreme vertices design; FD: factorial design; FFD: fractional factorial design; PBD: Plackett–Burman design; SMD: simplex mixture design; TGD: Taguchi design.


If the number of product factors and processing parameters under consideration is large (i.e., ≥7), a Taguchi design may be better. In such cases, creating cause–effect (Ishikawa fishbone) diagrams can be quite useful.⁹ Computer-assisted designs such as D-optimal designs are better suited to situations wherein a large number of qualitative factors are incorporated in the design and/or the resultant experimental domain is irregular in shape.⁴,⁵⁴ The compilation in Table 14 acts as a guide for selecting an experimental design, based upon the motive of the study.

To facilitate better interpretation of results, it is always worthwhile to run one or more replicate batches (especially at the central points) and test them to determine the reproducibility of the batches, the accuracy and precision of the analytical data, and the reliability of the contour plots.

III.D. Step IV: Formulation of DDS and Their Evaluation

A design matrix is generated according to the selected experimental design. The various drug delivery formulations are prepared according to the generated design matrix in a randomized manner.⁵,⁹,¹⁹,³⁹,⁵⁵ Randomization ensures that "noisy" factors are spread uniformly across all "control" factors. The factors are varied at the selected levels, while all other process and formulation variables are kept constant. The raw materials and the experimental conditions should also be kept constant to avoid variability from unwanted sources. Subsequently, the prepared drug delivery formulations are suitably evaluated for the corresponding performance parameters and response variables. The analytical methods used for the purpose should yield results with maximum precision and reproducibility, because the success of any optimization study depends largely upon the accuracy and reliability of the input data.

III.E. Step V: Computer-Aided Modeling and Optimization

1. DoE Data Analysis and Modeling

The planned conduct of experimentation is succeeded by deft interpretation of the data. This is a vital phase that involves several further steps.⁷,⁴⁶,⁵³,¹⁰³ DoE data analysis starts with an overview and examination of the data for the presence of outliers or obvious problems. A wide array of plots is drawn to uncover anomalies or provide insights that go beyond what most quantitative techniques are capable of discovering. Plots such as response histograms, response versus time-order scatter plots, response(s) versus factor-level plots, main-effects mean plots, and normal or half-normal plots can be drawn for a better understanding of the system. After this, the polynomial equations are generated based upon the proposed mathematical model. Various statistical tests of significance, such as ANOVA or Student's t test, are applied to test the model and to simplify it further. Residual graphs and other model diagnostic plots are also drawn to confirm the correct transformation of the data. To accomplish the task, it is important that the formulation scientist carrying out the DoE data analysis be aware of the fundamental statistical principles of data transformation, normality, linearity, residual analysis, lack-of-fit tests, ANOVA, p values, etc.⁶,⁷,³¹

a. Model Selection

"All models are wrong. But some are useful." This assertion of Box and Draper⁴⁶ characterizes the situation that a formulation scientist faces while optimizing a system. Accordingly, the success of the optimization study depends substantially upon the judicious selection of the model. In general, a model has to be proposed before the start of the DoE optimization study.⁴⁵ Model selection depends upon the types of variables to be investigated and the type of study to be made—e.g., description of the system, prediction of the optima or feasible regions, or factor screening. The choice also depends on the a priori knowledge of the experimenter about possible interactions and quadratic effects.⁴,²⁸,⁴⁷,⁵⁴ If the model chosen is too simple, higher-order interactions and effects may be missed because the relevant terms are not part of the model. If the model selected is too complicated, overfitting of the data may occur, resulting in large variance in the predictions and low reliability of the predicted optimum. The models mostly employed to describe the response are the first-, second-, and, very occasionally, third-order polynomials. A first-order model is initially postulated; if a simple model is found inadequate for describing the phenomenon, higher-order models are pursued.

A series of computations is performed after hypothesizing the model, for calculating the coefficients of the polynomials and their statistical significance, to enable estimation of the effects and interactions.

Calculation of the coefficients of the polynomial equations. Regression is the most widely used method for quantitative factors.¹⁰⁴,¹⁰⁵ It cannot be used for qualitative factors, because interpolation between discrete (categorical) factor values is meaningless. In ordinary least-squares (OLS) regression, a linear model, expressed as Eq. (14), is fitted to the experimental data to estimate the values of β such that the sum of squared differences between the predicted and observed responses is minimized:

$$Y = \beta_0 + \beta_1 X_1 \quad\text{or}\quad Y = \beta_0 + \beta_1 X_1 + \beta_{11} X_1^{2} \tag{14}$$


Multiple linear regression analysis (MLRA) can be performed for more factors (Xᵢ), interactions (XᵢXⱼ), and higher-order terms, as depicted in Eq. (15):

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \cdots \tag{15}$$
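A minimal OLS/MLRA sketch via the normal equations, β = (XᵀX)⁻¹XᵀY, for the interaction model of Eq. (15); the data here are hypothetical and serve only to illustrate the computation:

```python
import numpy as np

# Hypothetical coded factor settings and responses.
X1 = np.array([-1, -1, 1, 1, 0])
X2 = np.array([-1, 1, -1, 1, 0])
Y = np.array([12.1, 15.3, 14.0, 21.2, 15.9])

# Model matrix for Y = b0 + b1*X1 + b2*X2 + b12*X1*X2.
X = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2])
beta = np.linalg.solve(X.T @ X, X.T @ Y)   # b0, b1, b2, b12
print(beta.round(3))
```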

In certain situations in which the factor/response relationship is nonlinear, multiple nonlinear regression analysis (MNLRA) may also be performed.¹⁰⁵ Regression analysis can only be performed on the coded data or the original values after one or several models have been postulated, the choice being based on some expectation of the response surface. In situations in which there are large numbers of variables, such as in multivariate studies, the methods of partial least squares (PLS) or principal component analysis (PCA) can also be employed for regression.²⁹,⁷³,¹⁰³ PLS is an extension of MLRA and is used in situations in which there are fewer observations than predictor variables.²⁹,⁷³,¹⁰³,¹⁰⁶ It also aids in selecting suitable predictor variables and identifying outliers before carrying out classical linear regression. The other multivariate analytical technique, PCA, aims at reducing data dimensionality while retaining as much variation among the data as possible.²⁹,¹⁰⁷,¹⁰⁸ It linearly transforms a large number of intercorrelated variables, often referred to as the original variables, into the same or a smaller number of uncorrelated variables.¹⁰⁹ Each of these uncorrelated variables is called a principal component (PC). The PCs are constructed hierarchically, so that their variances are in descending order and the first several PCs explain most of the variation among the original variables. Data analyses are then performed on these leading PCs instead of the original variables.

Estimation of the significance of the coefficients and the model. Testing the significance of the coefficients can be carried out using ANOVA, followed by Student's t test.³³,⁵³,¹¹⁰,¹¹¹ The ANOVA computation can be performed using the Yates algorithm to find the significance of each coefficient. This ANOVA helps in determining the significance of the model as well as of the lack of fit. It is always advisable to retain only the significant coefficients in the final model equation. The values of the Pearsonian coefficient of determination (r²) and that adjusted for degrees of freedom (r²adj) of the polynomial equation are also compared. The value of r² is the proportion of variance explained by the regression according to the model, and is the ratio of the explained sum of squares to the total sum of squares:

$$r^{2} = \frac{SS_{\mathrm{TOTAL}} - SS_{\mathrm{RESIDUAL}}}{SS_{\mathrm{TOTAL}}} \tag{16}$$


The closer the value of r² is to unity, the better the fit and, apparently, the better the model.⁶,⁷,¹⁰⁴,¹⁰⁵,¹¹⁰ However, there are limitations to its use in MLRA, especially when comparing models with different numbers of coefficients fitted to the same data set. A saturated model will inevitably give a perfect fit, and a model with almost as many coefficients as data points is likely to yield a higher value of r². In such cases, r²adj, which corrects the r² value for the number of degrees of freedom, is preferred. The value of r²adj is calculated using the equivalent mean squares (MS) in place of the sums of squares (SS), as described in Eq. (17).⁶,¹⁰⁴ Its value is usually less than r².

r²adj = (MS_total − MS_residual) / MS_total  (17)

The predicted residual sum of squares (PRESS) is calculated as the sum of squared differences between the observed values (Yᵢ) and the predicted values (Ŷᵢ) obtained using the leave-one-out method.⁶,²⁸,⁴¹ Ideally, its value should be zero or close to it.

PRESS = Σ (Yᵢ − Ŷᵢ)²  (18)

Equation (19) computes the cross-validated value of r², i.e., Q², as a measure of the predictive power of the model.¹⁴ Relative to r², it underestimates the goodness of fit. A fit with Q² > 0.5 is considered fairly good, while Q² > 0.9 is usually taken as excellent.

Q² = 1 − PRESS / SS_total  (19)
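The four fit measures of Eqs. (16)–(19) can be computed directly from a linear OLS fit. The sketch below uses the standard leave-one-out shortcut for PRESS, in which each leave-one-out residual equals the ordinary residual divided by (1 − hᵢᵢ), where hᵢᵢ is the leverage of run i; the function name fit_metrics is, of course, illustrative.

    import numpy as np

    def fit_metrics(y, X):
        """r², adjusted r², PRESS, and Q² for a linear OLS fit (Eqs. 16-19)."""
        n, p = X.shape
        H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat (projection) matrix
        resid = y - H @ y                                # ordinary residuals
        ss_total = np.sum((y - y.mean()) ** 2)
        ss_resid = np.sum(resid ** 2)
        r2 = (ss_total - ss_resid) / ss_total            # Eq. (16)
        r2_adj = 1 - (ss_resid / (n - p)) / (ss_total / (n - 1))   # Eq. (17)
        press = np.sum((resid / (1 - np.diag(H))) ** 2)  # leave-one-out, Eq. (18)
        q2 = 1 - press / ss_total                        # Eq. (19)
        return r2, r2_adj, press, q2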

Finally, all these parameters are assessed to help in choosing the most appropriate model for a particular response. The final polynomial equation is subsequently used to calculate the magnitude of the effects and interactions. Table 15 is a typical ANOVA table generated during modeling of controlled-release buccoadhesive compressed matrices employing HPMC and Carbopol as the factors.¹⁸,¹⁹ The statistically significant value (p < 0.001) of Fisher's ratio (F) and the insignificant "lack of fit" F value (p > 0.001) unambiguously confirm that the proposed model fits the data well.

Model diagnostic plots. One or more of the following plots are usually constructed to investigate the goodness of fit of the proposed model (a computational sketch of several of these diagnostics follows the list):


• Actual vs. predicted: A graph is plotted between the actual and the predicted response values.⁷,¹⁸,¹¹² This helps in detecting a value, or a group of values, that is not easily predicted by the model. Ideally, such plots, passing through the origin, should be highly linear, i.e., with r² values close to unity. These plots are simple to construct and comprehend. They reveal the most pragmatic information of prognosis, i.e., whether the experimentally observed values of the responses are analogous to those predicted using the optimization methodology. Figure 19a illustrates this concept.

• Residuals vs. predicted: The residual (or error) is the magnitude of the difference between the observed and the predicted response(s). Studentized residuals are the residuals converted to their standard deviation units.³¹,⁵³,¹¹² The residuals (or Studentized residuals) are plotted versus the predicted values of the response parameters. The plot tests the assumption of constant variance. It should show a random and uniform scatter, with points close to the zero axis and a constant range of residuals across the graph (Fig. 19b). Distinct patterns, such as expanding variance (a megaphone pattern), indicate the need for a suitable data transformation (e.g., logarithmic, exponential, square root, inverse).

TABLE 15. ANOVA Table for a Response Variable*

Model term   Source                 DF   Mean square   F          p value
             Model                   7   217.25        1.330E+5   <0.0001
X1           HPMC                    1   300.62        1.841E+5   <0.0001
X2           Carbopol                1   155.63        95065.65   <0.0001
X1²          HPMC × HPMC             1   1.69          1034.50    <0.0001
X2²          Carbopol × Carbopol     1   1.390E–3      0.85       0.4243
X1X2         HPMC × Carbopol         1   14.10         8634.99    <0.0001
X1²X2        HPMC² × Carbopol        1   0.015         9.44       0.0545
X1X2²        HPMC × Carbopol²        1   2.60          1594.72    <0.0001
             Residuals               3   1.633E–3
             Lack of fit             1   4.299E–4      14.33      0.0632
             Pure error              2   3.000E–4
             Corrected total        10

* Data taken from Singh & Ahuja¹⁸,¹⁹


FIGURE 19. Various types of diagnostic plots for selecting suitable model(s): (a) predicted vs. actual; (b) Studentized residuals vs. predicted; (c) Studentized residuals vs. run; (d) normal probability plot of Studentized residuals; (e) outlier T vs. run; (f) Cook's distance vs. run; (g) leverage vs. run; (h) Box–Cox plot of ln(residual SS) vs. λ.


• Residuals vs. run: This is a plot of the residuals versus the order of the experimental runs.⁷,³⁴ It checks for "lurking variables" that may have influenced the response during the experiment. The plot should show a random and uniform scatter, as in Figure 19c. Trends indicate a time-related variable lurking in the background.

• Residuals vs. factor: This is a plot of the residuals versus any selected factor.³¹ It checks whether the variance not accounted for by the model differs across the levels of a factor. Ideally, the plot should exhibit a random scatter. Pronounced curvature may indicate a systematic contribution of the independent factor that is not accounted for by the model.

• Normal probability plot: This investigates the normal probability distribution of the residuals, as judged from the linear trend of the points when plotted on a probit scale (Fig. 19d). Definite patterns, such as an S-shaped curve, suggest that a transformation of the response data may provide a better analysis.⁷,³⁴,⁵³ On the other hand, when this graph is plotted for "effects," its interpretation is different. Here, the insignificant effects should follow an approximately normal distribution with the same location and scale, with the significant effects remaining distant from this normal distribution line. This is therefore an alternative method of identifying significant effects: those effects that lie substantially away from the straight line fitted to the normal plot are considered significant. Although this is a somewhat subjective criterion, it tends to work well in practice. It is helpful to use both the numerical output from the fit and graphical techniques such as the normal plot in deciding which terms need to be kept in the model.

• Outlier T: This is a measure of how many standard deviations the actual value deviates from the value predicted after deleting the point in question.¹¹³ It is often referred to as an externally Studentized residual, because the individual case is not used in computing the estimate of variance. Outliers should be investigated to find out whether a special cause can be assigned to them. If a definite cause is found, it may be acceptable to analyze the data without that point. However, if no special cause is identified, the point should probably remain in the data set. The graphical plots provide a better perspective on whether a case (or two) grossly deviates from the others. Figure 19e depicts this with one distinct outlier.

• Cook's distance: This provides a measure of the influence, potential or actual, of each individual run.⁷,¹¹⁴ It quantifies the effect that each point has on the model. A point with a very high distance value relative to the other points may be an outlier, as shown in Figure 19f.

• Leverage: This is a measure of the degree of influence of each point on the model fit.⁷ If a point has a leverage of 1, it controls the model, and the model must pass through that point (Fig. 19g). Leverages near 1 should be reduced by adding or replicating design points.

• Box–Cox plot for power transforms: The Box–Cox plot is a tool to help determine the most appropriate power transformation to apply to the response data.⁷,¹¹³,¹¹⁵ Most data transformations can be described by the power function σ = fn(μ^α), where σ is the standard deviation, μ is the mean, and α is the power. If the standard deviation associated with an observation is proportional to the mean raised to the power α, then transforming the observation by the power λ = 1 − α yields a scale satisfying the equal-variance requirement of the statistical model. Figure 19h shows a typical Box–Cox plot of ln(residual SS) against λ; the location of the minimum indicates the recommended value of λ (λ = 1 corresponds to no power transformation, λ = 0 to a logarithmic one).
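Several of these diagnostics are available off the shelf. The sketch below (statsmodels, with hypothetical data) extracts externally Studentized residuals (the outlier T values), Cook's distances, and leverages from a fitted OLS model:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = sm.add_constant(rng.uniform(-1, 1, size=(11, 2)))    # 11 runs, 2 factors
    y = 1 + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.05, size=11)

    infl = sm.OLS(y, X).fit().get_influence()
    print(infl.resid_studentized_external)   # outlier T (externally Studentized)
    print(infl.cooks_distance[0])            # Cook's distance for each run
    print(infl.hat_matrix_diag)              # leverage of each point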

Transformation of data. DoE situations exhibiting non-normal distribution characteristics, lack of fit of the proposed model to the data, or instability of the response variance invariably call for an appropriate transformation of the experimental data.⁷,¹¹³,¹¹⁵ The Box–Cox plot is of significant help in choosing the correct form of transformation. Box 3 outlines the various possible and pragmatic data transformations in common practice.

BOX 3. Various Types of Data Transformations Adopted in Experimental Designs

List of transformations performed: √y, log₁₀(y), logₑ(y), 1/y, 1/√y, logit(y), arcsin(y), and y^λ.


2. Search for Optimum

Optimization of one response, or the simultaneous optimization of multiple responses, can be accomplished either graphically or numerically. Optimization aims at determining the experimental conditions that lead to the best value of the response. This optimum response is often a maximum or a minimum. On the other hand, if optimization aims to achieve a target value, then a single point or an optimum zone is defined within the experimental region.⁶,²¹,³³,¹¹⁶ A bird's-eye view of the various optimization methodologies for single or multiple responses is given below.

a. Graphical Optimization

Known popularly as response surface analysis, graphical optimization displays the area of feasible response values in the factor space.⁶,¹⁶,³³,⁵⁴ For this, graphical optimization criteria are set. Selection of optima using graphical methods is not based on the minimization or maximization of any function. Hence, graphical techniques require only computability of the function(s), not continuity or differentiability, as is needed in the classical techniques. The experimenter has to make a choice, trading off one objective against the other(s) according to the relative importance of the objectives considered.²³,²⁸ Success in locating an optimum lies in the sagacious interpretation and/or comparison of the resulting plots, leading to the best compromise. One or more of the following techniques may be employed for this purpose.

Location of the stationary point. After completing the experimental work, the goal of the formulation scientist is often to locate the optimum.⁷,⁵⁴ The nature of the response surface is interpreted graphically, and a stationary point is located, which may be a maximum, a minimum, or a target value. At this point, the partial derivatives of the response with respect to the design variables are all zero. Figure 20a–b shows the location of the stationary point in the case of a maximum and a minimum, respectively. The case in which the stationary point is neither a maximum nor a minimum is known as a saddle point, as shown in Figure 20c.⁴⁵,¹¹⁷

Canonical analysis. When the number of factors investigated is large, i.e., more than two, the graphical procedure explained above cannot be interpreted with dexterity.¹⁶,²¹,⁵⁴ This is especially true when interactions among the various factors are also present. In such circumstances, canonical analysis is preferentially employed. Canonical analysis starts with transformation of the model into a new coordinate system with the origin at the stationary point.⁷,¹¹⁸,¹¹⁹ Rotation of the response function to a new set of axes that corresponds to the principal axes of the actual contour


system is then carried out. Equation (20), known as the canonical form of the model, illustrates the transformation.

ŷ = ŷₛ + λ₁w₁² + λ₂w₂² + … + λₖwₖ²  (20)

where the wᵢ are the transformed independent variables and the λᵢ are constants known as eigenvalues or characteristic roots.⁷ The nature of the response surface can be determined from the stationary point and from the signs and magnitudes of the λᵢ. If all the λᵢ are positive, the stationary point is a minimum, whereas if all the λᵢ are negative, it is a maximum. If the λᵢ are of different signs, the stationary point is a saddle point. Furthermore, the response surface is steepest in the wᵢ direction for which the magnitude of λᵢ is greatest. There are situations in which there is a region, rather than a point, within which the estimated optimum response exists; this is often termed a stationary ridge.⁷,⁵⁴ If the stationary point is outside the region of exploration for fitting the second-order model and one (or more) of the λᵢ is near zero, the surface may be a rising ridge. Figure 21a–b illustrates the cases of stationary and rising ridges.
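For a fitted second-order model written in matrix form as y = b₀ + b'x + x'Bx, the stationary point solves b + 2Bx = 0, and the eigenvalues of B play the role of the λᵢ in Eq. (20). A minimal numerical sketch with hypothetical coefficients follows:

    import numpy as np

    # Hypothetical second-order fit y = b0 + b'x + x'Bx in coded factors,
    # where diag(B) holds the bii and the off-diagonals hold bij/2
    b0 = 80.0
    b = np.array([1.2, -0.8])
    B = np.array([[-2.0, 0.5],
                  [0.5, -1.5]])

    xs = np.linalg.solve(-2.0 * B, b)    # stationary point: gradient b + 2Bx = 0
    lam, W = np.linalg.eigh(B)           # eigenvalues and principal axes, cf. Eq. (20)
    print("stationary point:", xs.round(3))
    print("eigenvalues:", lam.round(3))  # all negative here, hence a maximum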

Even after canonical analysis has been performed, there are situations in which a researcher needs additional help in determining the best operating conditions, e.g., when the stationary point is outside the experimental domain (i.e., a rising ridge) or when the stationary point is a saddle point. In such cases, ridge analysis, which determines the optimum response at a given distance from the center of the design region, can aid in locating the optimum.⁴⁵,⁴⁶

FIGURE 20. Diagrammatic representation of contour lines for location of the stationary point, S: (a) maximum; (b) minimum; (c) saddle point.

Search methods. Search methods are employed for choosing the upper and lower limits of the responses of interest.¹⁶,²⁸,³³,¹²⁰ The response surfaces in these search methods, as defined by the appropriate equations, are searched to find the combination of independent variables yielding the optimum. Two major steps are used, a feasibility search and a grid search; together, these techniques are also referred to as the brute-force method.¹⁶,²⁵,²⁸,³³,³⁷,¹²⁰ The feasibility search is used to locate a set of response constraints that are just at the limit of possibility. One selects several values for the responses of interest, and the response surface is searched to determine whether a solution is feasible. The feasibility search thus establishes whether the constraints can be satisfied. Subsequently, the exhaustive grid search is applied, wherein the experimental range is divided into a grid of specific size and searched methodically. The grid search method has been successfully used to provide a list of possible formulations and their corresponding response values.¹⁸,¹⁹,¹²¹
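A minimal sketch of such a brute-force grid search follows; the two fitted models are hypothetical, loosely echoing the Rel18h and bioadhesive strength criteria used later in Figure 22:

    import numpy as np

    # Hypothetical fitted models for two responses over coded factors X1, X2
    rel18h = lambda x1, x2: 82 + 4 * x1 - 3 * x2 - 2 * x1 * x2
    strength = lambda x1, x2: 26 + 2 * x1 + 3 * x2

    # Exhaustive grid search: keep combinations meeting both response constraints
    grid = np.linspace(-1, 1, 201)
    feasible = [(x1, x2)
                for x1 in grid for x2 in grid
                if 80 <= rel18h(x1, x2) <= 85 and 24 <= strength(x1, x2) <= 28]
    print(len(feasible), "feasible combinations, e.g.", feasible[0])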

Overlay plots. The response surfaces or contour plots are superimposed on each other to search for the best compromise visually.²⁸ Minimum and maximum boundaries are set for acceptable objective values, and the region wherein all the responses are acceptable is highlighted. This is termed an overlay plot or combined contour plot. Within this area, an optimum is located by trading off the different responses.⁴,¹²,¹²²,¹²³ The use of overlay diagrams is limited to three or four response variables.

FIGURE 21. Diagrammatic representation of (a) a stationary ridge; (b) a rising ridge.


Figure 22 depicts an instance of overlay plots used for locating an optimum formulation, with response values of release up to 18 hours (Rel18h) between 80 and 85% and bioadhesive strength (F) between 24 and 28 g.¹⁵,⁵² The white area between the contours of the response variables represents the feasible optimum region containing the formulations with the desired features.

Pareto optimality charts. To find the optimal factor combinations satisfying multiple criteria of a formulation, a Pareto optimality approach may also be used.²⁸,³²,⁷⁵,¹²⁴ These optimality charts are also called multiple-criteria decision-making plots.³² In this method, a graph is plotted between the predicted values of the objectives and the variables. The space occupied by the resulting cloud of points is called the feasible criterion space. Special subsets of the points (forming a shell partly around the cloud) are the Pareto-optimal (PO) points. A point in the feasible criterion space is PO if no other point in that space yields an improvement in one criterion without causing degradation in another. In contrast to the overlay method, optimization here is not performed by preselecting the desired values of the criteria and constructing contour plots; instead, it is performed by plotting a 2-D PO plot. There is no objective hierarchy within the PO points, and the technique can be used with any number of factors and criteria. No weighting factors or upper and lower boundaries are needed.

FIGURE 22. A contour overlay plot for two excipients (HPMC and sodium CMC, in coded units) showing the region between the two set criteria, i.e., release up to 18 h (Rel18h) between 80 and 85% and bioadhesive strength (F) between 24 and 28 g.


b. Mathematical Optimization Methods (Numerical Optimization)

Graphical analysis is usually preferred in the case of a single response. However, in cases of multiple responses, it is usually advisable to conduct numerical or mathematical optimization first to uncover a feasible region.⁴,⁶,²⁸

Desirability functions. This technique offers a way of overcoming the difficulty of multiple, sometimes opposing, responses.¹²⁵ In this approach, each response i is associated with its own partial desirability function (dᵢ). If the value of the response is optimum, its desirability equals 1, and if it is totally unacceptable, its value is 0. Thus, the desirability for each response can be calculated at any given point in the experimental domain. An overall desirability function can be calculated by multiplying all of the r partial functions together and taking the rth root. The optimum is the point with the highest value of the desirability. The contour plots of the desirability surface around the optimum should be studied along with the contour plots of the other responses, as described under overlay plots.⁴,⁶,⁴¹

The r individual desirability functions are combined, usually as their geometric mean, to obtain the overall desirability function (D) for the system, whose maximum value can then be sought within the domain. The quantitative relationship is given as Eq. (21).

D = (∏ dᵢ)^(1/r),  i = 1, …, r  (21)

An alternative, more general form of the overall desirability function is shown in Eq. (22),

D = ∏ dᵢ^(pᵢ),  i = 1, …, r  (22)

where pᵢ is the weighting of the ith response, normalized so that Σ pᵢ = 1.

The technique allows the optimization to take into account the relative importance of each response while selecting the most appropriate form of each partial desirability function. The corresponding desirability is calculated over the domain for each response using the model equations. The overall function is then determined and graphically mapped over the domain. Examination of the various contour plots of the desirability surface furnishes a reliable picture of the acceptable region. The shape of this function does not tend to be as smooth as that of the response surface(s). Numerical optimization by the desirability method leads to a single point, so it is necessary to complete the determination of the optimum by drawing the contour plot of the desirability surface.⁶,¹²⁵ Desirability surfaces should always be compared with the response surfaces for the original factors.
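A minimal sketch of this calculation follows, using one common form of the partial desirability functions (due to Derringer and Suich) and the geometric mean of Eq. (21); all limits, targets, and response values are hypothetical:

    import numpy as np

    def d_max(y, lo, hi):
        """Partial desirability for a 'maximize' goal (Derringer-Suich form)."""
        return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

    def d_target(y, lo, target, hi):
        """Partial desirability for a 'hit the target' goal."""
        if y <= target:
            return float(np.clip((y - lo) / (target - lo), 0.0, 1.0))
        return float(np.clip((hi - y) / (hi - target), 0.0, 1.0))

    # Overall desirability as the geometric mean of the partial values (Eq. 21)
    d = [d_max(83.0, 80.0, 85.0), d_target(26.5, 24.0, 26.0, 28.0)]
    D = np.prod(d) ** (1.0 / len(d))
    print(round(D, 3))

In practice, D would be evaluated over the whole experimental domain (e.g., on a grid) and the point of highest D reported as the optimum.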

Objective functions. These methods are used to seek an optimum formulation by solving the objective function for either a maximum or a minimum in the presence of equality and/or inequality constraints.¹¹,¹⁶,¹¹⁶ The objective function may be expressed as Eq. (23), and the inequality and equality constraints as Eqs. (24) and (25), respectively.

Y = f(X₁, X₂)  (23)

G(X) = f₁(X₁, X₂) ≥ 0, i.e., an inequality constraint  (24)

H(X) = f₂(X₁, X₂) = 0, i.e., an equality constraint  (25)

If the objective function is expressed as a function of a single variable, i.e., Y = f(X), the calculus-based mathematical approach is applied to find the maximum or minimum of the function: the first derivative is taken and set equal to zero, and the equation is solved for X to obtain the maximum or minimum. When the relationship for the response Y (the objective function) is given as a function of two or more independent variables, as in Eq. (23) for X₁ and X₂, the problem is slightly more involved. Mathematically, appropriate manipulations with the partial derivatives of the function can locate the pair of X values required for the optimum. This approach is known as classical optimization and is applicable only to unconstrained problems. These techniques, however, find relatively limited use in the optimization of pharmaceutical drug formulations and delivery systems, where the problems are generally constrained ones.¹¹,¹⁶
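As a minimal illustration of this calculus-based approach, the sketch below uses SymPy to set the partial derivatives of a hypothetical second-order objective function to zero and solve for the stationary pair (X₁, X₂):

    import sympy as sp

    X1, X2 = sp.symbols("X1 X2")
    # Hypothetical unconstrained objective function Y = f(X1, X2), cf. Eq. (23)
    Y = 60 + 8 * X1 + 6 * X2 - 2 * X1**2 - 3 * X2**2 - X1 * X2

    # Classical optimization: set both partial derivatives to zero and solve
    stationary = sp.solve([sp.diff(Y, X1), sp.diff(Y, X2)], [X1, X2])
    print(stationary, "->", Y.subs(stationary))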

Sequential unconstrained minimization technique. As the name suggests, the sequential unconstrained minimization technique (SUMT) can also be used to solve the objective function for a maximum or a minimum.¹¹⁶,¹²⁶,¹²⁷ In this method, the constrained optimization problem is transformed into an unconstrained one by adding a penalty function, the resulting function being called the transformed unconstrained objective function. However, because different starting points may lead to different optimum solutions, a suitable random-number technique such as the Monte Carlo approach can be applied.

Lagrangian method. This method can be used for the optimization of functions expressed as in Eqs. (23)–(25) using a series of sequential steps: determining the objective function and constraints, and changing each inequality constraint into an equality constraint by introducing a slack variable.¹¹,¹⁶,⁷⁴,¹²⁸ The several equations are then combined into a Lagrange function, with one Lagrange multiplier for each constraint. The Lagrange function is partially differentiated with respect to each variable, and the resulting set of simultaneous equations is solved by setting the derivatives equal to zero.
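In practice, such constrained problems are usually handed to a numerical solver. The sketch below uses SciPy's SLSQP method, which handles the Lagrange-multiplier bookkeeping internally, on a hypothetical objective with one inequality and one equality constraint:

    from scipy.optimize import minimize

    # Hypothetical objective (negated, because the solver minimizes)
    f = lambda x: -(60 + 8 * x[0] + 6 * x[1] - 2 * x[0]**2 - 3 * x[1]**2)
    cons = ({"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]},   # G(X) >= 0
            {"type": "eq", "fun": lambda x: x[0] - 2.0 * x[1]})     # H(X) = 0

    res = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=cons)
    print(res.x.round(3), round(-res.fun, 3))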

c. Extrapolation Outside the Domain

When choosing a very extensive experimental domain is difficult, or the possible experimental domain is not known at the beginning of the study, the optimum may be determined by extrapolation.⁴,⁶,²⁸ There are two main model-based methods for extrapolating outside the domain: steepest ascent (first-order model) and optimum path (second-order model). Model-independent sequential methods are also available.

Steepest ascent (or descent) methods. These are direct optimization methods for first-order designs.⁷,⁵⁴ They are good choices when the optimum is outside the domain and is to be reached rapidly.⁶,²⁸ These approaches are an amalgamation of model-independent and model-dependent methods. Assuming that the response is to be maximized, it is determined as a function of the coded variables, X₁, X₂, and so on. The direction of the maximum rate of increase (the steepest ascent, Δx) is given by the partial derivatives of the fitted response with respect to the factors. A straight line is then drawn along that direction from the center of the region of interest. This is followed by experimentation at a suitable spacing (α) along this line, according to Eq. (26), and subsequent measurement of the response. This is continued until an optimum is reached, as illustrated in Figure 23.

X_new = X_old + α Δx  (26)
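A minimal sketch of the resulting experimental path, assuming a hypothetical first-order fit in coded units, follows; in a real study, each printed setting would be run and the response measured before deciding whether to continue:

    import numpy as np

    # Hypothetical first-order fit in coded units: Y = 70 + 5*X1 + 3*X2
    beta = np.array([5.0, 3.0])
    direction = beta / np.linalg.norm(beta)     # direction of steepest ascent

    x = np.zeros(2)                             # start at the design center
    alpha = 0.5                                 # chosen spacing along the path
    for run in range(1, 5):
        x = x + alpha * direction               # Eq. (26)
        print(f"run {run}: settings {x.round(2)}")  # measure the response here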

Optimum path method. This method is analogous to the steepest ascent method,⁶ but it is used for extrapolating from a second-order design along a curved path.

Model-independent sequential methods. The lack of a priori knowledge about the effects of the variables makes sequential methods good choices for optimization.⁶,⁸,¹⁶,¹⁹,²⁸ In the sequential approach, optimization is attempted in a stepwise fashion. Experimentation is started at an arbitrary point in the experimental domain, and the responses are evaluated. Subsequent experiments are designed on the basis of these results, according to an algorithm directing the new experiments toward the optimum. Whether the chosen optimum is a maximum or a minimum, the general term used for this approach is hill climbing.²⁸ An important aspect of sequential designs is knowing when the procedure has finished. Although there are many different stopping rules, the best method sometimes involves the experimenter's skill in judging a true optimum. The advantage of the approach lies in the fact that neither a priori knowledge nor the planning of all the experiments at once is required.¹⁶,¹²⁹ Despite its interactive nature, this approach is not devoid of drawbacks. The major pitfalls include inapplicability to multiple-objective problems and to situations where the response surface is not continuous, difficulty in locating the global optimum, unreliable results when multiple optima exist, and inability to generate a mathematical model.⁸,²⁸

Sequential simplex and its modifications. This technique consists of first generating data from n + 1 experiments, where n is the number of independent variables or factors.⁸,¹⁰,³³ Based on the n + 1 responses and predetermined rules, one result is eliminated and a new experiment is performed. A decision is made as a result of each experiment, eventually terminating the study at an optimal response.
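Readily available implementations of this idea exist. The sketch below applies the Nelder–Mead simplex algorithm from SciPy to a hypothetical response function (negated, because the solver minimizes); the algorithm builds and updates the n + 1 simplex points internally:

    from scipy.optimize import minimize

    # Hypothetical response; the simplex method needs neither a fitted
    # model nor derivatives of the function being optimized
    response = lambda x: -(75 - (x[0] - 0.3)**2 - 2 * (x[1] + 0.2)**2)

    res = minimize(response, x0=[0.0, 0.0], method="Nelder-Mead")
    print(res.x.round(3), round(-res.fun, 3))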

FIGURE 23. Graphical depiction of the steepest ascent method of optimization search.

Evolutionary operations (EVOP). This is a very common technique in many disciplines other than pharmaceutical technology. The underlying basis of this approach is that the production procedure (formulation and process) is allowed to evolve to the optimum by careful planning and constant repetition.⁶,¹⁶,⁴⁹,¹²⁹ The process is run in such a way that it produces a product meeting all specifications and, at the same time, generates information on product improvement. Generally, these studies involve factorial and simplex designs requiring a large number of experiments.³³ In a typical industrial process, this extensive experimentation is usually not a problem, because the process will be run repeatedly.¹⁶,⁴⁶ In most situations involving drug delivery development, however, this is not so, because there is often insufficient freedom in the formula or process to allow the necessary experimentation. In a pharmaceutical development setup, therefore, more efficient methods are desirable.

d. Artificial Neural Networks

Of late, the application of artificial neural networks (ANNs) in the field of pharmaceutical development and optimization of dosage forms has become a burning topic of discussion.¹³⁰ The technique is widely practiced in the optimization of DDS.²⁰,¹³⁰⁻¹³⁴ ANNs are model-independent computational paradigms that can simulate the neurological processing ability of the human brain. Neural networks, consisting of interconnected adaptive processing units, so-called neurons, are able to discern complex and latent patterns in the information presented to them. An ANN is a computer-based learning system that can quantify a nonlinear relationship between causal factors and pharmaceutical responses by means of iterative training on data obtained from a designed experiment. The results obtained from implementation of an experimental design are used as input information for learning. Once trained, an ANN may be used to forecast outputs from new sets of input conditions.¹³²,¹³⁵,¹³⁶

A typical ANN must have one input layer and one output layer, and may contain one or more hidden layers, as depicted in Figure 24.¹³⁰,¹³⁵ The information is passed from the input layer to the output layer through the hidden layer(s) by the network connections, or synapses. Modeling starts with a random set of synaptic weights and proceeds in iterations. During each iteration, the connection weights are adapted via the selected modeling scheme. The basis of such a modeling technique is to minimize the δ error, i.e., the difference between the momentary network signal and the target signal based on the experimental results.¹³⁵ When the minimal value of the δ error is obtained, learning is complete and the connection weights become the memory units. After this, a test set of values can be applied to the trained ANN to evaluate it. Subsequently, it can be used to predict outputs on the basis of new input values. The modeling is invariably done with suitable computer software.

The prediction ability (PA), or reliability, of an ANN output depends heavily on the training data.¹³⁷ Two problems that tend to diminish PA are overfitting (i.e., too few data points per network connection) and overtraining (i.e., too long a network training period). Thus, an ANN does not work well with many variables and few formulations. Furthermore, the results from an ANN cannot be treated statistically, and no definitive reasons can be given for its predictions. In an attempt to improve PA and to reduce the training effort, genetic neural networks (GNN) and generalized regression networks (GRN) have been used with fruition.²⁰,¹³⁷ The former employ a combination of genetic algorithms with ANNs, while the latter model the function more or less directly from the training data. Because ANNs require a great deal of iterative computation, the use of versatile computer software dedicated to the purpose is almost obligatory for their execution.¹³⁵

FIGURE 24. Schematic diagram illustrating the various parts of an artificial neural network. X1–X3 represent the input factors; Y is the response variable, connected to the input layer via the various nodes of the hidden layer (H1–H9). W11 to W93 represent the connections between the corresponding input factors and the nodes of the hidden layer, while W1y, W5y, and W9y denote the connections between the corresponding hidden nodes and the output layer, Y.
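A minimal sketch of such a trained network, using scikit-learn's multilayer perceptron on synthetic data (the nine-neuron hidden layer merely echoes the H1–H9 topology of Figure 24), follows:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(30, 3))          # three causal factors (inputs)
    y = 1 + X[:, 0]**2 - X[:, 1] * X[:, 2] + rng.normal(scale=0.02, size=30)

    # One hidden layer of nine neurons; weights are adapted iteratively
    net = MLPRegressor(hidden_layer_sizes=(9,), max_iter=5000, random_state=0)
    net.fit(X, y)                                 # training on the DoE data
    print(net.predict([[0.2, -0.5, 0.1]]))        # forecast for a new input set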

3. Choosing an Optimization Methodology

In the case of a single response, graphical analysis is usually opted for.⁶ However, in the case of multiple response variables, certain responses can oppose one another. Accordingly, changes in a factor that improve one response may have a negative effect on another. Because it is not usually possible to obtain the best values for all the responses, optimization principally embarks upon finding the experimental conditions under which the different responses are most satisfactory overall.⁶,²⁸ In single-response optimization, RSM helps in optimizing the formulation or process by examining the response surface directly. In a multi-objective optimization study, because a certain degree of subjectivity is involved in weighing the relative importance of the various objectives, other methods such as desirability functions are preferred.⁴,²⁸,⁵⁴ When the optimum is outside the experimental region, optimization is conducted through the steepest ascent and optimum path methods. Model-independent methods such as the sequential simplex, which do not involve response surface mapping, can be employed in the absence of concrete knowledge about the design model.⁶ Salient aspects of the suitability of the various optimization approaches are summarized in Table 16.

TABLE 16. Suitability of Various Optimization Methods Under Variegated Situations

Graphical analysis: mathematical model of any order; normally no more than 4 factors; preferably a single response
Desirability function: mathematical model of any order; number of factors between 2 and 6; multiple responses
Steepest ascent: first-order model; optimum outside the domain; single response
Optimum path: second-order model; optimum outside the domain; single response
Sequential simplex: no mathematical model; direct optimization; single or multiple responses

III.F. Step VI: Validation of Optimization Methodology

In an industrial setting, it is highly desirable to have a reliable and stable process or formulation. Validation of the optimization methodology is therefore a crucial step, one that establishes the prognostic ability of the model studied.¹⁵,¹⁹,³¹,¹³⁴ Hence, following data modelization and analysis, the polynomials so generated are tested for their predictive ability. Various new DDS are selected from different regions of the experimental domain, then formulated and evaluated according to the standard operating conditions laid down for the formulations prepared earlier. Such formulations are commonly termed checkpoints or confirmatory runs.¹⁶,³¹,³³ Normally, six to eight runs are adequate, although a minimum of three runs should be conducted to allow an estimate of the variability in that region. The environment and conditions should also be similar to those of the original experiment. The results obtained from these checkpoints are then compared with the predicted ones, and residual analysis is performed. The residuals are also plotted against the observed data to check for any typical pattern, such as ascending or descending lines or cycles. In addition, the model fitness parameters, such as r², r²adj, and PRESS, obtained from the linear plot between the observed and the predicted values, indicate the predictive ability of the model.

III.G. Step VII: Scale-Up and Implementation in Production Cycle

This step is executed only in the industrial milieu, to corroborate DoE performance on the production platform and to ensure that the DDS optimization study conducted is reproducible and robust.⁴,⁶ Afterwards, the results are implemented in a larger-scale production cycle.

IV. COMPUTER USE IN OPTIMIZATION

Although DoE optimization principles can be applied manually using algorithms found in pertinent handbooks, suitable software saves considerable time by performing most of the calculations involved in a DoE exercise.³⁵,¹³⁸ It also eases the complexity surrounding DoE by emphasizing graphical solutions over numerical tables. One of the main reasons for not using DoE has been an apprehension of statistics, which many researchers consider a complication. With the advent of sophisticated computer packages specially designed for DoE, this is no longer an obstacle.³⁸ Although an awareness of several statistical tests is useful for conducting DoE successfully, one need not be a statistician with in-depth knowledge of diverse statistical methodologies.

Computer software has been used at almost every step of the optimization cycle: screening of factors, selection of the design, use of response surface designs, generation of the design matrix, plotting of 3-D response surfaces and 2-D contour plots, robustness testing, application of optimum search methods, interpretation of the results, and, finally, validation of the methodology.¹⁵,²⁵,³⁷,³⁸ In particular, ANN optimization is based entirely upon a computer interface tailor-made for


the purpose.²⁰,¹³⁰,¹³⁵ Many software packages, through helpful wizards, lead the user quite rationally through the various phases of design, analysis, graphing, and optimization, even without a mathematical model or statistical equations in sight. Use of pertinent software can make the DoE optimization task much easier, faster, more elegant, and more economical.⁶,²⁸,³⁵,¹³⁸ Specifically, the erstwhile impossible task of manually generating varied kinds of 3-D response surfaces can be accomplished with phenomenal ease using appropriate software.³⁰,³⁸

IV.A. Choice of Computer Software Package

Many commercial software packages are available that either are dedicated to a set of experimental designs or are of a more general statistical nature with modules for select experimental designs. The dedicated software is frequently considered better, because the user pays only for the DoE capabilities.³⁸ In contrast, the more powerful, comprehensive, and expensive statistical packages, such as MINITAB, SPSS, SAS, BBN, and BMDP, are geared toward larger enterprises, offering diverse facilities for statistical computing, support for networking and client/server communication, and portability across a variety of computer hardware.¹²,³⁸ Because the use of computers is nearly obligatory for implementing an optimization plan, the choice of appropriate software is crucial. Consequently, when selecting an ideal DoE software package, it is vital to look for not only a statistical engine that is fast and accurate, but also the availability of the following features:

• a wide selection of designs for screening and RSM optimization

• the facility to generate the design matrix according to the chosen experimental design

• a choice of suitable model fitness tests and model diagnostic plots

• design evaluation tools that will reveal aliases and other potential pitfalls

• graphic tools displaying rotatable 3-D response surfaces, 2-D contour plots, and effect and interaction plots

• the ability to randomize the order of experimental runs

• a simple, intuitive, and user-friendly graphical user interface (GUI), easy-to-use menu-based operation, thorough context-sensitive help using context-tailored dialogs, and well-formatted output for effortless justification later

• a well-documented software manual with tutorials to get the user off to a quick start

• a spreadsheet module flexible enough for data entry as well as for dealing with missing data and changed factor levels

• a comprehensive glossary of the various terms employed and needed during DoE optimization

• after-sales technical support, online help through answers to various FAQs, and training courses offered by the vendors

Table 17 lists some commonly used computer software packages for DoE optimization, along with select salient features. Today, these dedicated off-the-shelf software packages commonly sell for prices varying widely from $99 to $2500, depending upon the features encompassed in each. However, the actual number of available computer software systems for DoE is much greater, as the field is still growing rapidly.

V. EPILOGUE: CAUTIONS IN DoE OPTIMIZATION

The success of any DoE optimization study depends largely upon the crucial choice of the experimental design and the experimental domain.⁴,¹⁵ An incorrect experimental design can adversely affect the reliability of the prognosis, while an unsuitable experimental range may either miss the optimum or require a greater number of experiments. Simple experimental designs, coupled with rational statistical tools for data analysis, can furnish vast amounts of information about the system under investigation from only small experiments. However, an inappropriate DoE may generate insufficient data from a limited number of experiments, eventually leading only to "half-baked delicacy" formulations. The formulator can, beyond doubt, attain "the best" drug delivery formulation, but only within the experimental domain studied, in which the generated models have been validated. Predictions outside this region are neither advisable nor useful. Furthermore, the DoE approach of mapping the responses and using the model to optimize drug delivery is limited to cases in the optimal region; when this is not so, either the size of the domain has to be reduced or a more complex model must be chosen to explain the phenomenon, leading to increased experimentation. Despite the well-established applications of DoE in predicting responses, its reliability of prognosis can also vary. Keeping in mind the famous adage "garbage in, garbage out" (GIGO), the degree of predictability will depend largely upon the accuracy and extent of the input data and the choice of the appropriate procedures.

TABLE 17. Important Computer Software for Optimization and Their Salient Features

Design Expert: Powerful, comprehensive, and popular package used for optimizing pharmaceutical formulations and processes; allows screening and study of influential variables for FD, FFD, BBD, CCD, PBD, and mixture designs; provides 3-D plots that can be rotated to visualize the response surfaces and 2-D contour maps; numerical and graphical optimization. (www.statease.com)

JMP: Comprehensive DoE software for automated data analysis of various RSM designs, with diverse graphics, help features, and documentation. (www.jmp.com)

FUSION PRO: Powerful, state-of-the-art, user-friendly DoE software for automated data analysis; includes graphic and help features, with facilities for various optimal and nested designs. (www.smatrix.com/fusion-pro.html)

DOE PRO XL & DOE KISS: MS Excel-compatible DoE software for automated data analysis using Taguchi, FD, FFD, and PBD designs, with the difference that DOE KISS is applicable only to a single response and is relatively inexpensive. (www.sigmazone.com)

ECHIP: Used for designing and analyzing optimization experiments. (www.echip.com)

DOE PC IV: Used for designing optimization experiments. (www.adeptscience.co.uk/as/products/qands/qasi/doepciv/)

STATISTICA: ANN-based software based on the GRN technique. (www.statsoftinc.com)

NEMROD: Suitable for FDs and CCDs; has features for numerical optimization and graphic outputs. (www.umt.ciw.uni-karlsruhe.de/22713)

MODDE: Suitable for response surface modeling and evaluation of model fit. (www.umetrics.com)

DOE WISDOM: Supports designs for screening, D-optimal, Taguchi, and user-defined designs; options are also available for Pareto optimality charts. (www.launsby.com)

OPTIMA: Generates the experimental design, fits mathematical equations to the data, and graphically depicts the response surfaces. (www.optimasoftware.co.uk)

XSTAT: Aids in the selection of an experimental design; has modules for numerical optimization and graphic outcomes. (www.amazon.com)

Multisimplex® AB: Aids in optimization based on simplex and D-optimal designs. (www.multisimplex.com)

Cornerstone™: DoE software with features for executing various experimental designs. (www.brooks.com)

COMPACT: Optimization software for systematic DoE and response surface methodology studies, with state-of-the-art mathematical search techniques. (www-fp.mcs.anl.gov/otc/guide/SoftwareGuide/Blurbs/compact.html)

Omega: Only for mixture designs; the only program that supports multi-criterion decision making by Pareto optimality, up to six objectives; has various statistical functions. (www.winomega.com)

iSIGHT: General DoE software with features for implementation of Taguchi designs, CCDs, and FDs. (www.engenious.com/release1_11isightenhance.html)

SOLVER: Optimization software for linear and nonlinear problems with state-of-the-art mathematical programs. (www.solver.com)

MATREX: MS Excel-compatible optimization software with facilities for various experimental designs, including the Taguchi design. (www.rsd-associates.com/matrex.htm)

GRG2: Mathematical optimization program to search for the maximum or minimum of a function, with or without constraints. (www.fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Blurbs?grg2.html)

ANN: artificial neural network; DoE: design of experiments; FD: factorial design; FFD: fractional factorial design; CCD: central composite design; GRN: generalized regression network; PBD: Plackett–Burman design; RSM: response surface methodology.


Regardless of the outstanding benefits of DoE, the experimenter should never consider it either a magic wand or a panacea for all product development problems. In fact, DoE and product knowledge complement each other. A designed product or process tends to enhance product information rather than act as a surrogate for pharmaceutical experience.⁶,¹⁶,²⁸,³³,⁵⁴ At times, the wise scientist even chooses the influential variables through empiricism and observation, bypassing the rigors of screening and factor influence studies. In any case, DoE does not replace the much-needed formulation skills, prudence, and creative artistry, but rather supplements and expedites the formulation development process.

VI. CONCLUSIONS

A product development scientist always operates in a dynamic environment, taking drug delivery challenges in stride time and again. These challenges arise invariably as a result of escalating competition among manufacturers to improve the efficacy and cost-effectiveness of products, rapidly changing compendial and regulatory specifications for drug delivery devices, and increasing quality consciousness among physicians as well as patients. Furthermore, while optimizing such formulations, there are always constraints on time, resources, and materials. Hence, it is important for a pharmaceutical scientist to use effective methodology to develop products in a timely manner without sacrificing quality. However, despite applying the best knowledge, skills, and wisdom to achieve this goal, the outcome is not easily ascertainable: the formulator may either hit the bull's eye quickly or miss the target altogether, even after arduous workouts. Employing DoE optimization from the outset, keeping in view the possible changes that might occur later, can make it much simpler to modify existing formulations and meet redefined objectives. This is because these systematic approaches do not just unravel a true optimum of the objectives, but yield a complete graphic manifestation of, and the mathematical relationships within, the realms of the experimental domain. Such experimental design studies can also be valuable in product and/or process validation and subsequent scale-up operations.

The use of DoE is a leading-edge approach to the optimization and screening of experimental parameters. It has gained acceptance as a pivotal developmental tool in diverse industrial processes. However, its enormous potential has not been fully harvested in drug delivery development, research, and industry. Notwithstanding some reports and publications, we have yet to make the most of this revolutionary practice for routinely optimizing DDS. The major impediment is our traditional stance of sticking to established norms. Because any human mind can easily grasp and implement the simple COST methodologies, they have become well entrenched in our product development system. What is needed today is to persuade our fellow "creatures of habit" to adopt this newer paradigm of DoE methods and challenge the old, weary ones. This shift in paradigm can provide astute insight for future improvements, leading to newer opportunities in the form of next-generation product launches. Short and rapid development cycles are the endeavor of pharmaceutical houses, and becoming part statistician to accomplish this endeavor need not be a frightening prospect. Certainly, espousing new "attitudes" and "aptitudes" can only lead to the attainment of higher "altitudes."

ACKNOWLEDGMENTS

Research grants received from the Council of Scientific and Industrial Research, New Delhi, and M/s Panacea-Biotec Ltd., New Delhi, for funding the research projects on optimization of drug delivery devices are gratefully acknowledged. The authors appreciate the vital help rendered by Mr. Raghavendra Kanti Gupta in the project initiation, and by Dr. (Mrs.) Monika Bakshi, Dr. Ravinder Agarwal, Mr. Rajesha BC, Ms. Sonia Pahuja, and Mr. Madela Ram Babu for their significant help in surveying the literature on DoE optimization.

REFERENCES

1. Lee VHL. Advanced drug delivery reviews: cornerstone in the stimulation and dissemination of innovative drug delivery research. Adv Drug Deliv Rev 2004; 56:1–2.
2. Crommelin DJA, Storm G, Jiskoot W, Stenekes R, Mastrobattista E, Hennink WE. Nanotechnological approaches for the delivery of macromolecules. J Control Release 2003; 87:81–88.
3. Vasir JK, Tambwekar K, Garg S. Bioadhesive microspheres as a controlled drug delivery system. Int J Pharm 2003; 255:13–32.
4. Lewis GA. Optimization methods. In: Swarbrick J, Boylan JC, editors. Encyclopedia of Pharmaceutical Technology. 2nd ed. New York: Marcel Dekker, 2002.
5. Kannan V, Kandarapu R, Garg S. Optimization techniques for the design and development of novel drug delivery systems, part I. Pharm Tech 2003; Feb:74–90.
6. Lewis GA, Mathieu D, Phan-Tan-Luu R. Pharmaceutical Experimental Design. 1st ed. New York: Marcel Dekker, 1999.
7. Montgomery DC. Design and Analysis of Experiments. 5th ed. New York: Wiley, 2001.
8. Araujo PW, Brereton RG. Experimental design II. Optimization. Trends in Anal Chem 1996; 15:63–70.


9. Tye H. Application of statistical "design of experiments" methods in drug discovery. Drug Discov Today 2004; 9:485–491.
10. Shekh E, Ghani M, Jones RE. Simplex search in optimization of capsule formulation. J Pharm Sci 1980; 69:1135–1142.
11. Fonner DE, Buck JR, Banker GS. Mathematical optimization techniques in drug product design and process analysis. J Pharm Sci 1970; 59:1587–1596.
12. Bolhuis GK, Duineveld CAA, de Boer JH, Coenegracht PMJ. Simultaneous optimization of multiple criteria in tablet formulation: Part I. Pharm Tech 1995; Jun:42–50.
13. Huisman R, van Kamp HV, Weyland JW, Doornbos DA, Bolhuis GK, Lerk CF. Development and optimization of pharmaceutical formulations using a simplex lattice design. Pharm Week Sci 1984; 6:185–194.
14. Trygg J, Wold S. Introduction to statistical experimental design: what is it? Why and where is it useful? Editorial page, 2002, www.acc.umu.se/~tnkjtg/Chemometrics/Editorial, accessed August 30, 2004.
15. Singh B, Gupta RK, Ahuja N. Computer-assisted optimization of pharmaceutical formulations. In: Jain NK, editor. Pharmaceutical Product Development. New Delhi: CBS Publishers, 2004; in press.
16. Schwartz JB, Connor RE. Optimization techniques in pharmaceutical formulation and processing. In: Banker GS, Rhodes CT, editors. Modern Pharmaceutics. 3rd ed. New York: Marcel Dekker, 1996.
17. Banker GS, Anderson NR. Tablets. In: Lachman L, Lieberman HA, Kanig JL, editors. The Theory and Practice of Industrial Pharmacy. 3rd ed. Bombay: Varghese Publishing House, 1987.
18. Singh B, Ahuja N. Development of controlled-release buccoadhesive hydrophilic matrices of diltiazem hydrochloride: optimization of bioadhesion, dissolution, and diffusion parameters. Drug Dev Ind Pharm 2002; 28:433–444.
19. Singh B, Ahuja N. Response surface optimization of drug delivery system. In: Jain NK, editor. Progress in Controlled and Novel Drug Delivery Systems. 1st ed. New Delhi: CBS Publishers, 2004.
20. Colbourn EA, Rowe RC. Neural computing boosts formulation productivity. Pharm Tech IT Innov 2003; (Suppl):22–25.
21. Podczeck F. The development and optimization of tablet formulations using mathematical methods. In: Banker GS, Rhodes CT, editors. Modern Pharmaceutics. New York: Marcel Dekker, 1996.
22. Armstrong NA, James KC. Understanding Experimental Designs and Interpretation in Pharmaceutics. London: Ellis Horwood, 1990.
23. Doornbos DA. Optimization in pharmaceutical sciences. Pharm Week Sci 1981; 3:33–61.
24. Fisher RA. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd, 1935.
25. Schwartz J, Flamholz J, Press R. Computer optimization of pharmaceutical formulations I: general procedure. J Pharm Sci 1973; 62:1165–1170.
26. Haaland PD. Experimental Design in Biotechnology. New York: Marcel Dekker, 1989.


27. Fisher RA. Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd, 1925.
28. Doornbos DA, de Haan P. Optimization techniques in formulation and processing. In: Swarbrick J, Boylan JC, editors. Encyclopedia of Pharmaceutical Technology. New York: Marcel Dekker, 1995.
29. Kettaneh-Wold N. Use of experimental design in the pharmaceutical industry. J Pharm Biomed Anal 1991; 9:605–610.
30. Porter SC, Verseput RP, Cunningham CR. Process optimization using design of experiments. Pharm Tech 1997; October:1–7.
31. Anonymous. NIST/SEMATECH e-Handbook of Statistical Methods. www.itl.nist.gov/div898/handbook/, 2002.
32. Bolhuis GK, Duineveld CAA, de Boer JH, Coenegracht PMJ. Simultaneous optimization of multiple criteria in tablet formulation: Part II. Pharm Tech 1995; September:42–51.
33. Bolton S. Optimization techniques. In: Pharmaceutical Statistics: Practical and Clinical Applications. 3rd ed. New York: Marcel Dekker, 1997.
34. Cochran WC, Cox GM. Experimental Design. 2nd ed. New York: Wiley, 1992.
35. Down GRB, Miller RA, Chopra SK, Millar JF. Use of a desktop computer in a search-type optimization of tablet formulations. Drug Dev Ind Pharm 1980; 6:311–330.
36. Bohidar NR, Restaino FA, Schwartz JB. Selecting key pharmaceutical formulation factors by regression analysis. Drug Dev Ind Pharm 1979; 5:175–216.
37. Schwartz JB, Flamholz JR, Press RH. Computer optimization of pharmaceutical formulations II: application in troubleshooting. J Pharm Sci 1973; 62:1518–1519.
38. Potter CD. Experiment design software: better data, less work. The Scientist 1994; 8:18–20.
39. Stetsko G. Statistical experimental design and its application to pharmaceutical development problems. Drug Dev Ind Pharm 1986; 12:1109–1123.
40. Bolton S. Factorial designs. In: Pharmaceutical Statistics: Practical and Clinical Applications. 3rd ed. New York: Marcel Dekker, 1997.
41. Anderson M, Kraber S, Hansel H, Klick S, Beckenbach R, Cianca-Betancourt H. Design Expert® Software Version 6 User's Guide. MN: Statease Inc., 2002.
42. Box GEP, Connor LR, Cousins WR, Davies OL, Himsworth FR, Sillitto GP, editors. The Design and Analysis of Industrial Experiments. 2nd ed. London: Oliver and Boyd, 1960.
43. Taguchi G. System of Experimental Designs. New York: UNIPUB/Krauss International, 1987.
44. Stack CB. Confounding and interaction. In: Chow S-C, editor. Encyclopedia of Biopharmaceutical Statistics. New York: Marcel Dekker, 2003.
45. Myers RH, Montgomery DC. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: Wiley, 1995.
46. Box GEP, Draper NR. Empirical Model-Building and Response Surfaces. 1st ed. New York: Wiley, 1987.
47. Das MN, Giri NC. Design and Analysis of Experiments. 2nd ed. New Delhi: Wiley Eastern Limited, New Age International Limited, 1994.


48. Singh B, Agarwal R. Design, development and optimization of controlled release microcapsules of diltiazem hydrochloride. Indian J Pharm Sci 2002; 64:378–385.
49. Wehrlé P, Stamm A. Statistical tools for process control and quality improvement in the pharmaceutical industry. Drug Dev Ind Pharm 1994; 20:141–164.
50. Loukas YL. A 2^(k-p) fractional factorial design via fold over: application to optimization of novel multicomponent vesicular system. Analyst 1997; 122:1023–1027.
51. Korsmeyer RW, Gurny R, Doelker E, Buri P, Peppas NA. Mechanisms of solute release from porous hydrophilic polymers. Int J Pharm 1983; 15:25–35.
52. Singh B, Chakkal SK, Ahuja N. Computer-aided design, development and optimization of controlled release mucoadhesive formulations of atenolol. In: Proceedings of National Seminar on Pharmaceutics in the Light of Drug Delivery Challenges; Chandigarh, India, 2003.
53. Box GEP, Hunter WG, Hunter JS. Statistics for Experimenters. New York: Wiley, 1978.
54. Myers WR. Response surface methodology. In: Chow S-C, editor. Encyclopedia of Biopharmaceutical Statistics. New York: Marcel Dekker, 2003.
55. Kannan V, Kandarapu R, Garg S. Optimization techniques for the design and development of novel drug delivery systems, Part II. Pharm Tech 2003; March:102–118.
56. Murphy JR. Screening design. In: Chow S-C, editor. Encyclopedia of Biopharmaceutical Statistics. New York: Marcel Dekker, 2003.
57. Anonymous. JMP® Design of Experiments, Version 5 User's Guide. Cary, NC: SAS Institute, 2002.
58. Abu-Izza K, Garcia-Contreras L, Lu DR. Preparation and evaluation of sustained release AZT-loaded microspheres: optimization of the release characteristics using response surface methodology. J Pharm Sci 1996; 85:144–149.
59. Wehrlé P, Nobelis P, Cuiné A, Stamm A. Response surface methodology: an interesting statistical tool for process optimization and validation: example of wet granulation in a high-shear mixer. Drug Dev Ind Pharm 1993; 19:1637–1653.
60. Li J. Factorial designs. In: Chow S-C, editor. Encyclopedia of Biopharmaceutical Statistics. New York: Marcel Dekker, 2003.
61. Acikgöz M, Kas H, Orman M, Hincal A. Chitosan microspheres of diclofenac sodium: I. Application of factorial design and evaluation of release kinetics. J Microencapsul 1996; 13:141–160.
62. Korakianiti ES, Rekkas DM, Dallas PP, Choulis NH. Optimization of the pelletization process in a fluid-bed rotor granulator using experimental design. AAPS PharmSciTech 2000; 1: article 35, 1–5.
63. Loukas YL. Computer-based expert system designs and analyzes a 2^(k-p) fractional factorial design for the formulation optimization of novel multicomponent liposomes. J Pharm Biomed Anal 1998; 17:133–140.
64. Eddington ND, Ashraf M, Augsburger LL, Leslie JL, Fossler MJ, Lesko LJ, Shah VP, Rekhi GS. Identification of formulation and manufacturing variables that influence in vitro dissolution and in vivo bioavailability of propranolol hydrochloride tablets. Pharm Dev Technol 1998; 3:535–547.


65. Plackett RL, Burman JP. The design of optimum multifactorial experiments. Biometrika 1946; 33:305–325.
66. Loukas YL. A Plackett-Burman screening design directs the efficient formulation of multicomponent DRV liposomes. J Pharm Biomed Anal 2001; 26:255–263.
67. Pena Romero A, Costa JB, Castel-Marteaux I, Chulia D. Statistical optimization of a controlled release formulation obtained by a double compression process: application of a Hadamard matrix and a factorial design. Drug Dev Ind Pharm 1989; 15:2419–2440.
68. Ozil P, Rochat MH. Experimental design, an efficient tool for studying the stability of parenteral nutrition. Int J Pharm 1988; 42:11–14.
69. Li JZ, Rekhi GS, Augsburger LL, Shangraw RF. The role of intra- and extragranular microcrystalline cellulose in tablet dissolution. Pharm Dev Technol 1996; 1:343–355.
70. Box GEP, Wilson KB. On the experimental attainment of optimum conditions. J Royal Stat Soc Ser B 1951; 13:1–45.
71. Box GEP, Behnken DW. Some new three-level designs for the study of quantitative variables. Technometrics 1960; 2:455–475.
72. Bodea A, Leucuta SE. Optimization of propranolol hydrochloride sustained release pellets using Box-Behnken design and desirability function. Drug Dev Ind Pharm 1998; 24:145–155.
73. Westerhuis JA, Coenegracht PMJ. Multivariate modelling of the pharmaceutical two-step process of wet granulation and tabletting with multiblock partial least squares. J Chemometrics 1997; 11:372–392.
74. Bodea A, Leucuta SE. Optimization of hydrophilic matrix tablet using a D-optimal design. Int J Pharm 1997; 153:247–255.
75. Fassihi R, Fabian J, Sakr AM. Application of response surface methodology to design optimization in formulation of a typical controlled release system. Drugs Made in Germany 1996; 39:122–126.
76. Lin AY, Muhammad NA, Pope D, Augsburger LL. A study of the effects of curing and storage conditions on controlled release diphenhydramine HCl pellets coated with Eudragit NE30D. Pharm Dev Technol 2003; 8:277–287.
77. Shah RD, Kabadi M, Pope DG, Augsburger LL. Physico-mechanical characterization of the extrusion-spheronization process. Part 2. Rheological determinants for successful extrusion and spheronization. Pharm Res 1995; 12:496–507.
78. Kramar A, Turk S, Vrecer F. Statistical optimization of diclofenac sustained release pellets coated with polymethacrylic films. Int J Pharm 2003; 256:43–52.
79. Podczeck F. The development and optimization of tablet formulations using mathematical methods. In: Alderborn G, Nystrom C, editors. Pharmaceutical Powder Compaction Technology. New York: Marcel Dekker, 1995.
80. Pinto JF, Podczeck F, Newton JM. Investigations of tablets prepared from pellets produced by extrusion and spheronisation. II. Modelling the properties of the tablets produced using regression analysis. Int J Pharm 1997; 152:7–16.
81. Chatchawalsaisin J, Podczeck F, Newton JM. The influence of chitosan and sodium alginate and formulation variables on the formation and drug release from pellets prepared by extrusion/spheronisation. Int J Pharm 2004; 275:41–60.
82. Doehlert DH. Uniform shell designs. Appl Stat 1970; 19:231–239.


83. Vojnovic D, Moneghini M, Rubessa F. Experimental design for a granulation process with "a priori" criterias. Drug Dev Ind Pharm 1995; 21:823–831.
84. Snee RD. Design and analysis of mixture experiments. J Qual Technol 1971; 3:159–169.
85. Cornell JA. Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data. 2nd ed. New York: Wiley, 1990.
86. Cornell JA. Experiments with mixtures: an update and bibliography. Technometrics 1979; 21:95–106.
87. van Kamp HV, Bolhuis GK, Lerk CF. Optimization of a formulation for direct compression using a simplex lattice design. Pharm Weekbl Sci 1987; 9:265–273.
88. Scheffé H. Experiments with mixtures. J Royal Stat Soc Ser B 1958; 20:344–360.
89. Ferrari F, Bertoni M, Bonferoni CM, Rossi S, Caramella C. Dissolution enhancement of an insoluble drug by physical mixture with a superdisintegrant: optimization with a simplex lattice design. Pharm Dev Technol 1996; 1:159–164.
90. Hirata M, Takayama K, Nagai T. Formulation optimization of sustained release tablet of chlorpheniramine maleate by means of extreme vertices design and simultaneous optimization technique. Chem Pharm Bull 1992; 40:741–746.
91. Campisi B, Chicco D, Vojnovic D, Phan-Tan-Luu R. Experimental design for a pharmaceutical formulation: optimization and robustness. J Pharm Biomed Anal 1998; 18:57–65.
92. Taguchi G. Introduction to Quality Engineering. White Plains, NY: UNIPUB/Kraus International, 1986.
93. Wehrlé P, Palmieri GF, Stamm A. The Taguchi's performance statistic to optimize theophylline beads production in a high-speed granulator. Drug Dev Ind Pharm 1994; 20:2823–2843.
94. Yang SC, Zhu JB. Preparation and characterization of camptothecin solid lipid nanoparticles. Drug Dev Ind Pharm 2002; 28:265–274.
95. Palmieri GF, Wehrlé P. Evaluation of ethylcellulose-coated pellets optimized using the approach of Taguchi. Drug Dev Ind Pharm 1997; 23:1069–1077.
96. Lewis GA, Chariot M. Non-classical experimental designs in pharmaceutical formulations. Drug Dev Ind Pharm 1991; 17:1551–1570.
97. de Aguiar PF, Bourguignon B, Khots MS, Massart DL, Phan Tan Luu R. D-optimal designs. Chemom Intell Lab Syst 1995; 30:199–210.
98. Chariot M, Lewis GA, Mathieu D, Phan Tan Luu R, Stevens HNE. Experimental design for pharmaceutical process characterisation and optimization using an exchange algorithm. Drug Dev Ind Pharm 1988; 14:2535–2556.
99. Rechtschaffner RL. Saturated fractions of 2^n and 3^n factorial designs. Technometrics 1967; 9:569–575.
100. Nielloud F, Mestres JP, Fortune R, Draussin S, Marti-Mestres G. Formulation of oil-in-water submicron emulsions in the dermatological field using experimental design. Polymer Int 2003; 52:610–613.
101. Khan MA, Karnachi AA, Singh SK, Sastry SV, Kislalioglu SM, Bolton S. Controlled release coprecipitates: formulation considerations. J Control Release 1995; 37:131–141.
102. Zhou F, Vervaet C, Massart DL, Massart B, Remon JP. Optimization of the processing of matrix pellets based on the combination of waxes and starch using experimental design. Drug Dev Ind Pharm 1998; 24:353–358.


103. Lindberg N-O, Lundstedt T. Application of multivariate analysis in pharmaceutical development. Drug Dev Ind Pharm 1995; 21:987–1007.
104. Bolton S. Linear regression and correlation. In: Pharmaceutical Statistics: Practical and Clinical Applications. 3rd ed. New York: Marcel Dekker, 1997.
105. Myers RH. Classical and Modern Regression with Applications. Boston: PWS-KENT Publishing, 1990.
106. Pattarino F, Marengo E, Gasco MR, Carpignano R. Experimental design and partial least squares in the study of complex mixtures: microemulsions as drug carriers. Int J Pharm 1993; 91:157–165.
107. Bohidar NR, Restaino FA, Schwartz JB. Selecting key parameters in pharmaceutical formulations by principal component analysis. J Pharm Sci 1975; 64:966–969.
108. Benkerrour L, Duchene D, Puisieux F, Maccario J. Granule and tablet formulae study by principal component analysis. Int J Pharm 1984; 19:27–34.
109. Liu A, Schisterman EF. Principal component analysis. In: Chow S-C, editor. Encyclopedia of Biopharmaceutical Statistics. New York: Marcel Dekker, 2004.
110. Bolton S. Analysis of variance. In: Pharmaceutical Statistics: Practical and Clinical Applications. 3rd ed. New York: Marcel Dekker, 1997.
111. Bolton S. Statistical applications in the pharmaceutical sciences. In: Lachman L, Lieberman HA, Kanig JL, editors. The Theory and Practice of Industrial Pharmacy. 3rd ed. Bombay: Varghese Publishing House, 1987.
112. Singh B, Mehta G, Kumar R, Bhatia A, Ahuja N, Katare OP. Design, development and optimization of nimesulide-loaded liposomal systems for topical application. Curr Drug Deliv 2005. [In press.]
113. Bolton S. Transformations and outliers. In: Pharmaceutical Statistics: Practical and Clinical Applications. 3rd ed. New York: Marcel Dekker, 1997.
114. Cook RD. Detection of influential observations in linear regression. Technometrics 1977; 19:15–18.
115. Box GEP, Cox DR. An analysis of transformations. J Royal Stat Soc Ser B 1964; 26:211–243.
116. Takayama K, Nagai T. Novel computer optimization methodology for pharmaceutical formulations investigated by using sustained release granules of indomethacin. Chem Pharm Bull 1989; 37:160–167.
117. Senderak E, Bonsignore H, Mungan D. Response surface method as an approach to optimization of an oral solution. Drug Dev Ind Pharm 1993; 19:405–424.
118. Law MFL, Deasy PB. Use of canonical and other analyses for the optimization of an extrusion-spheronization process for indomethacin. Int J Pharm 1997; 146:1–9.
119. Gonzalez AG. Optimization of pharmaceutical formulations based on surface-response experimental designs. Int J Pharm 1993; 97:149–159.
120. Shah S, Morris J, Sulaiman A, Farhadieh B, Truelove J. Development of misoprostol 3-hour controlled release formulations using response surface methodology. Drug Dev Ind Pharm 1992; 18:1079–1098.
121. Bohidar NR, Bavitz JF, Shiromani PK. Formula optimization for a multiple potency system with uniform tablet weight. Drug Dev Ind Pharm 1986; 12:1503–1510.

122. Lin K, Peck GE. Development of agglomerated talc. Part 2. Optimization of the processing parameters for the preparation of granulated talc. Drug Dev Ind Pharm 1995; 21:159–173.

123. Westerhuis JA, de Haan P, Zwinkels J, Jansen WT, Coenegracht PMJ, Lerk CF. Optimization of the composition and production of mannitol/microcrystalline cellulose tablets. Int J Pharm 1996; 143:151–162.
124. Bouckaert S, Massart DL, Massart B, Remon JP. Optimization of a granulation procedure for a hydrophilic matrix tablet using experimental design. Drug Dev Ind Pharm 1996; 22:321–327.
125. Derringer G, Suich R. Simultaneous optimization of several response variables. J Qual Technol 1980; 12:214–219.
126. Takayama K, Imaizumi H, Nambu N, Nagai T. Mathematical optimization of formulation of indomethacin/polyvinylpyrrolidone/methyl cellulose solid dispersions by the sequential unconstrained minimization technique. Chem Pharm Bull 1985; 33:292–300.
127. Fiacco AV, McCormick GP. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: Wiley, 1968.
128. Lipp R, Heimann G. Statistical approach to optimization of drying conditions for a transdermal delivery system. Drug Dev Ind Pharm 1996; 22:343–348.
129. Schwartz JB. Optimization techniques in product formulation. J Soc Cosmet Chem 1981; 32:287–301.
130. Takayama K, Fujikawa M, Obata Y, Morishita M. Neural network based optimization of drug formulations. Adv Drug Deliv Rev 2003; 55:1217–1231.
131. Takayama K, Takahara J, Fujikawa M, Ichikawa H, Nagai T. Formula optimization based on artificial neural networks in transdermal drug delivery. J Control Release 1999; 62:161–170.
132. Achanta AS, Kowalski JG, Rhodes CT. Artificial neural networks: implications for pharmaceutical sciences. Drug Dev Ind Pharm 1995; 21:119–155.
133. Sun Y, Peng Y, Chen Y, Shukla AJ. Application of artificial neural networks in the design of controlled release drug delivery systems. Adv Drug Deliv Rev 2003; 55:1201–1215.
134. Kuppuswamy R, Anderson SR, Hoag SW, Augsburger LL. Practical limitations of tableting indices. Pharm Dev Technol 2001; 6:505–520.
135. Bourquin J, Schmidli H, van Hoogevest P, Leuenberger H. Application of artificial neural networks (ANN) in the development of solid dosage forms. Pharm Dev Technol 1997; 2:111–121.
136. Zupancic Božic D, Vrecer F, Kozjek F. Optimization of diclofenac sodium dissolution from sustained release formulations using an artificial neural network. Eur J Pharm Sci 1997; 5:163–169.
137. So S-S, Karplus M. Evolutionary optimization in quantitative structure-activity relationship: an application of genetic neural networks. J Med Chem 1996; 39:1521–1530.
138. Singh B. Computer-aided education in pharmaceutical sciences. Ind J Pharm Educ 1997; 31:93–102.
