Provisioning of deadline-driven requests with flexible transmission rates in WDM mesh networks



IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010 353

Provisioning of Deadline-Driven Requests With Flexible Transmission Rates in WDM Mesh Networks

Dragos Andrei, Student Member, IEEE, Massimo Tornatore, Member, IEEE, Marwan Batayneh, Student Member, IEEE, Charles U. Martel, and Biswanath Mukherjee, Fellow, IEEE

Abstract—With the increasing diversity of applications supported over optical networks, new service guarantees must be offered to network customers. Among the emerging data-intensive applications are those which require their data to be transferred before a predefined deadline. We call these deadline-driven requests (DDRs). In such applications, data-transfer finish time (which must occur before the deadline) is the key service guarantee that the customer wants. In fact, the amount of bandwidth allocated to transfer a request is not a concern for the customer as long as its service deadline is met. Hence, the service provider can choose the bandwidth (transmission rate) to provision the request. In this case, even though DDRs impose a deadline constraint, they provide scheduling flexibility for the service provider since it can choose the transmission rate while achieving two objectives: 1) satisfying the guaranteed deadline; and 2) optimizing the network's resource utilization. We investigate the problem of provisioning DDRs with flexible transmission rates in wavelength-division multiplexing (WDM) mesh networks, although this approach is generalizable to other networks also. We investigate several (fixed and adaptive to network state) bandwidth-allocation policies and study the benefit of allowing dynamic bandwidth adjustment, which is found to generally improve network performance. We show that the performance of the bandwidth-allocation algorithms depends on the DDR traffic distribution and on the node architecture and its parameters. In addition, we develop a mathematical formulation for our problem as a mixed integer linear program (MILP), which allows choosing flexible transmission rates and provides a lower bound for our provisioning algorithms.

Index Terms—Bandwidth-on-demand, deadline-driven request (DDR), flexible transmission rate, large data transfers, wavelength-division multiplexing (WDM) network.

I. INTRODUCTION

TODAY, telecom networks are experiencing a large increase in the bandwidth needed by their users as well as in the diversity of the services they must support.

Manuscript received January 20, 2009; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor A. Somani. First published October 30, 2009; current version published April 16, 2010. This work was supported by the National Science Foundation (NSF) under Grant CNS-06-27081. Preliminary versions of this work were presented at the Optical Fiber Communications Conference (OFC), February 2008, and at the IEEE International Conference on Communications (ICC), May 2008.

D. Andrei, M. Tornatore, C. U. Martel, and B. Mukherjee are with the Department of Computer Science, University of California, Davis, CA 95616 USA (e-mail: [email protected]; [email protected]; [email protected]).

M. Batayneh is with Integrated Photonics Technology (IPITEK) Inc., Carlsbad, CA 92008 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNET.2009.2026576

There are many bandwidth-hungry applications (ranging from grid computing and eScience applications to consumer applications, e.g., IPTV or video-on-demand) that require flexible bandwidth reservation and need strict quality-of-service (QoS) guarantees. The requirements of these new applications—large bandwidth, dynamism, and flexibility—can be well accommodated by optical networks using wavelength-division multiplexing (WDM) [1], particularly when they have reconfigurability capabilities. Reconfigurability can be provided by agile optical crossconnects (OXCs) and by control plane protocols such as ASON/GMPLS, which are designed to handle the automatic and dynamic provisioning of lightpaths [2]. A new switching paradigm suitable for such emerging on-demand data-intensive applications is dynamic circuit switching (DCS) [3] (based on the mature technology of Optical Circuit Switching), which can efficiently handle the "bursty traffic" generated by these applications, and transport it over high-capacity circuits (which can be sublambda granularity circuits or lightpaths) established dynamically over the WDM network backbone [3]; we consider DCS as the switching technology employed in this study.

A new class of network services that may need on-demand flexible bandwidth allocation are deadline-driven applications, which require the transfer of large amounts of data before a predefined deadline. Such deadline-driven applications occur especially in the fields of eScience and high-end grid computing [4]. Let us consider a remote visualization application [4], [5] that requires the transfer of a large dataset (which could contain scientific data obtained from high-energy particle physics experiments, astronomical telescopes, medical instruments, etc.) from a remote location. Since the remote visualization (which may use costly computing resources) cannot start before all its input data is transferred, the customer can advertise his preferred visualization start time (i.e., the deadline for the large data transfer) in advance [4]–[6]. In general, any application that needs coordinated use of several resources (with a strict dependency workflow) can benefit from being deadline-aware [5]. Such deterministic reservation of resources is essential in the case of high-performance computing [4], [6]. The works in [4], [6], and [7] consider deadlines as QoS parameters for data-intensive applications.

Deadline-driven applications may have diverse bandwidth and deadline requirements. For example, real-time applications, such as large bulk data transfers or stock market information exchange applications, require immediate service, while database/server backup applications may require a large bandwidth, but not necessarily immediately, thus having looser deadlines. The possibility to use different transmission rates to serve an application, combined with the deadline requirements, creates different scenarios by which the service provider can serve these



deadline-driven requests (DDRs). This flexibility creates opportunities for the service provider to enhance network performance while still meeting the customer's requirements (i.e., deadlines).

Consider a scenario in which a user needs to transfer a large data file, e.g., a 10-GB (80 Gb) file. Without counting propagation delays, the transfer could finish in 8 s if it is offered a 10-Gbps channel, or in 80 s if a 1-Gbps bandwidth pipe is provided. The user states his preferred deadline to the service provider. If the user can tolerate a maximum transfer time of 80 s, the service provider can allocate either of the two transmission rates (1 or 10 Gbps), as the deadline is met in both cases. Since the service provider's objective is to assign the transmission rate so that the network's bandwidth is utilized efficiently, the question is how to determine this rate considering the network's state.
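The arithmetic behind this example is simple enough to state as a small sketch. The helper names below are hypothetical and used only for illustration; the sketch assumes propagation delay is ignored, as in the example above.

```python
def min_rate_gbps(file_size_gb, deadline_s):
    """Minimum transmission rate (Gbps) that still meets the deadline.

    file_size_gb is the file size in gigabytes; 1 GB = 8 Gb.
    Propagation delay is ignored, as in the example above.
    """
    return 8.0 * file_size_gb / deadline_s


def feasible_rates(file_size_gb, deadline_s, offered_rates_gbps):
    """Return the offered rates that finish the transfer before the deadline."""
    b_min = min_rate_gbps(file_size_gb, deadline_s)
    return [r for r in offered_rates_gbps if r >= b_min]


# 10-GB file, 80-s deadline: every rate of at least 1 Gbps meets the deadline.
print(feasible_rates(10, 80, [0.1, 0.5, 1, 2.5, 5, 10]))   # [1, 2.5, 5, 10]
```

Any rate in the returned set meets the customer's deadline; which one the service provider should actually pick, given the network state, is the subject of this paper.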

In this paper, we investigate the problem of DDR provisioning in WDM networks by exploiting the opportunity of allocating flexible bandwidth to the requests. Hence, our work uses traffic grooming [8], [9] (which is the problem of efficiently aggregating a set of low-speed requests onto high-capacity channels). However, in contrast to traditional traffic grooming (presented later in this section), in which requests have predefined bandwidth requirements (e.g., OC-48 traffic over OC-192 channels), in our study, the combination of file sizes, deadlines, and network state determines the bandwidth that will be allocated to the incoming request, and our algorithms try to improve network performance while satisfying the DDR's deadline.

Since the performance of a WDM network also depends on the type of node architecture used, we study the problem using the two dominant OXC architectures in WDM networks: opaque and hybrid, which are characterized by complete and partial optical–electronic–optical (OEO) conversion, respectively. We first propose routing and wavelength assignment algorithms, suitable for our DDR-oriented problem, and then allocate bandwidth to DDRs. Our bandwidth-allocation policies are: 1) fixed, which use the same predefined policy for all the requests; 2) adaptive, which consider network state when allocating bandwidth; and 3) with changing rates, in which the algorithms allow readjustment of the transmission rate for ongoing DDRs. We also provide a mixed integer linear programming (MILP) formulation for our problem. Our results show that the node architectures and their parameters, as well as the DDR bandwidth distributions, significantly impact the performance of our algorithms; that the changing-rates algorithms usually improve over the performance of our other provisioning approaches; and that provisioning DDRs in an opaque network generally accepts more service than in a hybrid network.

The important problem of traffic grooming is well researched in the literature. Works in [10] and [11] consider the traffic grooming problem in WDM networks under a static traffic scenario. Traffic grooming is also considered in a dynamic environment: The work in [12] proposes a graph model for dynamic traffic grooming, while in [13], the authors study the performance of optical grooming switches in a dynamic environment. In [14], it is shown that, by using the requests' holding time information, the performance of dynamic traffic grooming can be improved.

Accommodating guaranteed network services in WDM networks under various traffic models has also been addressed in the literature. In the scheduled traffic model [15], [16], lightpath demands with predefined knowledge of set-up and tear-down times, called scheduled lightpath demands (SLDs), are considered. The focus in [15] and [16] is to encourage the reuse of network resources by time-disjoint demands. An extension of the scheduled traffic model is the sliding scheduled traffic model [17], [18], where requests of a fixed holding time are allowed to slide in a predefined time window. In [19], the authors consider an approach for provisioning dynamic sliding scheduled requests. The work in [20] designs routing and wavelength assignment (RWA) [21] algorithms for accommodating advance reservations in WDM networks. This work considers three flavors of advance reservations: 1) with specified start time and duration (a similar concept to SLD); 2) with specified start time and unspecified duration; and 3) with specified duration and unspecified start time. The work in [6] designs a deadline-aware scheduling scheme for resource reservation in lambda grids; however, in contrast with our work, it does not consider the routing or the possibility of grooming the requests. In [22], the authors devise heuristics and an ILP solution for accommodating large data transfers in lambda grids, while in [23], an approach that provisions data-aggregation requests over WDM networks is proposed. Considering related works from the general computer science field, our problem has common features with fundamental resource-allocation problems (such as the dynamic storage allocation problem [24]); however, these efficient resource-allocation algorithms cannot be directly utilized for our DDR-oriented problem, in which the resources (wavelength-links) over which bandwidth allocation is done are not independent, but interrelated through a mesh connectivity. In addition, our problem addresses the complexity of flexible rate allocation.

To the best of our knowledge, our study is the first to investigate the importance of allowing flexible transmission rates when provisioning DDRs in WDM optical networks. Today, this flexibility in the choice of the bit rate to support large data transfers can be achieved by using reconfigurable optical add-drop multiplexers (ROADMs), which are able to accommodate multi-granularity traffic. Thanks to the flexible transmission rates, service providers now have the opportunity to improve network resource utilization (and network cost), while still meeting their customers' deadlines.

The rest of the paper is organized as follows. Section II presents the problem and node architectures. Section III presents the RWA algorithms, the bandwidth-allocation schemes, the MILP formulation, and the changing-rates algorithms. In Section IV, we discuss illustrative numerical results. Finally, Section V concludes the paper.

II. PROBLEM DESCRIPTION

In this section, we formally describe the characteristics of our DDR provisioning problem. We are given a WDM mesh network, with its physical topology represented by a graph $G = (V, E, W)$, where $V$ is the set of nodes, $E$ is the set of fiber links, and $W$ is the set of wavelengths on each link. Each physical link has $|W|$ wavelengths, each of capacity $C$ (e.g., 10 Gbps).


Fig. 1. Node architectures. (a) Hybrid architecture. (b) Opaque architecture.

We need to provision dynamically arriving DDRs. Each incoming DDR $R$ is defined by the tuple

$$R = (s, d, t_A, F, D) \qquad (1)$$

where $s$ is the source node, $d$ the destination node, $t_A$ the arrival time of $R$, $F$ the size of the large file to be transferred, and $D$ the deadline of $R$, specified by the network customer, which is defined as the difference between the maximum time when the file must be fully transferred and the arrival time $t_A$.

DDR $R$ is considered provisioned if we can choose, as the bandwidth allocated to $R$, a transmission rate $B_R$ such that

$$B_{\min} \le B_R \le B_{\max} \qquad (2)$$

where $B_{\min}$ is the minimum required rate to meet the deadline of the request, i.e., $B_{\min} = F/D$. Note that, for the large file sizes considered here, propagation delays (on the order of tens of ms for the typical backbone mesh network) are negligible compared to the large transmission time. In addition, the rate $B_R$ allocated to the request cannot exceed $B_{\max}$ (which in our problem is a wavelength's capacity $C$). Thus, the holding time of $R$ ranges between $F/B_{\max}$, obtained when $B_R = B_{\max}$, and $D$, obtained when $B_R = B_{\min}$.

To provision a DDR $R$, we need to do the following:

• Find a route $p$ and assign wavelengths to $R$, so that there is enough bandwidth on $p$ to meet $R$'s deadline $D$.

• Determine a specific transmission rate $B_R$ [from the bandwidth range in (2)], which will be allocated to $R$, with the objective stated below. To choose this specific rate $B_R$, we may use one of the following types of bandwidth-allocation algorithms (presented in detail in Sections III-C and E):

— Fixed allocation: Allocate a fixed amount of bandwidth to $R$, depending on its $B_{\min}$. We choose to allocate either the maximum end-to-end available bandwidth on $R$'s chosen path or the minimum bandwidth required to meet $R$'s deadline.

— Adaptive allocation: Use network-state information to improve the performance of the bandwidth-allocation algorithm. A first policy, simply called "Adaptive," uses link-congestion information to determine which fixed allocation policy to use. A second policy, called "Proportional," allocates bandwidth to each request proportionally to its $B_{\min}$.

— Changing Rates: Allow the transmission rate of existing requests to change over time to accommodate new requests that cannot be provisioned otherwise.

The objective of our DDR provisioning algorithms is to satisfy the current request while retaining maximum resources unused to accommodate future traffic.

A. Node Architectures

We briefly present the two typical node architectures for WDM mesh networks for which we design our algorithms.

Fig. 1(a) shows the hybrid switch architecture. This architecture has two components: 1) an OXC, which can optically bypass the incoming lightpaths; and 2) an electronic switch (e.g., an IP router), where lightpaths can be initiated/terminated. Notice that if a lightpath simply bypasses the node optically (through the OXC), it must use the same wavelength (the add/drop labels in Fig. 1 refer to wavelength add/drop). Fig. 1(b) presents the opaque switch architecture, which performs full OEO conversion. Here, a lightpath is first demultiplexed to the lowest electronic port speed granularity, while electronic signals are multiplexed to outgoing lightpaths. Therefore, opaque OXCs can perform wavelength conversion.

III. PROVISIONING OF DEADLINE-DRIVEN REQUESTS

A. DDR RWA for Hybrid Architecture

Upon arrival of a DDR $R$, we search for a route $p$ with at least $B_{\min}$ unused bandwidth and for a feasible wavelength assignment. If a valid RWA for $R$ is found, we provision $R$. Path $p$ can span one or multiple lightpaths from source to destination of the request. The set of all lightpaths in the network forms the virtual topology. The term virtual link is used to denote a lightpath.

1) RWA Algorithm for Hybrid Architecture: For our route computation, we use an integrated architectural model (in which


both virtual and physical link information are known in a unified control plane) [8]. In the existing traffic grooming literature, one approach that uses an integrated model is the auxiliary graph model [10], [12], [13], [25]. We also use a lightweight (and computationally efficient) auxiliary graph, suited for our DDR-oriented problem.

For each DDR $R$, we create an auxiliary graph $G_{aux}$ depending on the current network state. Thus, $G_{aux}$ contains information from both the current virtual topology $G_V$ and the current available physical topology graph $G_P$. $G_V$ consists of all existing lightpaths, while $G_P$ contains the status of free wavelengths in the physical network $G$. $G_{aux}$ has the same vertex set as $G$, but its edges have a different meaning. For any node pair $(i, j)$, auxiliary graph $G_{aux}$ has an edge if either: 1) there is an existing lightpath between $i$ and $j$ with enough free capacity for $R$ (i.e., at least $B_{\min}$); or 2) there is a direct physical link between nodes $i$ and $j$ that has at least one unused wavelength.

Algorithm 1: RWA for Hybrid Architecture
Input: DDR $R$ with minimum rate $B_{\min}$; current network state consisting of $G_V$ and $G_P$.
Output: Virtual path $p$ for $R$, consisting of a sequence of lightpaths; null, if no feasible path is found.
1) Construct auxiliary graph $G_{aux}$:
   a) If there exists a lightpath between a node pair with free capacity $\ge B_{\min}$, add an edge between that node pair to the auxiliary graph $G_{aux}$.
   b) Copy into $G_{aux}$ all physical links that have at least one unused wavelength, if no existing lightpath between the corresponding node pair was already added into $G_{aux}$ in Step 1a.
   c) The weights of the edges of $G_{aux}$ (virtual and physical edges) are assigned depending on the grooming policy.
2) Generate a set of K-Shortest Paths (KSP) between $s$ and $d$ on graph $G_{aux}$. Any path $p_k$ is a sequence of existing lightpaths and physical links.
3) For each path $p_k$ do
   a) Transform $p_k$ into a sequence of lightpaths by creating new lightpaths from physical links on $p_k$. To maintain wavelength continuity, transform sequences of physical links located between two consecutive existing lightpaths on path $p_k$ into new lightpaths, by segmenting the physical links into wavelength-continuous lightpaths (see Section III-A-2).
   b) If there are enough transceivers at the intermediary nodes on $p_k$ to set up the new lightpaths, form virtual path $p$ using the existing/new lightpaths and Return $p$.
   EndFor
4) If no route is found, Return null.

Our RWA algorithm for the Hybrid architecture is described in Algorithm 1. We can either use existing lightpaths or create new ones by using free physical resources. Depending on the grooming policy used, different weights are assigned to the edges in the auxiliary graph $G_{aux}$ (as detailed below). Minimum-weight path algorithms are then applied on $G_{aux}$.
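As a minimal sketch of Step 1 of Algorithm 1, the auxiliary-graph construction can be written as below. The dictionary-based data structures and helper names are ours (introduced for illustration), not the paper's.

```python
def build_auxiliary_graph(lightpaths, phys_links, b_min):
    """Construct the auxiliary graph for one DDR (Algorithm 1, Step 1).

    lightpaths: dict {(u, v): free_capacity_gbps} of existing lightpaths.
    phys_links: dict {(u, v): num_free_wavelengths} of physical links.
    b_min:      minimum rate required to meet the DDR's deadline.
    Returns a dict {(u, v): "lightpath" | "physical"} of auxiliary edges.
    """
    edges = {}
    # Step 1a: existing lightpaths with enough spare capacity become edges.
    for (u, v), free_cap in lightpaths.items():
        if free_cap >= b_min:
            edges[(u, v)] = "lightpath"
    # Step 1b: physical links with at least one unused wavelength, unless a
    # lightpath edge between the same node pair was already added in Step 1a.
    for (u, v), free_wl in phys_links.items():
        if free_wl >= 1 and (u, v) not in edges:
            edges[(u, v)] = "physical"
    # Step 1c (edge weights) depends on the grooming policy; a sketch of the
    # congestion-aware weights is given after the weight discussion below.
    return edges
```

A minimum-weight (e.g., K-shortest-path) search is then run on the returned edge set, as in Steps 2–3 of Algorithm 1.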

Fig. 2. Segmentation example: Path $p_k$ (formed from existing lightpaths and physical links) is transformed into virtual path $p$.

KSP in Algorithm 1 is the K-Shortest-Paths algorithm [26]. The paths obtained from applying KSP on $G_{aux}$ (in Step 2 of Algorithm 1) are a sequence of existing lightpaths and/or physical links. Our algorithm must create lightpaths from these physical links (i.e., perform wavelength assignment) while respecting the wavelength-continuity constraints. We name this part of our algorithm Segmentation. It is summarized in Step 3a of Algorithm 1 and detailed next.

In Step 1c of Algorithm 1, weights are assigned to the edges of auxiliary graph $G_{aux}$. For our experimental results, we use a congestion-aware policy which prefers paths that are less congested (we also experimented with other grooming policies, e.g., a policy that puts the emphasis on having short paths, by assigning uniform costs to physical links and to lightpath physical hops). A congested link is a link with many utilized wavelengths out of the $W$ available. Use of a congested link should be avoided, as the network connectivity will be degraded if, for example, all wavelengths of a link are fully utilized. Our congestion-aware policy increases the weights of congested links; however, these weights should not be too large compared to the weights of lowly utilized links (or to the weights of existing lightpaths), or else the congested links will never be used. Therefore, we set the weight of an existing lightpath to the number of its physical hops, and the weight of a physical link to $c/(W - w + \delta)$ (inspired by the formula for the average delay of M/M/1 queues, $1/(\mu - \lambda)$ [27]), where $W$ is the number of wavelengths per link; $w$ is the number of used wavelengths on the link; $\delta$ is a parameter chosen such that the weight of a highly congested link is not too large; and $c$ is a scaling constant used to make the weights of lower utilized physical links close to the weight of an existing lightpath's physical hop, so that excessively long paths are avoided (for our numerical results, we chose fixed values of $\delta$ and $c$).

2) Segmentation Algorithm: We use the Segmentation algorithm to maintain the wavelength-continuity constraint on path $p_k$ (computed in Step 2 of Algorithm 1). Fig. 2 shows such a path $p_k$, consisting of three physical links which are located between two existing lightpaths. We cannot simply create a lightpath between nodes 2 and 5 since there is no single common free wavelength on links 2–3, 3–4, and 4–5. Hence, we must segment the physical path 2–3–4–5 into two new lightpaths, which respect the wavelength-continuity constraint.

For all contiguous links on path $p_k$ (links 2–3, 3–4, and 4–5), the goal of our Segmentation approach is to create new lightpaths that span as many physical hops as possible. This will result in a small number of lightpaths between $s$ and $d$ (respectively, nodes 1 and 6).


The Segmentation algorithm works as follows: Maintain a set $\Lambda$ of free wavelengths for the "physical subpath" $q$ between two consecutive lightpaths (e.g., path 2–3–4–5 in Fig. 2). Initially, the set $\Lambda$ includes all possible wavelengths. For each physical link $l \in q$, intersect the wavelength set $\Lambda$ with the free wavelengths of link $l$, until $\Lambda$ becomes empty, or until we are finished with all the links in $q$. If no free wavelength remains in the wavelength set $\Lambda$, backtrack to the previous link, choose the first-fit wavelength [21], and segment the lightpath here (we also check if there are enough free transceivers to set up the lightpath). The Segmentation algorithm continues until we pass through all the links in $q$.

We illustrate the idea of the Segmentation algorithm by using Fig. 2. The "physical subpath" $q$ is 2–3–4–5, and the initial wavelength set $\Lambda$ contains all available wavelengths (the free wavelengths for each link are listed in Fig. 2). After we pass link 2–3, $\Lambda$ includes two wavelengths; after link 3–4, only one; and after 4–5, $\Lambda$ is empty. We backtrack to link 3–4, segment at node 4, and use the surviving wavelength to set up a lightpath between nodes 2 and 4. Similarly, we set up a lightpath between nodes 4 and 5.
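A sketch of the Segmentation step is given below, assuming each physical link exposes its set of free wavelengths. First-fit is approximated here by picking the lowest-indexed wavelength in the running set, transceiver checks are omitted, and the free-wavelength sets in the example are hypothetical (Fig. 2's actual wavelength labels are not reproduced here).

```python
def segment_subpath(links, free_wavelengths):
    """Split a wavelength-discontinuous physical subpath into lightpaths.

    links: ordered list of physical links, e.g. [(2, 3), (3, 4), (4, 5)].
    free_wavelengths: dict mapping each link to its set of free wavelengths.
    Returns a list of (start_node, end_node, wavelength) lightpaths.
    """
    lightpaths = []
    start = links[0][0]
    candidate = set(free_wavelengths[links[0]])
    for link in links:
        remaining = candidate & set(free_wavelengths[link])
        if remaining:
            candidate = remaining
            continue
        # No common wavelength survives: backtrack to the previous link,
        # segment here, and pick a wavelength first-fit from the running set.
        lightpaths.append((start, link[0], min(candidate)))
        start = link[0]
        candidate = set(free_wavelengths[link])
    lightpaths.append((start, links[-1][1], min(candidate)))
    return lightpaths


# Hypothetical free-wavelength sets for the three links of Fig. 2.
free = {(2, 3): {1, 2}, (3, 4): {2}, (4, 5): {1}}
print(segment_subpath([(2, 3), (3, 4), (4, 5)], free))
# [(2, 4, 2), (4, 5, 1)]: one lightpath 2-4 and one lightpath 4-5
```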

B. DDR RWA for Opaque Architecture

Our RWA algorithm for a network with opaque nodes is based on the same general principles as Algorithm 1, i.e., use of an auxiliary graph ($G_{aux}$) created for each DDR, and then selection of a path by using a minimum-weight algorithm on $G_{aux}$. However, the differences between Algorithm 1 and Opaque RWA come from the different physical properties of Opaque and Hybrid OXCs (see Section II-A), namely: 1) for the network with opaque switches, every utilized wavelength channel on each fiber link forms a lightpath between two adjacent OXCs, and therefore, the virtual topology is the same as the physical topology; and 2) Opaque OXCs provide wavelength conversion.

In order to compute the paths, our Opaque RWA algorithm uses the knowledge of what bandwidth-allocation algorithm is utilized by the service provider. An overview of the bandwidth-allocation algorithms was given in Section II, and they are detailed in Section III-C. The Opaque RWA algorithm works as follows.

First, graph $G_{aux}$ (formed of all links that have wavelength(s) with remaining capacity $\ge B_{\min}$) is constructed. On $G_{aux}$, we generate a set of K-Shortest Paths between $s$ and $d$. Since our numerical results show that having short paths is important for the Opaque algorithm's performance, we only consider the paths with the same number of hops as the minimum-cost path obtained. Next, for each path $p_k$, we do the following. If the bandwidth-allocation policy performs grooming, then, in order to find a wavelength assignment (WA) for each link $l$, we first try to find already-utilized wavelengths on link $l$ with free capacity $\ge B_{\min}$ (among all these wavelengths, we choose the one with smallest remaining capacity). If no such wavelength is found on link $l$, we choose a free wavelength on $l$ by using the first-fit policy [21]. For the policy which does not perform grooming, a free wavelength is always selected by using first-fit. If more than one feasible path is found, we choose the path with the smallest allocated capacity end-to-end.
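The per-link wavelength choice for the grooming-capable policies can be sketched as follows; the data structures and helper name are hypothetical, and first-fit is modeled as taking the lowest-indexed free wavelength.

```python
def choose_wavelength(used_wavelengths, free_wavelengths, b_min):
    """Pick a wavelength on one fiber link of an opaque node (grooming case).

    used_wavelengths: dict {wavelength_id: remaining_capacity_gbps} for
                      wavelengths already carrying traffic on this link.
    free_wavelengths: sorted list of completely unused wavelength ids.
    b_min:            minimum rate needed by the incoming DDR.
    Returns a wavelength id, or None if the link cannot host the request.
    """
    # Prefer an already-utilized wavelength with enough spare capacity,
    # choosing the one with the smallest remaining capacity (tightest fit).
    candidates = [(cap, w) for w, cap in used_wavelengths.items() if cap >= b_min]
    if candidates:
        return min(candidates)[1]
    # Otherwise fall back to a completely free wavelength, first-fit.
    return free_wavelengths[0] if free_wavelengths else None
```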

Fig. 3. Maximum available bandwidth on all the lightpaths of path $p$ is $M$.

C. Bandwidth-Allocation Algorithms

The customer's main requirement is to meet the DDR's deadline. However, the service provider's objective is to design bandwidth-allocation policies that maximize network utilization and consequently minimize the resources used. In this section, we design algorithms for deciding what bandwidth should be allocated to the request. Note that these allocation policies can be used with either the Hybrid or the Opaque architecture.

1) Fixed Bandwidth-Allocation Policies: Consider path $p$ for request $R$ (computed as in Section III-A or Section III-B), formed of one or multiple lightpaths. Let $M$ be the maximum available bandwidth over all of $p$'s lightpaths (see Fig. 3). The maximum bandwidth that can be offered to $R$ is $M$, while the minimum is $B_{\min}$.

Our first bandwidth-allocation policy always allocates, after computing path $p$, the maximum available bandwidth ($M$) to the DDR. Intuitively, its advantage is that the whole bandwidth is used efficiently, and no bandwidth remains idle, but the downside is that it can create resource contention and congestion at high network loads.

Our second policy offers bandwidth $B_{\min}$ to the request. Its advantage is that it tends to "pack" (groom) the requests very densely on the lightpaths, leaving room for future requests.

A third policy, which is a combination of the first two, can be devised with a mixing parameter $\alpha$, where $0 \le \alpha \le 1$; the bandwidth offered to $R$ then lies between $B_{\min}$ and $M$. Intuitively, for larger values of $\alpha$, this combined policy performs close to the maximum-bandwidth policy, while for smaller values of $\alpha$, it performs closer to the minimum-bandwidth policy.
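A compact sketch of these fixed policies is given below. The exact form of the combined policy is not spelled out above, so the convex combination used here is an assumption, and the policy names in the code are placeholders rather than the paper's own labels.

```python
def fixed_allocation(policy, b_min, max_available, alpha=0.5):
    """Fixed bandwidth-allocation policies for a DDR on its chosen path.

    b_min:         minimum rate that still meets the deadline.
    max_available: maximum bandwidth available end-to-end on the path (M).
    alpha:         mixing parameter for the combined policy, 0 <= alpha <= 1.
    """
    if policy == "max":   # allocate all of the available bandwidth
        return max_available
    if policy == "min":   # allocate just enough to meet the deadline
        return b_min
    if policy == "mix":   # assumed form: alpha * M + (1 - alpha) * b_min
        return alpha * max_available + (1 - alpha) * b_min
    raise ValueError("unknown policy")
```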

2) Adaptive Bandwidth-Allocation Policies: The policies described so far assign fixed bandwidth, so they do not consider the current network state. In this section, we devise adaptive approaches, which make use of current network state information.

The information that can be employed by the adaptive approaches is (in addition to $B_{\min}$ and $M$): 1) the congestion state along the chosen virtual path $p$; and 2) the $B_{\min}$ of the incoming request compared with the $B_{\min}$ of other requests, together with information about the $B_{\min}$ distributions.

Adaptive Policy: This approach considers the congestion (number of utilized wavelengths) of all the physical links of the lightpaths that form path $p$ when determining what fixed policy to use.

The main idea here is that path $p$ may pass through links that are already congested. In this case, allocating the maximum available bandwidth to request $R$ is not a good policy, as congested links will become even more congested for future requests. Hence, for congested links, it is better to allocate bandwidth $B_{\min}$ to $R$. On the other hand, if none of the links is highly congested, it may be better to use


the maximum-bandwidth allocation, as there is enough free capacity to service future requests that may need one of the links on $R$'s path.

We show the idea of the "Adaptive" algorithm on the network used in our simulations (Fig. 5). Considering uniform traffic requests between any source–destination node pair, the links at the network center (e.g., 9–12, 12–13, 10–13), which connect nodes of high degree, are usually more congested than links at the periphery of the network (e.g., 1–2, 23–24). If we compare a request between nodes 1 and 24, which finds path 1–6–9–12–13–17–23–24, to a request between nodes 1 and 5, which finds path 1–2–3–5, it is better to assign bandwidth $B_{\min}$ to the first DDR, since it traverses links which are (or may in the future be) highly utilized, and the maximum available bandwidth to the second DDR, as we do not need to leave so much room for future requests on less-congested links.

The Adaptive algorithm allocates $B_{\min}$ if there are congested links along path $p$; otherwise, it allocates the maximum available bandwidth $M$. To define congestion, we choose a threshold $\theta$. If there is at least one link on path $p$ with more than $\theta$ used wavelengths, path $p$ is considered congested.
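The threshold test can be expressed as the small sketch below, assuming per-link wavelength usage is available; the threshold value and data structures are placeholders.

```python
def adaptive_allocation(path_links, threshold, b_min, max_available):
    """Adaptive policy: fall back to the minimum-rate allocation when the
    chosen path crosses a congested link, otherwise allocate everything.

    path_links: list of (link_id, used_wavelength_count) along the path.
    threshold:  congestion threshold on the number of used wavelengths.
    """
    congested = any(used > threshold for _, used in path_links)
    return b_min if congested else max_available
```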

Proportional Policy: This policy is based on the idea that DDRs with different $B_{\min}$ should not be given the same amount of bandwidth. A transfer with a 500-Mbps minimum rate need not be allocated the same bandwidth as one with a 5-Gbps minimum rate. If it is allocated a lot of bandwidth, the first transfer will finish quickly, but it will make bandwidth unavailable for future requests. Allocating bandwidth proportional to the $B_{\min}$ of the request seems like an attractive policy.

We can use the $B_{\min}$ and $M$ values, together with information about the $B_{\min}$ distributions, to decide what bandwidth to allocate to the DDR. To understand our assumptions about how DDR traffic is generated, and why we consider the service provider to have classes of $B_{\min}$, please see the DDR traffic demand model and the bandwidth distributions used, which are detailed in Section IV.

The Proportional policy also considers that the service provider has previous statistics about the user request pattern, so it can compute the expected $B_{\min}$ of all requests. The Proportional policy is described in Algorithm 2.

Algorithm 2: Allocate Proportional
Input: DDR $R$ with minimum rate $B_{\min}$; maximum bandwidth $M$ we can allocate to $R$ (see Fig. 3); expected $B_{\min}$ of all requests.
Output: Bandwidth allocated to $R$.
1) Objective: Allocate proportional bandwidth depending on $R$'s $B_{\min}$ and the $B_{\min}$ distribution.
2) Compute the proportional share $B_{prop}$ of $R$.
3) Attempt to allocate bandwidth $B_{prop}$ to $R$, which would be fair to all requests of different $B_{\min}$.
4) If $B_{prop} > M$, allocate $M$. Else if $B_{prop} < B_{\min}$, allocate $B_{\min}$. Else allocate $B_{prop}$.
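Algorithm 2 can be summarized as the sketch below. The specific fair-share rule is not given explicitly above, so the form used here (scaling the path's maximum available bandwidth by the request's minimum rate relative to the expected minimum rate) is an assumption; variable names are ours.

```python
def proportional_allocation(b_min, max_available, expected_b_min):
    """Proportional policy (sketch of Algorithm 2, assumed fair-share form).

    The request's share grows with its own minimum rate relative to the
    expected minimum rate over all requests, and is then clamped to the
    feasible range [b_min, max_available], as in Step 4 of Algorithm 2.
    """
    share = max_available * b_min / expected_b_min   # assumed fair-share rule
    return max(b_min, min(share, max_available))
```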

D. Mathematical Model

So far, we have examined RWA and bandwidth-allocation algorithms for DDRs. In order to better understand our problem,

we state it as a MILP, which can solve the RWA and bandwidth-allocation subproblems together. There are three variations of our MILP. The first allocates flexible bandwidth to the requests; hence, it is named by referring to the policies in Section III-C2. The other two allocate fixed bandwidth to the requests and are named after the two fixed bandwidth-allocation policies they use. These MILP formulations can be used as benchmarks for our heuristic provisioning approaches. Our MILP models assume that all DDR arrivals and deadlines are known, hence they are based on static traffic. However, the solution of the MILP constitutes a valid lower bound on the performance of our provisioning approaches (which consider a dynamic traffic environment).

Our MILPs can provision DDRs in a network equipped with opaque OXCs. The three MILP formulations are computationally complex, especially the flexible-bandwidth one, as it includes: 1) selection of the appropriate bandwidth for a DDR $R_i$, which can be translated into a flexible finish time for the transmission of $R_i$'s data; 2) RWA and grooming; and 3) constraints for time-disjointedness of requests that share common resources. That is why we simplify the routing, by considering only $K$ alternate routes for each DDR, an approach utilized in other works that consider time-domain scheduling [15], [16], [18].

The formulation for the flexible-bandwidth MILP is given here.
Given:

— $G = (V, E, W)$: Graph representing the physical topology of the network, as defined in Section II.
— Set of DDRs; each DDR $R_i$ has the notation in (1). Note that, for each request $R_i$, the arrival time $t_{A,i}$, deadline $D_i$, and file size $F_i$ are known.
— $g$: In this MILP, the wavelength capacity is divided into $g$ sublambdas for which we will maintain the resource utilizations, each of capacity $C/g$, where $C$ is the line rate. For example, in a network of line rate 10 Gbps, if $g = 4$, the smallest sublambda is 2.5 Gbps.
— Binary inputs representing predefined paths: for each request $R_i$, we precompute the K-Shortest Paths [26]; the input is 1 if $R_i$'s $k$-th path uses fiber link $l$, and 0 otherwise.
— $m_i$: Number of different transmission rates that could be allocated to request $R_i$.
— For each $R_i$, we construct an ordered set $S_i$ of (bandwidth, holding-time) pairs $(b_{i,j}, h_{i,j})$, with $j = 1, \ldots, m_i$. Set $S_i$ only maintains the pairs for which $R_i$'s deadline is met, i.e., pairs which satisfy $h_{i,j} \le D_i$. Hence, any of the pairs in $S_i$ can be allocated to $R_i$.

Variables:

— $\delta_{i,j}$: 1 if request $R_i$ is allocated the $j$-th (bandwidth, holding-time) combination $(b_{i,j}, h_{i,j})$; 0 otherwise.
— $B_i$: Bandwidth assigned to request $R_i$, expressed as an integer multiple of the number of sublambdas. Both $B_i$ and $H_i$ (the variable presented next) are auxiliary variables (computed from $\delta_{i,j}$ and the pairs in set $S_i$), used to simplify the description of the equations.


— $H_i$: Holding time to be assigned to request $R_i$.
— Virtual connectivity variables: 1 if $R_i$ is routed through link $l$, wavelength $w$, and uses sublambda $t$ on this wavelength-link; 0 otherwise.
— $p_{i,k}$: Route chosen for request $R_i$ (from its path set): 1 if the request is routed on its $k$-th path; 0 otherwise.
— $u_{i,l}$: 1 if request $R_i$ is routed through fiber link $l$; 0 otherwise.

Constraints: To keep our mathematical formulation

simple, we use logical constraints (e.g., implications, disjunctions) in our model. These constraints can be easily linearized by adding auxiliary integer variables. In addition, commercial MILP solvers (e.g., CPLEX [28]) allow the specification of logical constraints in their optimization models [28].

1) Flexible transmission rate constraints

(3)

(4)

(5)

Equation (3) states that exactly one (bandwidth, holding-time) pair is chosen for request $R_i$. Equation (4) fixes $R_i$'s bandwidth $B_i$, by selecting the appropriate precomputed bandwidth $b_{i,j}$ (depending on the value of $\delta_{i,j}$). Similarly, (5) fixes $R_i$'s holding time $H_i$.

2) Routing, wavelength, and sublambda assignment

(6)

(7)

(8)

(9)

Equation (6) ensures that at most a single path can be chosen to route request $R_i$. If no path is chosen to route $R_i$, this request cannot be provisioned. Equation (7) constrains that $R_i$ is either routed on wavelength $w$ of fiber link $l$ with all its allocated bandwidth ($B_i$), or it is not routed on this wavelength-link. Equation (8) establishes whether request $R_i$ uses link $l$, by considering the path variables. Equation (9) connects the path ($p_{i,k}$) and virtual connectivity variables, by using the $u_{i,l}$ variables.

3) Time-domain constraints:

(10)

Equation (10) states that any two distinct requests $R_i$ and $R_{i'}$ either do not overlap in time [first line of (10)], or they must not share the same physical resources [line two of (10)].

We consider two alternate optimization goals.

Objective A: Maximize the number of accepted requests

(11)

Equation (11) counts the number of accepted requests by considering which requests found paths for their file transfers.

Objective B: Maximize total network throughput

(12)

Equation (12) considers the total data transferred for each DDR and provisions the requests that provide maximum throughput.

Note that $b_{i,j}$, $h_{i,j}$, and the precomputed path indicators are given constants, so (4), (5), and (8) are linear.
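To make the verbal descriptions above concrete, the following fragment sketches one way constraints (3)–(6) and Objective A could be written. The symbols $\delta_{i,j}$, $b_{i,j}$, $h_{i,j}$, $B_i$, $H_i$, and $p_{i,k}$ are assumed notation introduced only for this sketch, not the paper's own formulation.

```latex
% Sketch only; all symbols are assumed notation, not the paper's.
% \delta_{i,j} = 1 if request i is assigned its j-th (bandwidth, holding-time)
%                pair (b_{i,j}, h_{i,j}); p_{i,k} = 1 if i uses its k-th path.
\begin{align}
  \sum_{j} \delta_{i,j} &= 1, \quad \forall i                    \tag{cf. 3}\\
  B_i &= \sum_{j} \delta_{i,j}\, b_{i,j}, \quad \forall i        \tag{cf. 4}\\
  H_i &= \sum_{j} \delta_{i,j}\, h_{i,j}, \quad \forall i        \tag{cf. 5}\\
  \sum_{k} p_{i,k} &\le 1, \quad \forall i                       \tag{cf. 6}\\
  \text{Objective A:}\quad \max &\sum_{i} \sum_{k} p_{i,k}       \tag{cf. 11}
\end{align}
```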

The two MILP formulations that allocate fixed bandwidth are obtained by removing (3)–(5) from the flexible model, and fixing $B_i$ and $H_i$ depending on the policy. For these MILPs, the finish time of the DDR transfers is known (as the rate is fixed); thus, the requests become close to scheduled lightpath demands (SLDs), as defined in [15] and [16]. However, as a difference, our fixed-bandwidth MILPs accommodate sublambda-granularity requests, while the works in [15] and [16] do not consider grooming.

E. Changing-Rates Algorithm

The bandwidth-allocation algorithms in Section III-C and the mathematical models in Section III-D focus on directly deciding what bandwidth to allocate for a request $R$. The allocated bandwidth was fixed for the duration of $R$'s transfer. If no path with at least $B_{\min}$ bandwidth is found for $R$, the request cannot be provisioned (see Algorithm 1). In this section, we relax the fixed-bandwidth constraint and allow changing (i.e., reconfiguration of) the transmission rates dynamically, which can improve the network's utilization and DDR acceptance rate. A practical example of a protocol that allows hitless dynamic bandwidth adjustment is SONET's link-capacity adjustment scheme (LCAS) [29], [30], which allows dynamic increases or decreases of the bandwidth of a virtual concatenated group (VCG).


Fig. 4. Simplified example for Changing Rates (on a lightpath from path $p$ of request $R$).

The Changing-Rates technique tries to accommodate a new incoming request $R$ which otherwise cannot be provisioned, by changing the transmission rates of currently ongoing transfers. Lightpaths between $R$'s source and destination may be servicing previous requests which do not have a stringent deadline, but which were allocated extra bandwidth (as in the maximum-bandwidth approach). In this case, even if there is no path with enough capacity for $R$, it may still be possible for $R$ to be provisioned, if:

1) we free bandwidth for $R$, by decreasing the rates of the requests which conflict with $R$ on a path $p$;

2) these already-scheduled requests conflicting with $R$ all still meet their deadlines;

3) we can free enough bandwidth on path $p$, so that $R$ now meets its deadline.

Fig. 4 illustrates a simplified example of a lightpath on path $p$ of an incoming request $R$. The lightpath is represented (on the left side) before the arrival of $R$, and (on the right side) after $R$ was scheduled. For simplicity, before $R$'s arrival, we have only one request $R'$ scheduled on the lightpath (in practice, there can be multiple existing requests groomed on the lightpath). The arrival times and deadlines of $R$ and $R'$ are as shown in the figure. The right-hand side (RHS) of Fig. 4 shows that we can decrease the rate of $R'$ at $R$'s arrival time (while still meeting its deadline) and also accommodate $R$. The new rates allocated to $R$ and $R'$ are computed using an allocation which is proportional to the minimum rates needed to catch their deadlines (also, the $B_{\min}$ of $R'$ is updated at that time, considering its data which remains to be transferred).

Algorithm 3: Changing Rates

Input: DDR $R$, minimum bandwidth $B_{\min}$; current network state.
Output: Path $p$ for $R$, bandwidth allocated to $R$, and new rates for the affected connection requests; otherwise, null, if $R$ cannot be provisioned.
PHASE 1
1) Apply Alg. 1 (RWA for Hybrid) to obtain virtual path $p$.
2) If $p$ exists: (i) allocate bandwidth to $R$ and (ii) Return $p$ and the allocated bandwidth.
   Else (we do not find any path $p$): go to PHASE 2.
PHASE 2
1) Apply Modified Alg. 1 (Modified RWA) and obtain sets of KSPs, each set being formed of alternate virtual paths.
2) For each alternate virtual path $p_k$,
   a) On virtual path $p_k$, find all connections (requests) affected by $R$. Let this connection list be $L$.
   b) For each connection in $L$, test if we can decrease its rate, on its own path, so that it still meets its deadline.
   c) For each lightpath on $p_k$, compute (by using proportional allocation) separate values for the rate of $R$ and the new rates of the connections in $L$.
   d) Over all the lightpaths on $p_k$, compute the rate of $R$ as the minimum of the rates computed on all the lightpaths. Similarly, the new rates of all connections in $L$ are computed as the minimum value (over all lightpaths) of the new rates computed in Step 2c.
   e) If all the deadlines for the existing affected connections are still met: set $p = p_k$ and Return {$p$, the bandwidth allocated to $R$, and the new rates of the affected connections}.
3) Otherwise, Return null.

Algorithm 3 illustrates the Changing-Rates algorithm (for

networks using the hybrid architecture), which attempts to change the rates of some already-scheduled DDRs with the goal of accommodating $R$. The algorithm consists of two main phases. In Phase 1, we try to provision DDR $R$ without any rate change (i.e., perform the RWA as in Section III-A and bandwidth allocation as in Section III-C). If a path $p$ (with at least $B_{\min}$ free capacity) is found for $R$, we allocate bandwidth to $R$. Subsequently, we can take away bandwidth from $R$ whenever bandwidth may be needed by a future request. If no path is found for $R$, we will try to reconfigure existing (already-scheduled) connection requests in Phase 2. According to Phase 2, Step 1, we first obtain several sets of paths on which we attempt changing the rates (this parameter is 3 in our numerical results). In order to obtain a set of alternate virtual paths on which to try making room for $R$ (from these paths, in Step 2e, we will choose one as $R$'s scheduling path), we apply a modified version of Algorithm 1. The main modifications are:

1) When constructing $G_{aux}$, in Step 1a of Algorithm 1, we randomly choose which existing lightpaths between two nodes to put into $G_{aux}$. Moreover, we no longer constrain lightpaths to have free capacity $\ge B_{\min}$, because now we change the rates of existing connections anyway.

2) In the modified version, we do not return one virtual path (Algorithm 1, Step 3b), but several alternate virtual paths, on which we attempt to change the rates.

For each of the alternate virtual paths, we try changing the rates of existing connection requests (which keep their paths, without any rerouting), compute the new rates by using proportional allocation, and see if all the requests with changed rates still meet their deadlines. If yes, we can provision $R$ and finish. We try all the alternate virtual paths from all path sets, and if changing the rates fails on all the paths, $R$'s provisioning fails.
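The core feasibility test of Phase 2, namely whether the ongoing transfers on a candidate lightpath can be slowed down enough to admit the new request while every affected request still meets its deadline, can be sketched as follows. The data structures are hypothetical, and the rates are re-allocated proportionally to the minimum rates, as in the Fig. 4 example.

```python
def try_change_rates(capacity, now, new_request, ongoing):
    """Check whether slowing down ongoing transfers admits a new DDR.

    capacity:    total capacity of the lightpath (Gbps).
    now:         current time (s).
    new_request: dict with keys 'size_gb' and 'deadline' (absolute time, s).
    ongoing:     list of dicts with keys 'remaining_gb' and 'deadline'.
    Returns the new rates (new request first) if feasible, else None.
    """
    requests = [{**new_request, "remaining_gb": new_request["size_gb"]}] + ongoing
    # Minimum rate each request needs from now on to still meet its deadline.
    min_rates = [8.0 * r["remaining_gb"] / (r["deadline"] - now) for r in requests]
    total_min = sum(min_rates)
    if total_min > capacity:
        return None  # even the minimum rates do not fit: R cannot be admitted
    # Share the lightpath proportionally to the minimum rates (Fig. 4 style).
    return [capacity * m / total_min for m in min_rates]
```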


Fig. 5. A representative US nationwide network.

1) Changing Rates for Long Paths: In Algorithm 3, we only attempt changing the rates when Phase 1 fails to find a path with enough bandwidth for request $R$. However, because the policy applied in Phase 1 utilizes lots of bandwidth in bursts, the paths found on the auxiliary graph $G_{aux}$ can become rather long. Long paths can degrade network performance (as they use many wavelength-links). Therefore, in this scheme, we attempt changing the rates (Phase 2), even if a path $p$ is found in Phase 1, whenever $p$ has at least $h$ more physical hops than the shortest path between $R$'s source and destination (with $h$ a parameter). In Section IV, this algorithm will be denoted as the long-path variant of Changing Rates.

2) Changing Rates With Time Limitations: The Changing-Rates approaches presented so far assume an unlimited number of rate changes for a scheduled DDR during its holding time. In practice, we may wish to limit the number of rate changes for the lifetime of each DDR (as frequent rate changes may create signaling overhead). In this variation, we allow changing the rate of an ongoing file transfer only if at least $T$ s have passed since the last rate change. As detailed in Section IV-B, changing the rates helps even when $T$ is large. We denote this algorithm as ChangeRates(T sec).

IV. ILLUSTRATIVE NUMERICAL RESULTS

We simulate a dynamic network environment to evaluate the performance of our DDR provisioning algorithms. Fig. 5 shows the topology used in our study. Each network edge has two unidirectional fiber links, one in each direction. Each link has 16 wavelengths, each of capacity 10 Gbps (OC-192). All network nodes are equipped with either Hybrid or Opaque switches. For lower-degree nodes, each switch in the network with Hybrid OXCs has 32 bidirectional transceivers (i.e., 32 transmitters and 32 receivers), while for higher-degree nodes, each switch is equipped with 64 transceivers. DDR arrivals are independent and uniformly distributed among all source-destination pairs. The number of considered alternate paths is $K$. Our results are averaged over 20 simulation runs, each of 100 000 DDRs.

We consider that the customer's choice of file deadline can be affected by the price offered by the service provider for the minimum rate $B_{\min}$ required to transfer the file. In this case, it is possible that the customer may relax its required deadline, so that it chooses its preferred $B_{\min}$ from the set of transfer rates offered by the service provider. A larger $B_{\min}$ will lead to a more expensive service. Choosing the $B_{\min}$ means fixing the deadline as $D = F/B_{\min}$, where $F$ is the file size.

We investigate the performance of our DDR provisioning algorithms on three distributions of the requested $B_{\min}$. The first bandwidth distribution of $B_{\min}$ (denoted as BD1) is 100 Mbps : 500 Mbps : 1 Gbps : 2.5 Gbps : 5 Gbps : 10 Gbps = 50 : 25 : 15 : 7 : 2 : 1. File sizes are uniformly distributed within a fixed range for each $B_{\min}$ class; for the 100-Mbps class, the range corresponds to deadlines between 8 and 200 s. Our second bandwidth distribution of $B_{\min}$ (BD2) is 100 Mbps : 500 Mbps : 1 Gbps = 50 : 35 : 15. This traffic set contains small rates compared with the 10-Gbps line rate. The third bandwidth distribution (BD3) is 500 Mbps : 1 Gbps : 2.5 Gbps = 50 : 30 : 20, with large rates compared with the line rate. File sizes for BD2 and BD3 are generated similarly as in BD1. Please note that all three BDs chosen are skewed distributions, with larger frequency for smaller bandwidth requests, similar to the distributions of practical traffic requests [13], [31].

To compare the performance of our approaches, we consider two metrics. The first one is the fraction of requests which are not provisioned out of the total number of file-transfer requests. The second metric corresponds to the sum of file sizes which cannot be provisioned out of the total sum of file-transfer requests (we will refer to it as "Fraction of Unprovisioned Bytes"). This metric shows the total bandwidth that cannot be provisioned out of the total requested bandwidth. It is important to note that, even if two provisioning algorithms $A_1$ and $A_2$ have close performance relative to the second metric (same throughput), but $A_1$ provisions more requests than $A_2$ (i.e., performs better considering the first metric), the service provider would probably favor $A_1$. This is because current networks consider volume discounts for bandwidth pricing: from a service provider perspective, it is usually preferable to service more customers and gain a larger revenue.

The two metrics above [corresponding to the MILP objectives in (11) and (12)] are considered as interrelated in the objectives of our heuristics: Our algorithms focus both on achieving an effective bandwidth utilization and on rejecting few DDRs. Some of the solutions pursued by our heuristics to achieve these objectives are: use of short paths (in both hybrid and opaque RWAs), minimization of used resources (e.g., "Segmentation" avoids setting up many lightpaths, thus saving transceivers), efficient bandwidth allocation (e.g., using the entire available bandwidth efficiently), and reallocation of bandwidth to prevent suboptimal bandwidth allocation (by the Changing-Rates approaches).

A. Performance of Fixed Bandwidth-Allocation Algorithms in Hybrid OXCs, for Varying Number of Transceivers

First, we study the performance of our DDR provisioning algorithms (employing the two fixed bandwidth-allocation policies), considering that the network in Fig. 5 is equipped with Hybrid OXCs with a varying number of deployed


Fig. 6. Performance of provisioning DDRs in Hybrid networks equipped with a varying number of transceivers. (a), (b) Fraction of unprovisioned bytes for two of the bandwidth distributions.

transceivers (we subscript the policy name to indicate that it is for the Hybrid architecture).

For the purpose of this section, we vary our base transceiver configuration (shown at the beginning of Section IV), by using a factor $f$ (i.e., we multiply the number of transceivers in the base case by $f$, for any node degree). A factor $f < 1$ signifies fewer deployed transceivers than the basic configuration (e.g., a configuration with 38 transceivers for one of the node-degree classes), $f > 1$ deploys more transceivers (e.g., a configuration corresponding to 76 transceivers), while $f = 1$ is our base configuration [denoted without any factor in Fig. 6(a) and (b)], which we further study in the later subsections. We investigated several values of $f$.

Fig. 6(a) and (b) show the fraction of unprovisioned bytes, for two of the bandwidth distributions, for a varying number of deployed transceivers (please note that the remaining distribution achieves similar results). A first conclusion from Fig. 6(a) and (b) is that, in general, for both distributions, by deploying more transceivers we are able to provision more bytes. Between the two fixed allocation approaches, the one that utilizes more transceivers (as we will show in Section IV-D), especially in the Hybrid case, is also more sensitive to transceiver variation. The performance of the other fixed policy does not change much with varying $f$ (for neighboring values of $f$, it performs almost the same, for both distributions). By increasing $f$ to more than 1.2, very little performance improvement can be obtained because, for such values, the number of transceivers is close to the physical upper bound of one transceiver per wavelength.

Depending on the number of transceivers deployed, we distinguish the following cases: 1) the case where transceivers are the scarce resource in the network (and not the wavelengths), hence they lead to significant blocking (e.g., small $f$); 2) the case where blocking is mainly due to capacity (e.g., large $f$); and 3) intermediate cases (such as our base case), where (depending also on the bandwidth distribution) blocking is due to both transceivers and capacity.

An interesting result obtained for all three BDs [see Fig. 6(a) and (b)] is that, for the cases where transceivers are not the scarce resource (e.g., larger $f$), one fixed policy provisions more bytes than the other, whereas for the cases where transceivers are the scarce resource (e.g., smaller $f$), the policy that requires fewer transceivers performs better. For intermediate cases (e.g., our base case), the relative performance of the two fixed policies depends on the impact of the transceivers for the given bandwidth distribution (i.e., whether blocking is mostly due to transceivers or to capacity).

B. Performance of DDR Provisioning Approaches in Hybrid Architecture for the Base Transceiver Configuration

In this section, we investigate the performance of our DDR provisioning algorithms (for the three bandwidth distributions) for our base transceiver configuration deployed on Hybrid OXCs.

Fig. 7(a) shows the fraction of unprovisioned bytes for one of the bandwidth distributions. We observe that, among the fixed allocation policies, one slightly outperforms the other for low and intermediate loads. The Adaptive method obtains only a slight improvement over the fixed policies (more visible at lower network loads), while the changing-rates approach improves further. The Changing Rates for Long Paths policy provisions slightly more bandwidth than the standard changing-rates policy.

Fig. 7. Performance of DDR provisioning. (a) Fraction of unprovisioned bytes. (b) Fraction of unprovisioned requests of different bandwidth granularities.

Fig. 8. Performance of the bandwidth-allocation policies. (a) Fraction of unprovisioned bytes. (b) Fraction of unprovisioned requests and effect of allowing a limited number of rate changes.

Fig. 7(b) shows the fraction of unprovisioned requests for this distribution, comparing the two fixed allocation policies. Note that this distribution contains requests with minimum rates ranging from 100 Mbps (50% of the requests) to 10 Gbps (1% of the requests). In Fig. 7(b), two of the curves indicate the overall fraction of unprovisioned requests, without considering the granularities of the blocked requests, while the rest of the plotted values detail the fraction of unprovisioned requests for three granularities: 1, 2.5, and 5 Gbps. Note that the full-wavelength policy has a similar fraction of unprovisioned requests irrespective of the granularity; therefore, its plots are clustered together. We observe that, by choosing the grooming policy over the full-wavelength one, the number of unprovisioned connections is approximately five times smaller. This large difference in performance is explained as follows. The grooming policy rejects almost no requests of 1 Gbps or lower, which together sum up to 90% of the offered requests; it cannot provision a relatively small number of 2.5-Gbps requests and a large number of DDRs with minimum rates of 5 and 10 Gbps. This is explained by its ability to properly groom small-granularity requests, in contrast to large-rate requests: the large requests contend for bandwidth with the smaller-granularity requests, which are easier to provision. In contrast, the full-wavelength policy rejects the same fraction of requests at all granularities, as it uses the whole wavelength capacity for all request types.
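To make the grooming argument above concrete, the sketch below contrasts two stylized allocation rules on a single link: first-fit grooming of each DDR at its requested rate onto shared 10-Gbps wavelengths, versus dedicating a full wavelength to every DDR. The request mix and the first-fit packing are illustrative simplifications, not the paper's RWA algorithms.

# Toy single-link comparison: grooming sub-rate requests versus
# giving every request an entire 10-Gbps wavelength.
# The rate mix below loosely echoes the distribution described for
# Fig. 7(b); the exact counts are illustrative assumptions.

WAVELENGTH_GBPS = 10.0

def blocked_with_grooming(rates, num_wavelengths):
    """First-fit packing of sub-rate requests onto shared wavelengths."""
    residual = [WAVELENGTH_GBPS] * num_wavelengths
    blocked = 0
    for r in rates:
        for i, free in enumerate(residual):
            if free >= r:
                residual[i] -= r
                break
        else:
            blocked += 1
    return blocked

def blocked_full_wavelength(rates, num_wavelengths):
    """Each request consumes an entire wavelength, whatever its rate."""
    return max(0, len(rates) - num_wavelengths)

rates = [0.1] * 8 + [1.0] * 4 + [2.5] * 2 + [5.0, 10.0]   # 16 requests
print(blocked_with_grooming(rates, 4))     # -> 0: every request fits when sub-rate demands share wavelengths
print(blocked_full_wavelength(rates, 4))   # -> 12: only 4 of the 16 requests obtain a wavelength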

Fig. 8(a) shows the fraction of unprovisioned bytes for a second bandwidth distribution. Among the fixed allocation policies, one rejects significantly more bandwidth than the other; as expected, another policy has intermediate performance between these two. Considering the adaptive bandwidth-allocation policies, both adaptive variants perform slightly better than the fixed policies, which is expected because they utilize more information (i.e., the network state). Both flavors of the changing-rates approach improve performance over the other bandwidth-allocation approaches (as before, the long-paths variant provisions slightly more bandwidth than the standard one). Overall, for this distribution, the performance can be improved by using the adaptive policies instead of the fixed ones; further improvement is possible if changing-rates approaches are used.

Fig. 8(b) shows the performance of the allocation policies for the third bandwidth distribution and performs a sensitivity analysis on the Changing Rates with Time Limitations policy, which may be preferred in practice as it involves less signaling overhead than the standard changing-rates policy. The time shown in brackets is the minimum time between two possible consecutive rate changes in the lifetime of a DDR. Fig. 8(b) shows that, for time periods of 10, 20, and 30 s between rate changes, the time-limited policy still yields an improvement. For 40 s, however, rate changes are no longer applied because the period between allowed changes is too long compared with the holding time, and the performance becomes closer to that of the allocation used in Phase 1 of Algorithm 3 (which the changing-rates policy starts from).
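A minimal sketch of the time-limitation rule evaluated in Fig. 8(b) is given below: a rate change for an in-progress DDR is attempted only if at least a minimum interval has elapsed since that DDR's previous change. The class and field names are our own illustrative choices, not those of Algorithm 3.

# Sketch of the Changing Rates with Time Limitations gate: a DDR's rate
# may only be modified if at least `min_interval_s` seconds have passed
# since its last rate change. Names are illustrative assumptions.

class DdrState:
    def __init__(self, rate_gbps, last_change_s):
        self.rate_gbps = rate_gbps
        self.last_change_s = last_change_s  # time of the last rate change

def try_rate_change(ddr, new_rate_gbps, now_s, min_interval_s):
    """Apply the new rate only if the minimum spacing is respected."""
    if now_s - ddr.last_change_s < min_interval_s:
        return False                       # too soon: keep the current rate
    ddr.rate_gbps = new_rate_gbps
    ddr.last_change_s = now_s
    return True

ddr = DdrState(rate_gbps=2.5, last_change_s=100.0)
print(try_rate_change(ddr, 5.0, now_s=115.0, min_interval_s=20.0))  # False: only 15 s elapsed
print(try_rate_change(ddr, 5.0, now_s=125.0, min_interval_s=20.0))  # True: 25 s elapsed, rate updated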

C. Provisioning DDRs in Opaque Versus Hybrid Networks

We compare the performance of provisioning DDRs in Opaque and Hybrid networks (Opaque results are subscripted accordingly). The fraction of unprovisioned bytes for one bandwidth distribution is shown in Fig. 9(a). For the full-wavelength policy, the type of OXC is not as important, because this policy does not groom requests (hence, the difference in its performance is mainly due to the different RWA algorithms used for Hybrid and Opaque). For the grooming-capable policies, we observe that the Opaque versions perform significantly better than their Hybrid counterparts and are able to provision more bandwidth. This is because Opaque OXCs perform full OEO conversion, so they can groom better than Hybrid OXCs. The Adaptive approaches are not shown in Fig. 9(a) because their performance is not significantly different from that of their corresponding policies.

Fig. 9(b) shows the performance of our policies for a second bandwidth distribution. As in Fig. 9(a), the policies perform better for Opaque than for Hybrid. We observe that both Adaptive approaches slightly improve over their corresponding fixed policies.


Fig. 9. Performance of provisioning DDRs in both Hybrid and Opaque networks. (a), (b) Fraction of unprovisioned bytes for two bandwidth distributions.

Fig. 10. Average number of used transceivers per OXC, for both the Hybrid and Opaque scenarios.

D. Resource Utilization

To evaluate the efficiency of our DDR provisioning approaches, we studied the utilization of network resources (transceiver, wavelength, and lightpath utilization) for both the Hybrid and the Opaque cases. Because of space considerations, in Fig. 10 we only show the average transceiver utilization, computed as the time-weighted average of the number of utilized transmitters and receivers over the simulation time, i.e., each sample is weighted by the interval of time between two consecutive events, as in [14]. We use bidirectional transceiver slots. Since today's networks are often overprovisioned, for this experiment we assume that the capacity of the links (i.e., the number of wavelengths) is large enough to satisfy all requests. This way, we can compare the transceiver utilizations fairly, with a constant value (namely, zero) of unprovisioned bandwidth for all our approaches [14].
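One possible way to compute this time-weighted average is sketched below, assuming the simulator logs, at every event, the time elapsed since the previous event and the number of transmitters and receivers in use; the per-event averaging of (Tx + Rx) / 2 and the log format are assumptions on our part.

# Sketch: time-weighted average transceiver utilization per OXC.
# events: list of (interval_s, transmitters_in_use, receivers_in_use).

def avg_transceiver_utilization(events):
    """Average (Tx + Rx) / 2, weighted by the length of each inter-event interval."""
    total_time = sum(dt for dt, _, _ in events)
    weighted = sum(dt * (tx + rx) / 2.0 for dt, tx, rx in events)
    return weighted / total_time if total_time else 0.0

# Example: 10 s with 4 Tx / 4 Rx in use, then 30 s with 8 Tx / 6 Rx.
log = [(10.0, 4, 4), (30.0, 8, 6)]
print(avg_transceiver_utilization(log))  # -> (10*4 + 30*7) / 40 = 6.25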

Fig. 10 shows the transceiver utilization for both node architectures for one bandwidth distribution (the other distributions achieve similar results). For both fixed policies, provisioning using Hybrid OXCs consumes fewer transceivers than provisioning using Opaque OXCs. This is because, in the Hybrid case, transceivers are used only at the lightpath's end nodes, whereas in the Opaque case, each intermediate node on the path has to terminate and set up lightpaths, which consumes transceiver equipment. Fig. 10 also shows that, for both Hybrid and Opaque, the full-wavelength policy is more transceiver-efficient than the grooming policy. This is because it uses a full wavelength's capacity (i.e., optimal lightpath utilization); hence, transceivers are also used efficiently. The grooming policy, on the other hand, keeps lightpaths active (and transceivers utilized) even if only a fraction of a lightpath's capacity is actually used. This result explains why the full-wavelength policy is more efficient when transceivers are the bottleneck (the result obtained in Section IV-A).
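As a back-of-the-envelope illustration of why Opaque nodes consume more transceivers, the sketch below counts the transceivers one connection occupies along an h-hop path: with Hybrid OXCs the lightpath can bypass intermediate nodes optically, while with Opaque OXCs every hop is terminated and regenerated. This simple counting model is ours, not a formula from the paper.

# Rough transceiver accounting for one connection routed over an h-hop path.
# Hybrid: a single end-to-end lightpath, terminated only at its end nodes.
# Opaque: OEO at every node, so each hop consumes one Tx at its head and
# one Rx at its tail. This counting model is an illustrative assumption.

def transceivers_hybrid(hops):
    return 2            # one transmitter at the source, one receiver at the destination

def transceivers_opaque(hops):
    return 2 * hops     # one Tx/Rx pair per hop

for h in (1, 2, 4):
    print(h, transceivers_hybrid(h), transceivers_opaque(h))
# Single-hop paths cost the same; longer paths favor the Hybrid architecture.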

E. Numerical Study for the Mathematical Model

The MILP formulations presented in Section III-D are solved using CPLEX [28], a commercial linear-programming package. To obtain a lower bound for our DDR provisioning heuristics, we use the mathematical formulations as benchmarks. Recall that the MILPs solve the offline setting of our problem, where all requests are known in advance, while our algorithms must schedule each request when it arrives.

Since solving our MILP formulations is computationally demanding, we use a small six-node mesh topology as in [22] (a six-node logical ring with two chords), equipped with Opaque switches and bidirectional links carrying two wavelengths per link (each wavelength of capacity 10 Gbps). Our studied bandwidth distribution has two MinRate granularities, 5 Gbps : 10 Gbps = 65 : 35. All file sizes are 10 GB. A fixed number of alternate paths is used. For all approaches compared in Fig. 11, we assign uniform (unit) costs to the links. Requests are generated uniformly between node pairs, and DDR arrivals are independent with an average arrival rate of 1.0.
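For concreteness, a sketch of how such a benchmark instance could be generated is shown below: a six-node logical ring with two chords (the chord placement is an assumption, since the exact topology of [22] is not reproduced here), two 10-Gbps wavelengths per bidirectional link, 10-GB files, MinRate drawn as 5 or 10 Gbps with probabilities 0.65/0.35, uniformly chosen node pairs, and Poisson arrivals of rate 1.0. Deadlines are omitted because their distribution is not restated in this section.

# Sketch: generating one random instance for the MILP benchmark described above.
# The chord placement and all field names are illustrative assumptions.
import random

NODES = list(range(6))
RING = [(i, (i + 1) % 6) for i in range(6)]            # six-node logical ring
CHORDS = [(0, 3), (1, 4)]                              # assumed chord placement
LINKS = RING + CHORDS                                  # bidirectional links
WAVELENGTHS_PER_LINK = 2
WAVELENGTH_GBPS = 10.0
FILE_SIZE_GB = 10.0                                    # all files are 10 GB

def generate_ddrs(num_requests, arrival_rate=1.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    ddrs = []
    for _ in range(num_requests):
        t += rng.expovariate(arrival_rate)             # Poisson arrivals, rate 1.0
        src, dst = rng.sample(NODES, 2)                # uniformly chosen node pair
        min_rate = 5.0 if rng.random() < 0.65 else 10.0   # MinRate split 65:35
        ddrs.append({"arrival_s": t, "src": src, "dst": dst,
                     "min_rate_gbps": min_rate, "file_size_gb": FILE_SIZE_GB})
    return ddrs

print(generate_ddrs(5)[:2])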

Fig. 11 shows the average number of unprovisioned requests for 10, 20, 30, 40, and 50 DDRs (results in Fig. 11 are averages of 10 ILP runs with different seeds). Note that, because the file sizes of all DDRs are the same (10 GB), the two performance metrics, Unprovisioned Requests and Unprovisioned Bytes, corresponding to the MILP objectives in (11) and (12), yield identical results. We observe that the performance of our heuristics is fairly close to that of their corresponding MILPs. One of the fixed allocation approaches can provision more requests than the other, both in the MILP and in the heuristic versions. As detailed in Section III-D, the flexible-rate MILP is a lower bound on both fixed-allocation MILPs. Fig. 11 shows that the flexible-rate MILP can usually improve on the fixed allocation MILPs, especially for a larger number of offered demands, where there may be more opportunities for rate-choice optimization; however, the improvement may depend on many parameters, such as the bandwidth distribution or the arrival process. ILP computation times (on a 3-GHz Pentium-4 HT processor with 1 GB of RAM) increase significantly for larger numbers of offered demands (for the few cases where CPLEX did not find the optimal result within 12 h of execution, we chose the best objective obtained thus far; the maximal distance to the optimal solution is always under 2.2%). Computational times for the fixed-allocation MILPs are significantly smaller than those of the flexible-rate MILP.

Fig. 11. Results of the mathematical approaches.

V. CONCLUSION

We studied the problem of provisioning DDRs over WDM mesh networks by allowing flexible transfer rates. We investigated the effect of using different node architectures (Hybrid and Opaque) on the performance of the network. We devised three categories of bandwidth-allocation algorithms: fixed, adaptive to the network state, and strategies that allow transfer-rate reconfiguration. We studied the problem on a comprehensive set of traffic distributions. Our results show that: 1) among the fixed bandwidth-allocation strategies, the grooming-oriented policy usually performs better when transceivers are not the scarce resource, while the full-wavelength policy outperforms it when transceivers are the main source of blocking; 2) changing-rates strategies, even if the transfer rates are changed infrequently, improve over the other bandwidth-allocation approaches, and, for some bandwidth distributions, the Adaptive policy also slightly improves over the fixed allocation strategies; 3) provisioning DDRs in a network with Opaque OXCs (when using subwavelength-granularity transfer rates) generally accepts more services than in a network using Hybrid OXCs; and 4) our DDR provisioning algorithms are benchmarked by solutions obtained from MILP models.

REFERENCES

[1] B. Mukherjee, Optical WDM Networks. New York: Springer, 2006.
[2] D. Saha, B. Rajagopalan, and G. Bernstein, “The optical network control plane: State of the standards and deployment,” IEEE Commun. Mag., vol. 41, no. 8, pp. S29–S34, Aug. 2003.
[3] B. Mukherjee, “Architecture, control, and management of optical switching networks,” in Proc. IEEE/LEOS Photon. Switch. Conf., Aug. 2007, pp. 43–44.
[4] I. Foster, M. Fidler, A. Roy, V. Sander, and L. Winkler, “End-to-end quality of service for high-end applications,” Comput. Commun., vol. 27, no. 14, pp. 1375–1388, Sep. 2004.
[5] T. Ferrari, Ed., “Grid network services use cases from the e-Science community,” Open Grid Forum informational document, 2007.
[6] H. Miyagi, M. Hayashitani, D. Ishii, Y. Arakawa, and N. Yamanaka, “Advanced wavelength reservation method based on deadline-aware scheduling for lambda grid networks,” J. Lightw. Technol., vol. 25, no. 10, pp. 2904–2910, Oct. 2007.
[7] M. Netto, K. Bubendorfer, and R. Buyya, “SLA-based advance reservations with flexible and adaptive time QoS parameters,” in Proc. ICSOC, 2007, vol. 4749, Lecture Notes in Comput. Sci., pp. 119–131.
[8] S. Balasubramanian and A. Somani, “On traffic grooming choices for IP over WDM networks,” in Proc. Broadnets, San Jose, CA, Oct. 2006, pp. 1–10.
[9] S. Huang and R. Dutta, “Research problems in dynamic traffic grooming in optical networks,” presented at the 1st Int. Workshop Traffic Grooming, San Jose, CA, Oct. 2004.
[10] H. Zhu, H. Zang, K. Zhu, and B. Mukherjee, “A novel generic graph model for traffic grooming in heterogeneous WDM mesh networks,” IEEE/ACM Trans. Netw., vol. 11, no. 2, pp. 285–299, Apr. 2003.
[11] K. Zhu and B. Mukherjee, “Traffic grooming in an optical WDM mesh network,” IEEE J. Sel. Areas Commun., vol. 20, no. 1, pp. 122–133, Jan. 2002.
[12] H. Zhu, H. Zang, K. Zhu, and B. Mukherjee, “Dynamic traffic grooming in WDM mesh networks using a novel graph model,” in Proc. IEEE Globecom, Taipei, Taiwan, Nov. 2002, vol. 3, pp. 2681–2685.
[13] K. Zhu, H. Zang, and B. Mukherjee, “A comprehensive study on next-generation optical grooming switches,” IEEE J. Sel. Areas Commun., vol. 21, no. 7, pp. 1173–1186, Sep. 2003.
[14] M. Tornatore, A. Baruffaldi, H. Zhu, B. Mukherjee, and A. Pattavina, “Holding-time-aware dynamic traffic grooming,” IEEE J. Sel. Areas Commun., vol. 26, no. 3, pp. 28–35, Apr. 2008.
[15] J. Kuri, N. Puech, M. Gagnaire, E. Dotaro, and R. Douville, “Routing and wavelength assignment of scheduled lightpath demands,” IEEE J. Sel. Areas Commun., vol. 21, no. 8, pp. 1231–1240, Oct. 2003.
[16] C. V. Saradhi, L. K. Wei, and M. Gurusamy, “Provisioning fault-tolerant scheduled lightpath demands in WDM mesh networks,” in Proc. IEEE Broadnets, Oct. 2004, pp. 150–159.
[17] B. Wang, T. Li, X. Luo, Y. Fan, and C. Xin, “On service provisioning under a scheduled traffic model in reconfigurable WDM optical networks,” in Proc. IEEE Broadnets, Oct. 2005, pp. 13–22.
[18] A. Jaekel, “Lightpath scheduling and allocation under a flexible scheduled traffic model,” in Proc. IEEE Globecom, Nov. 2006, pp. 1–5.
[19] B. Wang and A. Deshmukh, “An all hops optimal algorithm for dynamic routing of sliding scheduled traffic demands,” IEEE Commun. Lett., vol. 9, no. 10, pp. 936–938, Oct. 2005.
[20] J. Zheng and H. Mouftah, “Routing and wavelength assignment for advance reservation in wavelength-routed WDM optical networks,” in Proc. IEEE Int. Conf. Commun., Jun. 2002, vol. 5, pp. 2722–2726.
[21] H. Zang, J. P. Jue, and B. Mukherjee, “A review of routing and wavelength assignment approaches for wavelength-routed optical WDM networks,” Opt. Netw. Mag., vol. 1, no. 1, pp. 47–60, Jan. 2000.
[22] A. Banerjee, W. Feng, D. Ghosal, and B. Mukherjee, “Algorithms for integrated routing and scheduling for aggregating data from distributed resources on a lambda grid,” IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 1, pp. 24–34, Jan. 2008.
[23] D. Andrei, M. Tornatore, D. Ghosal, C. Martel, and B. Mukherjee, “On-demand provisioning of data-aggregation requests over WDM mesh networks,” in Proc. IEEE Globecom, Nov. 2008, pp. 1–5.
[24] E. Coffman, “An introduction to combinatorial models of dynamic storage allocations,” SIAM Rev., vol. 25, no. 3, pp. 311–325, Jul. 1983.
[25] M. Batayneh, D. Schupke, M. Hoffmann, A. Kirstadter, and B. Mukherjee, “Optical network design for a multiline-rate carrier-grade Ethernet under transmission-range constraints,” J. Lightw. Technol., vol. 26, no. 1, pp. 121–130, Jan. 2008.
[26] J. Yen, “Finding the K shortest loopless paths in a network,” Manage. Sci., vol. 17, no. 11, pp. 712–716, Jul. 1971.
[27] L. Kleinrock, Queueing Systems. New York: Wiley, 1976.
[28] ILOG CPLEX 9.0, CPLEX 9.0 User’s Manual, ch. 17, “Logical constraints in optimization,” Oct. 2003 [Online]. Available: http://www.ilog.com/products/cplex/


[29] “Link capacity adjustment scheme (LCAS) for virtual concatenated signals,” ITU-T Recommendation G.7042, Feb. 2004.
[30] S. Acharya, B. Gupta, P. Risbood, and A. Srivastava, “PESO: Low overhead protection for Ethernet over SONET transport,” in Proc. IEEE INFOCOM, Mar. 2004, pp. 165–175.
[31] S. Rai, O. Deshpande, C. Ou, C. Martel, and B. Mukherjee, “Reliable multipath provisioning for high-capacity backbone mesh networks,” IEEE/ACM Trans. Netw., vol. 15, no. 4, pp. 803–812, Aug. 2007.

Dragos Andrei (S’05) received the B.S. degree from the Polytechnic University of Bucharest, Bucharest, Romania, in 2004, and the M.S. and Ph.D. degrees in computer science from the University of California, Davis, in 2007 and 2009, respectively.

His research interests are on the design and performance analysis of algorithms for traffic engineering and network optimization in optical backbone networks and high-speed grids.

Massimo Tornatore (S’03–M’07) received the Laurea degree in telecommunication engineering and the Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2006, respectively.

During his Ph.D. course, he worked in collaboration with Pirelli Telecom Systems and Telecom Italia Labs, and he was a visiting Ph.D. student with the Networks Lab, University of California, Davis, and with CTTC Laboratories, Barcelona, Spain. He is currently a Post-Doctoral Researcher with the Department of Computer Science, University of California, Davis. He is author of about 60 conference and journal papers. His research interests include design, protection strategies, traffic grooming in optical WDM networks, and group communication security.

Dr. Tornatore was a co-recipient of the Best Paper Award at IEEE ANTS 2008 and the Optical Networks Symposium at IEEE Globecom 2008.

Marwan Batayneh (S’07) received the B.S. degree from Jordan University of Science and Technology, Irbid, Jordan, in 2001, and the M.S. and Ph.D. degrees from the University of California, Davis, in 2006 and 2009, respectively, all in electrical and computer engineering.

Since July 2009, he has been with Integrated Photonics Technology (IPITEK) Inc., Carlsbad, CA, as a Research and Development Scientist in the area of carrier-grade Ethernet. His research focus is on the design and analysis of carrier Ethernet architectures.

Charles U. Martel received the B.S. degree in computer science from the Massachusetts Institute of Technology, Cambridge, in 1975, and the Ph.D. degree in computer science from the University of California (UC), Berkeley, in 1980.

Since 1980, he has been a computer science Professor at UC Davis, where he was Chairman of the Department from 1994 to 1997. He has worked on a broad range of combinatorial algorithms, including applications to networks, parallel and distributed systems, scheduling, and security. His current research interests include the design and analysis of network algorithms, graph algorithms (particularly for modeling small worlds), and security algorithms. As a five-time world bridge champion, he also has an interest in computer bridge playing programs.

Biswanath Mukherjee (S’82–M’87–F’07) received the B.Tech. (Hons.) degree from the Indian Institute of Technology, Kharagpur, India, in 1980, and the Ph.D. degree from the University of Washington, Seattle, in 1987.

He holds the Child Family Endowed Chair Professorship at the University of California (UC), Davis, where he has been since 1987, and served as Chairman of the Department of Computer Science from 1997 to 2000. He served a five-year term as a Founding Member of the Board of Directors of IPLocks, Inc., a Silicon Valley startup company. He has served on the Technical Advisory Board of a number of startup companies in networking, most recently Teknovus, Intelligent Fiber Optic Systems, and LookAhead Decisions Inc. (LDI). He is author of the textbook Optical WDM Networks (New York: Springer, 2006) and Editor of Springer’s Optical Networks book series.

Dr. Mukherjee is co-winner of the Optical Networking Symposium Best Paper Award at the IEEE Globecom 2007 and 2008 conferences. He is a co-winner of the 1991 National Computer Security Conference Best Paper Award. He is winner of the 2004 UC Davis Distinguished Graduate Mentoring Award. He serves or has served on the editorial boards of eight journals, most notably the IEEE/ACM TRANSACTIONS ON NETWORKING and IEEE Network. He served as the Technical Program Chair of the IEEE INFOCOM ’96 conference and as Technical Program Co-Chair of the Optical Fiber Communications (OFC) Conference 2009. He is Steering Committee Chair of the IEEE Advanced Networks and Telecom Systems (ANTS) Conference (the leading networking conference in India promoting industry–university interactions), and he served as General Co-Chair of ANTS in 2007 and 2008.