Analysis of Multi-Server Scheduling Paradigm for Service Guarantees during Network Mobility



Wireless Pers Commun (2012) 63:177–197, DOI 10.1007/s11277-010-0114-5


Syed Zubair Ahmad · Muhammad Abdul Qadir · Muhammad Saeed Akbar · Abdelaziz Bouras

Published online: 12 September 2010. © Springer Science+Business Media, LLC. 2010

Abstract Multi-server scheduling of traffic flows over heterogeneous wireless channels raises fresh concerns of inter-packet delay variations and the associated problems of out-of-sequence reception, buffer-management complexity, packet drops, and re-ordering overhead. In this paper, we present a multi-server scheduling algorithm that is specifically tuned for mobile routers that are equipped with multiple wireless interfaces and have attained multiple care-of-address registrations with their home agent (HA). The proposed adaptive, self-clocked, multi-server (ASM) scheduling algorithm is based on predetermined transmission deadlines for each packet arriving at the mobile router. The mobile flows receive the desired service levels in accordance with their negotiated service rates and are constrained only by the cumulative capacity of all active links. The major challenge lies in the handling of asymmetric channels, stitching them into a unified virtual channel of higher capacity with reliable service guarantees during mobility. The sorted list of transmission schedules is used to assign physical channels in increasing order of their availability. This approach specifically encapsulates the physical-layer disconnections during handovers and ensures continuous service to ongoing flows. The proposed scheduling scheme is supplemented by an analytical model and simulations to verify its efficacy. The simulation results demonstrate a high degree of reliability and scalability of service provisioning to flows during mobility.

S. Z. Ahmad (B) · M. A. Qadir · M. S. Akbar
Center of Distributed and Semantic Computing, Mohammad Ali Jinnah University,
Islamabad Campus, Islamabad, Pakistan
e-mail: [email protected]

M. A. Qadir
e-mail: [email protected]

M. S. Akbar
e-mail: [email protected]

A. Bouras
Center of Studies, Research and Research-action Lumiere (CERRAL),
IUT-Lumiere, University of Lumiere Lyon 2, Lyon, France
e-mail: [email protected]



Keywords Adaptive self-clocked multi-server (ASM) scheduling · Network mobility (NEMO) · Quality-of-service (QoS) · End-to-end (E2E) delay variations · Multi-path flow management

1 Introduction

Efficient communication-resource utilization has been actively studied in the backdrop of wireless and mobile networks. The scarcity of radio resources, accompanied by mobility events such as link-quality-low and link-down, seriously impairs the service level agreements (SLA) agreed at the time of session initiation. In the upcoming era of heterogeneous wireless networks (HWN), there may be scenarios where one network is acutely congested while others are under-utilized. In such a scenario, network resources are much under-utilized as compared to their installed capacity, yet service provisioning may still be low. This raises issues of reliability and scalability of service models and requires alternative strategies to improve the situation. The ever-increasing density of multi-mode mobile devices (MMDs), such as multi-interface mobile routers (MMRs) and multi-interface mobile terminals (MMTs), provides a valuable cushion for improving service guarantees with load sharing [1]. The simultaneous use of multiple available radio links therefore has diverse applications in scenarios like network mobility, bandwidth aggregation (BAG), mobility management, and QoS with service fairness, along with optimum global resource utilization in an HWN environment.

The use of multiple interfaces has been predominantly studied in lieu of BAG [2–4]. The main purpose of these schemes has been confined to meeting the bandwidth needs of applications through capacity aggregation of low-rate wireless links. These schemes are very useful for homogeneous channels but have not been tested on heterogeneous channels under dynamic link conditions. The asymmetry of heterogeneous channels, caused by different channel characteristics such as different path lengths and a different state of congestion on each path, raises issues of out-of-sequence (OOS) reception of packets and can be a source of expensive buffering and re-ordering management overhead [5]. BAG techniques are also discussed in terms of multiple reservations that ensure higher service provisions for mobile sessions [6]. Reservation-based techniques of service provisioning require state-based forwarding, which carries a high state-maintenance cost at each router, along with the overhead of negotiations, re-negotiations, and over-provisioning during mobility.

In this paper, we present an adaptive, state-less, multi-server (ASM) scheduling scheme that provides instant service guarantees without the need for over-provisioning of resources through multiple reservations. The above-mentioned support is achieved through scheduling backlogged flows on the multiple available interfaces of an MMD. The bandwidth and time-criticality of each flow is maintained through self-clocking, using the concept of virtual clock (VC) [18]. The data rate r_i for flow i is served through the aggregated capacity C_A^s of all links. The capacity aggregation is achieved through a concurrent link-monitoring service that takes care of physical link changes during mobility. The link capacity is measured in terms of its frame availability in a virtual time slot, which is defined as the super-set of all physical frames in their ascending order of availability. These frames are then assigned to the backlogged traffic in the ascending order of transmission deadlines. The rest of the paper is organized as follows. Section 2 formulates the problem of multi-server scheduling during mobility. Section 3 presents the proposed scheduling scheme. Section 4 describes the analytical modeling of the proposed scheduling scheme. Section 5 discusses the simulation results of the proposed scheme. We conclude in Section 6.



2 Problem Formulation for Multi-Server Scheduling

Single-server models of scheduling network traffic have been extensively discussed in lieu of server latencies, fair queuing, and service guarantees [7,8]. The per-flow state-maintenance models, generally referred to as latency-rate (LR) servers, quantify service guarantees in terms of server latency and flow rate [13]. Similarly, guaranteed-rate (GR) servers work on flow-rate and transmission-rate parameters to quantify service guarantees [10]. GR servers also upper-bound the maximum delay that a packet may experience before it is transmitted [9]. In [9], it is shown that LR and GR servers are equivalent. Multi-server scheduling over heterogeneous links faces some added challenges of asymmetric link conditions and path characteristics, such as rate variations, lack of inter-channel synchronization, variable frame lengths, and multiple path lengths. This eventually leads to inconsistent E2E delays for packets of the same flow, causing problems of packet drops, buffering cost, and reorder management. During mobility, the problem is further complicated by frequent changes in the point-of-attachment (PoA) and frequent disconnections and reconnections, with associated latencies in the completion of these processes. A multi-server scheduler in such an environment has to adapt accordingly to provide service guarantees within physical constraints. Therefore, the problem of multi-server scheduling converges to accomplishing a single virtual path with higher capacity (equal to or higher than each individual link) that maximizes QoS w.r.t. bounded delay and guaranteed rates and provides a higher probability of in-order arrival. Further, due to changes in PoA, the state-maintenance cost also needs to be minimized to avoid excessive negotiations and over-provisioning.

The stateless scheduling of traffic on a single server has been studied in lieu of the differentiated services (DiffServ) architecture of service provision [14,16]. The DiffServ model maintains flow states only at the network edges, and the core routers forward traffic without the need for any per-flow state maintenance [11,17]. This feature enhances the scalability of service provision at the cost of reduced reliability [14]. The stateless model of guaranteed-rate traffic forwarding has significant appeal during mobility, as it can eliminate state maintenance at the intermediate routers, which has lesser persistence due to changes in PoA [21]. In case of multiple tunnels in operation between the mobility agents, such as the MMD and HA, the state maintenance can be restricted to the tunnel ends [20]. Further, the fairness property of the scheduling service censures burst arrivals under the diminishing capacity of servers. A multi-server scheduler can also compensate for such conditions with the aggregated capacity of multiple links. Therefore, we converge to solving the problem of reliable service-guarantee provisioning during mobility through an adaptive, self-clocked, multi-server scheduler. It needs to be adaptive to cater for physical link changes; it should be self-clocked to avoid state-maintenance overhead in the intermediate routers; and it should use the multiple available links of an MMD to maximize service guarantees.

Virtual Clock and its numerous variants provide a self-clocking mechanism to realize bounded delays at the servers [15,16]. Guaranteed rate clock (GRC) is one such scheme for per-flow scheduling algorithms and is defined in [18] as:

GRC^j(p_f^1) = a^j(p_f^1) + l_f^1 / r_f^1,  (1)

GRC^j(p_f^k) = max(a^j(p_f^k), GRC^j(p_f^{k-1})) + l_f^k / r_f^k,  for k > 1,  (2)



Table 1 List of notations and symbols

p_f^k          Packet k of flow f
ASM(p_f^k)     Deadline of packet p_f^k for a server s ∈ S belonging to ASM
a_ASM(p_f^k)   Arrival time of packet k of flow f at the ASM server
l_f^k          Length of packet k of flow f
ρ_f^k          Arrival rate of flow f at the ASM server
r_f^k          Rate for packet k of flow f
l_s^max        Length of the maximum-length packet served at s ∈ S
θ_s            Latency of server s ∈ S
π_j            Propagation delay between nodes j and j+1
C_s            Capacity of server s ∈ S
δ_f^{k,k+1}    Delay variation between packets k and k+1 of flow f
φ_f^j          Upper bound on the wait for OOS arrival of packet p_f^k
ϑ_f^j          Upper bound on buffer occupancy due to late arrival of packet p_f^k
P[ψ_f^j]       Probability of dropping packet p_f^k
C_A^s          Aggregated capacity of ASM, measured in bytes
B_A^s          Aggregated backlog of ASM, measured in bytes
N              Maximum number of servers at ASM
H_k^s          kth system busy period of ASM
β_j            Constant, generally given by l_j^max / c_j

where GRC^j(p_f^k) is the GRC of the kth packet p_f^k of flow f at server j and is used as the transmission deadline of this packet. The other notations used in the rest of the paper are summarized in Table 1. The GRC model guarantees that a packet p_f^k shall be transmitted by server j no later than GRC^j(p_f^k) + l_j^max / c_j, provided the server capacity is not exhausted [18]. These guarantees are quite useful under the assumption of consistent channel availability and strictly constant-bit-rate (CBR) traffic, but may lag behind in situations where channel availability is constrained by mobility. Further, in burst-arrival cases, it adds predetermined latencies for each packet of the burst sub-streams irrespective of channel availability, which not only increases the inter-arrival delay of the sub-stream packets but also under-utilizes the link resource in case the channel was free. We will show how this situation can be effectively rectified using our proposed multi-server scheduling. The delay bound of packet k at the jth GRC server is evaluated in [18] as

GRC^j(p_f^k) ≤ GRC^{j-1}(p_f^k) + max_{i∈[1..k]} (l_f^i / r_f^i) + π_{j-1} + l_{j-1}^max / c_{j-1},  (3)

where max_{i∈[1..k]} (l_f^i / r_f^i) is the maximum l/r ratio up to packet number k, and π_{j-1} is the propagation delay between nodes j-1 and j. It is noticeable that all the values on the right-hand side of inequality (3) are known at server j-1. Hence GRC^j(p_f^k), obtained in (3), may be described in terms of GRC^{j-1}(p_f^k), which forms a recurrence and can be traced back to the



Fig. 1 System model of multi-server scheduling during network mobility

first server in the path. In such cases, the right-hand side of (3) is used as the upper bound of the scheduling model at server j.

The challenge of rate fluctuations and possible disconnections faced by MMDs during mobility may be best handled through the unified capacity of all available links connected to their respective gateways. Figure 1 sketches one such scenario, in which an MMD is modeled as an ASM server comprising N GR servers, i.e., ASM_N^U = {GR_1^0, GR_2^0, ..., GR_N^0}, where U indicates the up-link direction and can be replaced by D in the downlink direction. Each of the GR servers in the ASM leads a path comprising GR servers connected in series, as shown in Fig. 1. The ASM_N^U serves in the uplink direction, whereas its counterpart ASM_N^D serves in the downlink direction and is deployed at the HA or any other configuration that acts as a mobility agent, such as discussed in [21]. There is also a provision of direct parallel-path communication between the sender and receiver. The QoS guarantees may be further ensured on the static path during mobility by using DiffServ between the mobility agent and the correspondent node (CN). This option, though trivial, is not a prerequisite for this proposal. The scope of this paper shall therefore be restricted to scheduling the multi-class traffic over the multiple tunnels between the MMD and the HA. The state of the system is maintained at the tunnel ends by synchronizing the aggregated capacity (C_A) and the list of active flows as a function of their committed rates F(r_i), where F(r_i) = ∪_{i=1}^{all} f_i(r_i).

The proposed ASM scheduling server achieves a greater degree of bounded-delay guarantees w.r.t. service requirements. Another major problem that is handled through the ASM server is the possibility of capacity depletion, which eventually leads to the dropping of packets. In the case of multi-server traffic distribution, the probability of capacity depletion reduces by a factor of 1/N as compared to its single-server scheduling counterparts. In the next section, we present the proposed ASM scheduler with its salient properties and performance bounds.



3 ASM Scheduler

In this section, we illustrate the proposed ASM scheduling algorithm. The main emphasis of the algorithm is to be adaptive to the physical link conditions, which are transformed into an aggregate capacity in the system. The algorithm takes advantage of the excess capacity to plan sharper schedules for the traffic, particularly in the burst-arrival case.

3.1 Scheduling Model

Let there be an ASM server S = {s_1, s_2, ..., s_N} comprising distinct servers, where each server s_i ∈ S is a GRC server. This implies that s_i = GR_i^0 w.r.t. Fig. 1, but for the sake of simplicity we will use the notation S for the ASM server, s_i for its components, and ASM for the deadline of a specific packet. The ASM server also assumes a set of GR servers in each of its forward paths starting from s_i. Each path may comprise a distinct number of GRC servers. The ASM server virtually sees the available aggregated capacity

C_A^s = Σ_{i=1}^M C_{s_i} ≥ C_{s_i},  ∀ s_i ∈ S,

where C_{s_i} is the capacity of the ith server (i.e., s_i). This eventually means that the backlogged traffic can be served on multiple servers, resulting in faster service as compared to a single server. The ASM server schedules a packet according to the following definition:

ASM(p_f^1) = a(p_f^1) + (l_f^1 / r_f^1)(B_A^s / C_A^s),  (4)

ASM(p_f^k) = max(a(p_f^k), ASM(p_f^{k-1})) + (l_f^k / r_f^k)(B_A^s / C_A^s),  for k > 1,  (5)

where ASM(p_f^k) is the deadline for packet k of flow f over any of the s_i ∈ S, and B_A^s is the total backlogged bytes in the system. It is noticeable in (4) and (5) that the deadline calculated for each packet in ASM is scaled by a factor of (B_A^s / C_A^s), which helps in providing a sharper deadline for each packet, provided the capacity is more than the total backlogged bytes. The scaling of the transmission schedule of a packet by (B_A^s / C_A^s) ensures that the deadlines shall be adaptable to changes in the available capacity of the system. This factor is also instrumental in accommodating burst sub-streams on the access bandwidth available during any arbitrary time period. The scaling factor accommodates the availability M ≤ N of servers, and any increase in the number of available servers shall automatically result in an accompanying decrease in the deadline of each scheduled packet. Similarly, a decrease in the number of available servers shall also increase the transmission deadlines.

It is important to note here that, although there may be a possibility of more ASM servers in the forward path, for the sake of simplicity we assume only the first server to be ASM and all subsequent servers to be of GRC type. In case of multiple ASM deployments in a network, the performance gain can be significantly higher. Such deployments could find high applicability in large mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs), where the changes in network topology favor reactive, opportunistic strategies of traffic forwarding over the multiple available paths. A subset of MMDs in a large set of mobile nodes can effectively distribute load on the available multiple paths.
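In code, the only difference between (4)–(5) and plain GRC clocking is the B_A^s/C_A^s scaling of the service term. A minimal sketch, assuming the backlog and aggregated capacity are sampled at each arrival (all names are ours, not the paper's):

```python
def asm_deadline(arrival, length, rate, prev_deadline, backlog, capacity):
    """Deadline per Eqs. (4)-(5): GRC clocking with the service term
    (l/r) scaled by the backlog-to-capacity ratio B/C."""
    start = arrival if prev_deadline is None else max(arrival, prev_deadline)
    return start + (length / rate) * (backlog / capacity)

# A half-loaded aggregate (B/C = 0.5) yields a deadline twice as sharp as
# the plain GRC deadline (B/C = 1) for the same packet.
d_plain = asm_deadline(0.0, 1000, 1e6, None, 1.0, 1.0)
d_asm = asm_deadline(0.0, 1000, 1e6, None, 0.5, 1.0)
print(d_plain, d_asm)
```

When a link joins or drops, only the sampled `capacity` changes; the next computed deadline adapts without any per-flow renegotiation, which is the adaptivity claimed for the scheme.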



Fig. 2 A graphical representation of the ASM scheduler: the arrived-packet list, the deadline-based sorted list of arrived packets, and the logical mapping of the sorted list onto the physical channels over time

3.1.1 Traffic Distribution

The output of the ASM server is a sorted list of departure schedules for the backlogged packets. The choice of a particular s_i for this list rests with ASM and is based on an ordered list of available transmission slots T = {t_1, t_2, ..., t_N} over a backlogged period. Each t_i in T contains the physical channel number and the time of its availability. The backlogged period is the accumulated service time of all backlogged packets, starting from the end of the idle time of the server, and is generally cyclic in nature. A cyclic backlogged period indicates continuous arrival of traffic at the server and completes only when the server becomes idle. The same is pictorially shown in Fig. 2, where we find three logical queues of traffic. The queue shown at the top of Fig. 2 contains traffic in its order of arrival. This queue is used by ASM to calculate the deadline of each arrived packet, in accordance with the guaranteed rate of the flow. The deadlines of the arrived packets result in a departure schedule sorted in ascending order, which is shown in the middle of Fig. 2. Finally, using the information available in T, the sorted traffic queue is mapped onto the physical channels within the constraints of the deadline of each packet.
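The mapping step described above amounts to merging the deadline-sorted packet list with the availability-sorted slot list T. A sketch, with hypothetical slot and packet records:

```python
import heapq

def map_to_channels(deadlines, slots):
    """Map deadline-sorted packets onto physical-channel slots in
    ascending order of slot availability.

    deadlines: iterable of (deadline, packet_id)
    slots: iterable of (available_at, channel_id) -- the ordered list T
    Returns a list of (packet_id, channel_id, transmit_at) triples.
    """
    slot_heap = list(slots)
    heapq.heapify(slot_heap)  # earliest-available slot first
    schedule = []
    for _deadline, pkt in sorted(deadlines):  # earliest deadline first
        available_at, channel = heapq.heappop(slot_heap)
        schedule.append((pkt, channel, available_at))
    return schedule

# The earliest deadline gets the earliest slot, regardless of arrival order.
print(map_to_channels([(0.003, "p2"), (0.001, "p1")],
                      [(0.0, "ch1"), (0.0005, "ch2")]))
```

Because slots for a channel that is mid-handover simply do not appear in T, physical-layer disconnections are absorbed by the mapping rather than handled as a special case.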

In TDMA-based networks, the frame and slot allocation information is provided by the base stations (BS) as per the session/call admission agreement and is very accurate in terms of time. Similarly, CDMA-based channel allocation schemes, too, have a very accurate time allocation for transmission in both the uplink and downlink directions. Generally, the time slots allocated for the multiple links shall be in accordance with the guaranteed rates of all admitted sessions. The above-mentioned scenario persists in the absence of mobility events, and the ASM attains a stable state. In case of a mobility event, the reduced aggregate capacity due to a disconnected link is accommodated through an increased deadline for each packet. There are two possibilities in such a scenario.



Case 1: C_A^s ≥ B_A^s

In this case, since the available capacity is more than the backlogged traffic, all the backlogged traffic shall be served as before the mobility event that reduced the capacity in the system. The minor difference that may result is a slight increase in the ratio (B_A^s / C_A^s). The corresponding impact is a slightly escalated deadline for each packet in the current backlogged period. Hence the system remains in a stable state after the disconnection of a link due to a mobility event.

Case 2: C_A^s < B_A^s

In this case, since the capacity is less than the backlogged traffic, the ratio (B_A^s / C_A^s) becomes greater than 1, causing an increase in the deadline of each packet beyond the desired service rate. The increase in delay reduces service guarantees to a certain degree, depending on how large the (B_A^s / C_A^s) ratio is. However, the service continues at a reduced rate until the restoration of the lost connectivity. Although such a scenario may not provide the desired service guarantees, it still ensures service continuity. Burst arrivals during this time may be hurt more seriously, as they may get longer deadlines and may have a higher probability of being dropped due to capacity depletion.
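Both cases follow directly from the scaling factor in (4)–(5); the numbers below are illustrative only:

```python
def service_term(length, rate, backlog, capacity):
    """Per-packet service component (l/r) * (B/C) from Eqs. (4)-(5)."""
    return (length / rate) * (backlog / capacity)

backlog = 1.5e5  # 150 kB backlogged

# Case 1: three active links keep the aggregate capacity above the backlog,
# so each deadline stays within the negotiated service time (B/C < 1).
t_stable = service_term(1500, 1e6, backlog, 3e5)

# Case 2: a handover drops links until capacity falls below the backlog;
# every deadline stretches (B/C > 1), but service continues at reduced rate.
t_degraded = service_term(1500, 1e6, backlog, 1e5)
print(t_stable, t_degraded)
```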

In the downlink direction, the anchor point where an ASM is deployed splits traffic in a similar fashion as discussed for the uplink direction, with a remote capacity-exchange procedure. Since there are no sensors available at that end to measure the link capacity of each wireless link, it is communicated through a packet exchange during the registration update process [21]. It may be recalled that multiple care-of-addresses are registered at the HA and that these multiple tunnels are used to carry forward traffic destined for the MMD. The work-conserving nature of the scheme also helps it in serving burst sub-streams, as it generates sharper schedules for the packets of such sub-streams provided the capacity is not exhausted. In the following text, we evaluate some key attributes of the ASM with respect to service rate, burst handling, and system (server) busy time.

Lemma 1 The kth system busy period of the ASM server, during a backlogged interval (t, t+τ], is given by

H_k^s(t, t+τ) ≤ Max_{i∈[1..J]} ASM(p_i^k) − Min_{i∈[1..J]} a_ASM(p_i^k) + Max_{s∈[1..N]} (L_s^max / C_s),

where H_k^s(t, t+τ) is the kth system busy period of backlogged traffic during the interval (t, t+τ].

Proof Let there be J backlogged packets in the kth system busy period, i.e., the maximum time after service has started for a packet before the server becomes idle [13]. In the case of ASM, service is spread over the multiple parallel channels of the available servers. These channels, though neither symmetric nor synchronized with each other, predominantly overlap in time, and the frame duration F_k = Σ_j (F_e^j − F_s^j), for all j in T_j^k, mostly equals the single largest slot time in T_j^k. (Note: F_s^j and F_e^j are the starting and ending times of each slot in the virtual frame T_j^k.)

In the case of overlapping channels, all the backlogged traffic will be served within F_k, and the last served packet shall be the one with the farthest departure schedule. Hence the service of the last packet shall start no later than Max_{i∈[1..J]} ASM(p_i^k) − Min_{i∈[1..J]} a_ASM(p_i^k). The second parameter for service completion is the length of the largest packet and the capacity of the server over which it is scheduled, given by Max_{s∈[1..N]} (L_s^max / C_s). Therefore, the system busy period can be given as follows in Eq. (6):

H_k^s(t, t+τ) ≤ Max_{i∈[1..J]} ASM(p_i^k) − Min_{i∈[1..J]} a_ASM(p_i^k) + Max_{s∈[1..N]} (L_s^max / C_s),  (6)

This also completes the proof. □

Lemma 2 The service rate of the ASM server is lower-bounded by

R_s(t1, t2) ≥ Min(Σ_{i=1}^J ρ_i, Σ_{i=1}^{M≤N} C_i) / H_s(t1, t2),

where R_s(t1, t2) is the service rate during the interval (t1, t2].

Proof The service rate of ASM is primarily constrained by the peak traffic arrival and the corresponding busy period of the server. Assuming that the peak service rate over the system busy time remains close to the mean service rate, the service rate can be abstracted by (7):

R_s ≥ Min(B_A^s, C_A^s),  (7)

Since the backlogged traffic is accumulated through multiple arrival rates during the session busy times of multiple flows, (7) can be further represented as (8):

R_s ≥ Min(Σ_{i=1}^J ρ_i, Σ_{i=1}^{M≤N} C_i),  (8)

where (ρ_i, σ_i) describes a token-bucket-conformant arrival with an average rate of ρ_i and a maximum burst size of σ_i during the time interval [t1, t2]. Lemma 1 provides the system busy time during any backlogged interval. Therefore, the service rate shall be measured in terms of the system busy period, and (8) can be represented as follows:

R_s(t1, t2) ≥ Min(Σ_{i=1}^J ρ_i, Σ_{i=1}^{M≤N} C_i) / H_s(t1, t2),  (9)

which also completes the proof of the lemma. □

Lemma 3 The maximum traffic backlogged in the ASM server is upper-bounded by

B_A^s(t1, t2) ≤ Σ_{i=1}^J Max(ρ_i, σ_i) − Min(Σ_{i=1}^J ρ_i, Σ_{i=1}^{M≤N} C_i) / H_s(t1, t2),

Proof The proof of the lemma is directly derivable from Lemma 2, which provides the lower bound for the service rate of ASM. Since the maximum arrival during the time interval (t1, t2] is constrained by the token-bucket regulator, the maximum arrival from all sources at a given time can be given by Σ_{i=1}^J Max(ρ_i, σ_i). Equation (9) provides the service rate of the server, and the backlog at any time t, t1 ≤ t < t2, can be deliberated as the difference of the arrival and the service rate, as given in (10), which also completes the proof of the lemma:

B_A^s(t1, t2) ≤ Σ_{i=1}^J Max(ρ_i, σ_i) − Min(Σ_{i=1}^J ρ_i, Σ_{i=1}^{M≤N} C_i) / H_s(t1, t2),  (10)

□

Lemma 3 indicates two important facts about the backlogged traffic. First, it shows the possibility of B_A^s(t1, t2) being zero. This is the case when the Max term on the right-hand side equals the Min term. In this situation the server shall be idle after the interval



(t1, t2]. The second possibility is that the Min term equals the capacity of the system. This will increase the backlog, and the chances of packet drops will also increase. This situation may arise mainly due to a mobility event and shall restore to normalcy as the new PoA of the dropped link is finalized. In the following theorem, we evaluate the burst-accommodating capability of the ASM server.

Theorem 1 The burst handling capability of the ASM server during an arbitrary time interval (t1, t2] is given by

Q(t1, t2) ≤ C_A^s(t1, t2) + Σ_{i=1}^J ρ_i − Σ_{i=1}^J Max(ρ_i, σ_i),

Proof The burst handling capacity lies in the positive gap between the capacity and the backlogged traffic, as defined in (10). A higher probability of burst handling arises when the gap is in favor of the capacity. This can be expressed as in (11):

B_A^s(t1, t2) / C_A^s(t1, t2) < 1,  (11)

Let the capacity-backlog gap be represented by Q, which also represents the free capacity; then (11) can be represented as

B_A^s(t1, t2) + Q(t1, t2) ≤ C_A^s(t1, t2),  (12)

Q(t1, t2) ≤ C_A^s(t1, t2) − B_A^s(t1, t2),  (13)

Putting in the value of B_A^s from Lemma 3, with the assumption that the available capacity is higher than the current backlogged traffic, we have

Q(t1, t2) ≤ C_A^s(t1, t2) + Σ_{i=1}^J ρ_i − Σ_{i=1}^J Max(ρ_i, σ_i),  (14)

Equation (14) shows that the available buffering capacity of ASM, augmented by the service rate (the second term on the right-hand side of (14)) during the interval (t1, t2], provides sufficient potential for handling burst arrivals in case B_A^s(t1, t2) / C_A^s(t1, t2) ≤ 1. The gap between the available capacity and the current backlogged traffic also ensures sharper deadlines for burst sub-stream arrivals and preserves the timeliness of such traffic. This also completes the proof of the theorem. □

In the next section, we present analytical models for the latency of the ASM server, its E2E delay bounds, delay-variation bounds, buffer occupancy, and OOS arrival bounds.

4 The Analytical Modeling

Multiple-path scheduling with diversity in the number of servers on each path raises issues of E2E delay variations and out-of-sequence arrival. For the estimation of these two metrics, we first prove the latency of the ASM server in the following lemma.

Lemma 4 The latency of the ASM server is upper-bounded by θ_s(p_f^k) ≤ ASM(p_f^k) + L_s^max / C_s, where θ_s(p_f^k) represents the latency faced by packet p_f^k before it is completely transmitted.



Proof To prove the above lemma, we use the definition of ASM as given in (4) and (5). Since ASM(p_f^k) is the deadline for the packet transmission at any of the servers s ∈ S, the maximum delay before server s starts transmitting the packet is bounded by ASM(p_f^k). Since the maximum transmission time needed would not be more than that of transmitting a maximum-sized packet over server s ∈ S, which is equal to L_s^max / C_s, the delay of the ASM is upper-bounded as given in (15):

θ_s(p_f^k) ≤ ASM(p_f^k) + L_s^max / C_s,  (15)

This also completes the proof of the lemma. □

This lemma provides a useful performance metric of the proposed scheduling scheme. The delay bound of the ASM is sharper than that of most GR scheduling schemes. First, it uses the full capacity of the server s ∈ S for the transmission delay and is guaranteed to hold the inequality L_s^max / C_s ≤ L_s^max / r_f. The inequality shows that the delay of ASM is less than that of any guaranteed-rate server. Second, the delay bound evaluated in Lemma 4 also removes the restriction of a specific rate and opens the chance for occasional burst sub-stream arrival and service. Hence, it can accommodate burst sub-streams with more flexibility and lesser delay.
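The comparison in the paragraph above is easy to check numerically; the link and flow figures below are illustrative only:

```python
def asm_latency_ub(deadline, l_max, capacity):
    """Lemma 4: theta_s(p) <= ASM(p) + L_s^max / C_s."""
    return deadline + l_max / capacity

L_MAX = 12000   # largest packet on server s, in bits
C_S = 11e6      # full link capacity, 11 Mbit/s
R_F = 1e6       # reserved flow rate, 1 Mbit/s

# Transmission term over the full link capacity vs. over the reserved rate:
# l_max/C_s <= l_max/r_f, so the ASM bound is the sharper of the two.
print(L_MAX / C_S, L_MAX / R_F)
print(asm_latency_ub(0.01, L_MAX, C_S))
```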

It has been shown in [9,14] that a majority of scheduling servers, such as PGPS, VC and GRC, belong to the GR scheduling class. In the case of multi-server scheduling, the E2E paths are also assumed to comprise GR servers. The ASM server schedules a packet on any of the available servers, after which each path may comprise a maximum of $Q$ servers (some of the paths may have a lower server count). Therefore, the ASM server is common to all paths, as it is the first server of each path, and the E2E delay is given by the sum over all servers in the path, including ASM. The delay of packet $p^j_f$ at the $K$th server, $D^K(p^j_f)$, of a series of GR servers is evaluated in [18] as follows:

$$D^K(p^j_f) \le GRC^K(p^j_f) + \alpha^K - A^1(p^j_f), \quad (16)$$

where $GRC^K(p^j_f)$ is the guarantee of server $K$ to transmit the packet $p^j_f$. It is also discussed in [14] that $GRC^K(p^j_f)$ can be linked back to $GRC^1(p^j_f)$ through back substitution. Using the methodology given in [14,18], we evaluate the E2E delay of a packet scheduled through ASM over a path comprising GR servers.

Theorem 2 The E2E delay $D(p^j_f)$ of the packet $p^j_f$, scheduled through ASM over the set of GRC servers, is upper-bounded by

$$D(p^j_f) \le \left(ASM(p^j_f) + \frac{l^{max}_s}{C_s}\right) - a^{ASM}(p^j_f) + (L-1)\max_{n\in[1..j]}\frac{l^n_f}{r_f} + \sum_{i=1}^{L-1}\left(\frac{l^{max}_i}{r_f} + \pi^i\right),$$

where $L \le Q$ is the number of servers in the E2E path and $l^{max}_i$ is the length of the largest packet served by server $i$.

Proof A simple E2E delay model for packets scheduled through ASM can be based on three parameters, namely: the arrival of the packet, the service time at the last server, and the segmented E2E propagation delay after transmission at the $L$th server (last link) in the path. This can be given as

$$D^L(p^j_f) \le GRC^L(p^j_f) - a^{ASM}(p^j_f) + \alpha^L, \quad (17)$$

where $\alpha^L = \frac{l^{max}_L}{r_f} + \pi^L$ and $a^{ASM}(p^j_f)$ is the arrival time of $p^j_f$ at ASM.

Lemma 4 provides the latency of the ASM server. The latency of the server depends on the arrival of the packet and the available capacity. The definition of ASM, as given by (4) and (5), shows that in case of surplus capacity, the deadline of the packet shall be steeper than that of any standard GR server. Since ASM is the first server on the path, its latency shall be the foremost component in the E2E delay quantification.

The E2E delay of a GR server is also tuned by the foremost server on the path, as shown in Theorem 2 of [18]. Hence $GRC^L(p^j_f)$ can be represented as

$$GRC^L(p^j_f) \le ASM(p^j_f) + (L-1)\max_{n\in[1..j]}\frac{l^n_f}{r_f} + \sum_{i=1}^{L-1}\left(\frac{l^{max}_i}{r_f} + \pi^i\right). \quad (18)$$

By substitution of (18) in (17) we get

$$D^L(p^j_f) \le \left(ASM(p^j_f) + \frac{l^{max}_s}{C_s}\right) - a^{ASM}(p^j_f) + (L-1)\max_{n\in[1..j]}\frac{l^n_f}{r_f} + \sum_{i=1}^{L-1}\left(\frac{l^{max}_i}{r_f} + \pi^i\right). \quad (19)$$

Since server $L$ is the last server in the path, the proof of Theorem 2 follows. ∎

Theorem 2 provides information about the E2E delay and possible sources of intra-path variation in delay. It can be noticed that the variable lengths of packets and the delays at each of the constituent servers on the path are the two main causes of variations in delay. The servers on the path, except the first one, are assumed to be GRC servers and need not maintain the state of the previous packet to schedule the next packet [18]. Therefore, E2E delay bounds are guaranteed with a stateless configuration.
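The bound of Theorem 2 is a plain sum of per-server terms, so it is straightforward to evaluate numerically. The following is a minimal sketch; the function name and every numeric input are illustrative assumptions, not values from the paper:

```python
def e2e_delay_bound(asm_deadline, arrival, l_max_s, c_s,
                    max_pkt_len, r_f, l_max, prop_delay):
    """Upper bound on E2E delay in the style of Theorem 2 (sketch).

    asm_deadline : ASM(p_f^j), scheduling deadline at the mobile router
    arrival      : a^ASM(p_f^j), arrival time of the packet at ASM
    l_max_s, c_s : max packet size and full capacity of the chosen first server
    max_pkt_len  : largest packet of flow f seen so far (max over n in [1..j])
    r_f          : guaranteed service rate of flow f
    l_max        : max packet sizes at the L-1 downstream GR servers
    prop_delay   : per-link propagation delays pi_i (same length as l_max)
    """
    L = len(l_max) + 1  # downstream GR servers plus the ASM server itself
    bound = (asm_deadline + l_max_s / c_s) - arrival
    bound += (L - 1) * max_pkt_len / r_f
    bound += sum(l / r_f + pi for l, pi in zip(l_max, prop_delay))
    return bound
```

Each downstream server contributes one $l^{max}_i/r_f + \pi^i$ term, which is why the bound grows linearly with the path length $L$.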

Now we come to the very important issue of inter-path delay variations, which are primarily caused by the different number of servers traversed by two packets on two different paths.

Theorem 3 The E2E delay variation of two successive packets $p^j_f$ and $p^{j+1}_f$ of the same flow $f$, scheduled on two different paths through the ASM server, is upper-bounded by

$$\delta^{j+1,j}_f = D(p^j_f) - D(p^{j+1}_f) \le a^{ASM}(p^{j+1}_f) - a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..j+1]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right).$$

Proof Let us assume that two consecutive packets $p^j_f$ and $p^{j+1}_f$ of flow $f$ are scheduled by ASM on two different servers $s_s, s_t \in S$, $\exists s, t \le M \le N$, leading to two different paths. The two servers map onto two distinct paths, characterized by different numbers of intermediate servers, $K$ and $L$ respectively. We assume that $K > L$ and that the two paths are moderately congested. The E2E delay of each path is bounded through Theorem 2, and we can directly obtain the following delays of the two packets, as given in (20) and (21).


$$D(p^j_f) \le \left(ASM(p^j_f) + \frac{l^{max}_{s_s}}{C_{s_s}}\right) - a^{ASM}(p^j_f) + (K-1)\max_{n\in[1..j]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-1}\left(\frac{l^{max}_i}{r_f} + \pi^i\right), \quad (20)$$

$$D(p^{j+1}_f) \le \left(ASM(p^{j+1}_f) + \frac{l^{max}_{s_t}}{C_{s_t}}\right) - a^{ASM}(p^{j+1}_f) + (L-1)\max_{n\in[1..j+1]}\frac{l^n_f}{r_f} + \sum_{i=1}^{L-1}\left(\frac{l^{max}_i}{r_f} + \pi^i\right). \quad (21)$$

Since the two servers $s_s$ and $s_t$ are asymmetric in their characteristics, their delay may not be anticipated without knowledge of the E2E path details. We argue that since flow $f$ has a guaranteed service rate and the entire set of servers in the two paths are GR servers, the overall delay variation across all the servers shall remain within a tight range. It is also noticeable that the latencies of the ASM server for the two packets shall be approximately equal. On the basis of this assumption we take two terms of (20) and (21) to be approximately equal, as shown in (22):

$$ASM(p^j_f) + \frac{l^{max}_{s_s}}{C_{s_s}} \cong ASM(p^{j+1}_f) + \frac{l^{max}_{s_t}}{C_{s_t}}. \quad (22)$$

Using (22) in conjunction with (20) and (21), their difference yields the following,

$$D(p^j_f) - D(p^{j+1}_f) \le a^{ASM}(p^{j+1}_f) - a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..j+1]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right). \quad (23)$$

Equation (23) provides a tight bound on the E2E delay variation of two consecutive packets. Using $\delta^{j+1,j}_f$ to represent the delay variation, $\delta^{j+1,j}_f = D(p^j_f) - D(p^{j+1}_f)$, completes the proof of Theorem 3. ∎

Equation (23) shows that the E2E delay variation of the two packets is primarily dependent on the difference in the number of servers on the two paths, in the presence of service guarantees. Both the summation term and the maximum-sized-packet transmission-delay term in (23) grow multiplicatively with the increase in the difference between the numbers of servers on the two paths. Theorem 3 provides a basis for quantifying delay variations and, accordingly, helps in developing a specific buffering strategy. Alternatively, the E2E delay-variation metric can also help in devising proactive strategies for controlling E2E delay variation at the sender side, provided sufficient time is available for reordering the transmission schedule according to the different link characteristics.
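Because the bound of Theorem 3 depends only on the arrival gap at ASM and the $K-L$ extra servers, it can be evaluated with a few lines of arithmetic. The sketch below is illustrative; the function name and all inputs are hypothetical, not taken from the paper:

```python
def delay_variation_bound(arrival_gap, k, l, max_pkt_len, r_f,
                          extra_l_max, extra_prop):
    """Upper bound on D(p^j) - D(p^{j+1}) in the style of Theorem 3 (sketch).

    arrival_gap : a^ASM(p^{j+1}) - a^ASM(p^j), packet spacing at the ASM server
    k, l        : server counts of the longer and shorter path (k >= l)
    max_pkt_len : largest packet of flow f among the first j+1 packets
    r_f         : guaranteed service rate of flow f
    extra_l_max : max packet sizes at the k - l extra servers
    extra_prop  : propagation delays pi_i of the k - l extra links
    """
    assert k - l == len(extra_l_max) == len(extra_prop)
    bound = arrival_gap + (k - l) * max_pkt_len / r_f
    bound += sum(lm / r_f + pi for lm, pi in zip(extra_l_max, extra_prop))
    return bound
```

Note that the bound never references the absolute path lengths, only their difference $K-L$, which is the observation the buffering strategy in the next theorems builds on.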

Theorem 4 The deterministic out-of-sequence arrival bound $\varphi^j_f$ of any two successive packets in case of multi-path traversal of a single flow $f$ is upper-bounded by

$$\varphi^j_f \le a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..K-L]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right).$$

Proof The proof of this theorem directly follows from Theorem 3, which describes the upper bound on the E2E delay variation of asymmetric parallel paths comprising GR servers. It is noticeable that the E2E delay variation is directly dependent on the difference in the number of servers in each path. Let us assume an arbitrary packet $p^j_f$ that traverses $(K-L)$ servers in addition to the $L$ servers of its successor; then there are two major components of the OOS arrival latency, namely the arrival of the predecessor packet and the maximum delay variation between the two packets. Hence (24) provides a tight bound on the OOS arrival of any packet:

$$\varphi^j_f \le a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..K-L]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right). \quad (24)$$

And the proof of the theorem follows. ∎

Corollary 1 The bound $\varphi^j_f$ is a suitable metric of the maximum allowed delay before a packet $p^j_f$ is considered dropped, after which the consecutive successors of this packet waiting for in-order forwarding shall be immediately forwarded. In such cases $\varphi^j_f$ may be treated as a timeout event for the possible wait for an OOS arrival.

Theorem 4 and its Corollary 1 provide a useful tight bound for the OOS arrival and an upper bound for the wait for such arrivals. The timeout $\varphi^j_f$ can be very effective in the buffer management that may be essential for asymmetric multi-channel transmission of real-time traffic. While serving real-time traffic on such channels, the bounded-time arrival constraint is convoluted with the in-order arrival constraint, and buffers are essential to cope with this additional requirement. The following theorems analytically range-bound the occupancy of such buffers and minimize it to achieve a scalable and cost-effective buffering model.
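One way to realize this timeout policy at the receiver is a reorder buffer that waits at most $\varphi$ for a missing predecessor before declaring it dropped and releasing its successors. The sketch below illustrates that policy; the class, its field names, and the fixed timeout value are assumptions for illustration, not part of the paper's design:

```python
import heapq

class ReorderBuffer:
    """In-order forwarding with an OOS wait timeout, per Corollary 1 (sketch)."""

    def __init__(self, timeout):
        self.timeout = timeout      # phi: max wait for a missing predecessor
        self.next_seq = 0           # next sequence number due for delivery
        self.heap = []              # (seq, packet) pairs held out of order
        self.deadline = None        # when to give up on self.next_seq

    def push(self, seq, packet, now):
        """Buffer one arrival; return the packets released in order."""
        heapq.heappush(self.heap, (seq, packet))
        if self.deadline is None:
            self.deadline = now + self.timeout
        # Timeout: declare the awaited packet dropped and skip past it,
        # so the waiting in-order sub-stream can be forwarded immediately.
        if now >= self.deadline and self.heap and self.heap[0][0] > self.next_seq:
            self.next_seq = self.heap[0][0]
        released = []
        # Release every in-order packet now available.
        while self.heap and self.heap[0][0] == self.next_seq:
            released.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
            self.deadline = now + self.timeout
        return released
```

For example, if packet 2 of a flow never arrives, its successors are held only until the timeout elapses and are then forwarded, keeping the buffer occupancy within the range Theorem 5 bounds.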

Theorem 5 The buffer occupancy $\vartheta^j_f$ due to OOS arrival of packet $p^j_f$ is upper-bounded by

$$\vartheta^j_f \le \frac{\sum_{i=j+1}^{j+k} l^{max}_i}{a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..K-L]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right)}. \quad (25)$$

Proof Let there be $k$ packets that arrive before the OOS timeout event $\varphi^j_f$ of packet $p^j_f$. Then the maximum possible buffer occupancy for this stream, starting from $j$ and ending at $j+k$, could be for the packet $p^j_f$, and is represented by $\vartheta^{j+k}_f$. In case of in-order arrival, packets are removed from the buffers and result in vacancy accordingly. In the worst case, $p^j_f$ may arrive after $p^{j+k}_f$, within the timeout value $\varphi^j_f$. Hence the maximum buffer occupancy can be represented by the possible maximum length of packets during the timeout interval $\varphi^j_f$. The occurrence of the timeout interval indicates a packet drop event, and the latest in-order sub-stream is forwarded. Therefore, the length of buffer occupied, divided by the maximum time over which such occupancy persists, provides the upper bound of buffer occupancy. This completes the proof of the theorem. ∎

Theorem 6 The probability of dropping packet $p^j_f$, scheduled through the ASM server, is bounded by (26), where $P[\psi^j_f]$ is the probability of dropping packet $p^j_f$:

$$P[\psi^j_f] \le \alpha + P\left[C^s_A(t_1,t_2) - B^s_A(t_1,t_2) \le 0\right] + P\left[\varphi^j_f \ge a^{ASM}(p^j_f) + (K-L)\max_{n\in[1..K-L]}\frac{l^n_f}{r_f} + \sum_{i=1}^{K-L}\left(\frac{l^{max}_i}{r_f} + \pi^i\right)\right]. \quad (26)$$


Proof The packet drop event may be caused by three sources, namely: the capacity of ASM is exhausted; normal link loss with a fixed probability of $\alpha$; and the OOS timeout event. Since all three components are known, the proof directly follows. ∎

4.1 Link Monitoring

The performance of the scheduler is partially dependent on the accuracy of link monitoring services. Link monitoring is used to overcome the latencies involved in restoring layer-2 and layer-3 bindings during mobility events. There are numerous techniques available for obtaining estimates of available bandwidth [4,22,23]. These techniques estimate the bandwidth of single or multiple hop paths. Multi-hop techniques are generally slow to converge in finding the single bottleneck link and assessing the capacity available in the path [23]. Since we are interested only in the multiple single-hop capacities of the parallel wireless section of the E2E path, the Media Independent Handover Function (MIHF) event services, as described in the 802.21 standard, provide sufficient information about link-layer changes [24]. These events are used to aggregate the capacity of multiple links, as described in [22]. The MIHF provides event-based link status information and is used as a value-added service for real-time information about changes in link quality. The bandwidth aggregation schemes presented in [2] and [25] are also useful for combining the available bandwidth of multiple available links. The aggregate capacity of the entire set of available links is therefore represented by Eq. (27):

$$C^s_A = \sum_{i=1}^{M} C_{s_i}, \quad \exists M \le N, \quad (27)$$

where $M$ represents a subset of the $N$ servers and may or may not be equal to $N$. The $C^s_A$ achieved through Eq. (27) helps in adapting service guarantees according to the current state of the system w.r.t. backlog and capacity, as described in Sect. 2. The precision of the aggregated capacity greatly helps in service continuity and timely redirection of traffic to alternative paths, ensuring a much reduced impact of mobility. The mobility events generated by MIHF are local in nature and provide accurate information with minimum possible latencies. Therefore, link monitoring has high utility during mobility of an MMR/MMT. During mobility, multi-mode devices face serious physical-layer rate fluctuations and handover compulsions. The link monitoring function can quickly record such changes and help in assessing the status of a link as active or disconnected. It is worth mentioning here that the handover operation is not the concern of the ASM server, as it is only interested in knowing which links are active and at what capacity.
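In operational terms, Eq. (27) amounts to summing the capacities of whichever links the monitor currently reports as active. A minimal sketch of such a monitor follows; the method names loosely mirror MIHF link-up, link-down, and link-parameter-change notifications but are illustrative, not the 802.21 primitives themselves:

```python
class LinkMonitor:
    """Track per-link state and aggregate active capacity, per Eq. (27) (sketch)."""

    def __init__(self):
        self.links = {}  # interface name -> (capacity in bit/s, active flag)

    def link_up(self, name, capacity):
        # Link-up event: the interface joins the set of schedulable servers.
        self.links[name] = (capacity, True)

    def link_down(self, name):
        # Link-down event: keep the entry but exclude it from the aggregate.
        cap, _ = self.links.get(name, (0, False))
        self.links[name] = (cap, False)

    def link_quality_change(self, name, capacity):
        # Quality-change event: update the usable capacity of a known link.
        _, active = self.links.get(name, (0, True))
        self.links[name] = (capacity, active)

    def aggregate_capacity(self):
        # C_A^s = sum of C_i over the M currently active links (M <= N).
        return sum(cap for cap, active in self.links.values() if active)
```

The scheduler only ever queries `aggregate_capacity()`, which matches the remark above that ASM needs to know which links are active and at what capacity, not how handovers are performed.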

5 Simulation Results

The performance of the proposed ASM scheduling scheme was tested in both MatLab/Simulink and ns-2 simulation environments. The algorithm was fine-tuned in the MatLab/Simulink environment. Later, for realistic scenario-based experiments, the algorithm was implemented in the differentiated services module of ns-2. Markov-modulated Poisson processes (MMPP) were used to generate traffic sources. Four different sets of flows {1, 2, 4, 8} were generated to test the scalability of the proposed model. Three simultaneous tunnels between the MR and HA were created through Mobile IP [21]. These tunnels form the multiple paths for the traffic between the MMD and HA. For simplicity, the measurements were recorded at the tunnel ends.


Fig. 3 Distribution of delay variation during multi-path scheduling of flows through ASM

It is assumed that in case of satisfactory performance over these tunnels, the ultimate E2E guarantees can be easily achieved. The normal schedulers of the mobile node were replaced by the ASM server to imitate multi-server scheduling. The aggregate capacity information was obtained through the number of frames of all links during a unit time. The availability of a physical link was obtained through the MIH event services that provide timely information about link-layer changes. The capacity was also modified through link-quality-change events to have a realistic state of link availability during mobility. All the intermediate nodes between the MR and HA were customized to GR servers through modification of the scheduling algorithm of the routing nodes of ns-2. The traffic burst was generated through the tail-end distribution of MMPP with large deviations from the mean values. The inter-arrival time of packets at the source was modeled through an exponential distribution with a mean of 10 ms, whereas during burst generation it was modeled with a 2 ms mean. The metrics of interest during the simulation study were the E2E delay, its variation over the life cycle of a flow, the buffer occupancy density over the life cycle of a flow, and the overall packet drop with fixed-length buffers for each flow at the receiver, i.e. the HA.

Figure 3 shows a plot of the distribution of E2E delay variations in multi-path flows. The ASM schedules traffic from batches of 1, 2, 4 and 8 distinct flows independently over the multiple available interfaces. It can be noticed from the plot that for a single flow, the variations in E2E delay are range-bounded between −30 and 40 ms, with a high frequency of packets arriving with near-zero variation. In the case of 2, 4 and 8 flows, the variation range is −45 to 55, −60 to 70 and −70 to 85 ms, respectively. However, the above-mentioned ranges are worst-case readings with very low frequency, and the majority of the packets follow a normal distribution pattern w.r.t. E2E delay. It is important to note that the ASM schedules packets on the basis of matchmaking between the steepest deadline and the earliest available channel. Hence no additional processing is carried out to find the best path for a specific flow. The significantly high number of packets falling within the normal ranges of E2E delay indicates the usefulness of the proposed multi-server scheduling algorithm.

In Fig. 4, we plot the distribution of buffer occupancy at the receiver for different sets of simultaneous flows. The buffer occupancy describes the impact of delay variations on managing the buffers for in-order delivery of packets to the destination. The plot indicates that a significant number of packets for all four sets of flows fall in an acceptable buffer occupancy of 3–6 buffers. This leads us to use a buffer size of 6 to take care of in-order delivery for 99, 95, 90, and 86% of the packets transmitted with a delay variation pattern as shown in Fig. 3. A higher buffering level may sometimes reduce the packet-drop rate but increases the cost of buffering, and may also not be scalable as the buffer requirement increases with the number of flows. The plot also reflects the usefulness of ASM in scheduling packets


Fig. 4 Distribution of buffer occupancy during multi-path scheduling of flows through ASM

Fig. 5 Packet drop (%) behavior during multi-path scheduling of flows through ASM

irrespective of their upper-layer binding, and achieves a more realistic virtual channel comprising multiple heterogeneous channels.

Figure 5 plots the packet drop at different buffer sizes per flow. Two buffer sizes were used to see the impact of an increase in buffering capacity on packet drop in scenarios where OOS arrival may be high. The four sets of flows, i.e. 1, 2, 4 and 8 flows, were tested at two buffer levels of 4 and 6. It is noticeable from the plots that the increase in buffering level reduces packet drop significantly. The packet drop for 4 simultaneous sessions is around 10% at a buffering level of 4, which could be considered acceptable for video and audio traffic. The packet drop reduces considerably, to around 6%, for the 4 sessions at a buffering level of 6. Similarly, the packet drop is also under 10% for 8 concurrent sessions at a buffering level of 6. The trend shows a further decrease in packet drop at buffering levels of 8 and 10, but the trade-off between the buffer maintenance cost and the added benefit favors fewer buffers to keep mobile routers more scalable with less processing complexity.

Figure 6 plots the buffer occupancy during burst arrival. We take some burst arrivals in the range of 50–100 packets (in practice it is much smaller) to quantify the worst-case performance limits of the ASM scheduling scheme. It can be noticed that the buffer occupancy for 4 flows is around 4 buffers for more than 90% of the traffic. For 8 flows, the buffer occupancy rarely exceeds a value of eight, which signifies that increasing the buffering level to 8 can reduce the packet


Fig. 6 Distribution of buffer occupancy during service of burst-arrival at ASM

Fig. 7 Packet Drop behavior during burst arrival

drop to almost zero, but the 8 buffers would remain mostly vacant and reduce the scalability of the scheme. Further, higher buffering levels are also not permissible due to the time-critical nature of the packets. A packet is useful only if it arrives within the allowed time for real-time applications. Hence higher buffering may not be very useful despite adding significant cost and complexity to the system.

Figure 7 plots the packet drop behavior during burst arrival. It can be noticed that the packet drop for 8 flows is 14 and 12% at buffering levels of 4 and 6, respectively. The above-mentioned drop rate can be reduced significantly at a buffering level of 8, but this would increase the cost and complexity along with a reduced level of utility. A dynamic management of buffers could be one possible solution to reduce the packet drop and increase the buffer utilization.

6 Conclusions

In this paper, a multi-server scheduling scheme is discussed for the simultaneous use of multiple wireless interfaces of MMDs during mobility. MMDs that are equipped with multiple network interfaces can provide the best support for QoS maximization during mobility. In any such application, the routing of a single flow on multiple available links generally increases


intra-flow delay variations that are not suitable for real-time traffic. The associated complexities of intra-flow delay variations include OOS arrival at the receiver, re-ordering overhead and increased buffer management overhead. In the case of some intermediate shaping and policing of the traffic, such as found in flow-state-aware ingress routers, the packet drop probability may also be increased due to the higher buffer occupancy that is caused by packets waiting for their predecessors for in-order forwarding.

In this paper, we have presented a novel scheduling algorithm, named ASM, that is based on the guaranteed rate server philosophy and schedules packets on the multiple links with a finite deadline for transmission of backlogged traffic. The scheme is adaptive in the sense that it is highly sensitive to physical link changes and adjusts scheduling deadlines according to the available capacity. The scheduling scheme is capable of handling burst arrivals in case sufficient excess capacity is available. The deterministic bounds of the ASM scheduler w.r.t. service latency, E2E delay, E2E delay variation and OOS arrival have been determined for consistent service guarantees. The simulation results have shown higher service provisioning during mobility for real-time traffic that requires timely arrival. The achieved levels of consistency in the simulation results w.r.t. reduced OOS arrival, buffer occupancy and packet drop probability support the efficiency of the proposed multi-server scheduling and its analytical model during mobility. All these metrics provide sound support for reliable service guarantees. The approach is specifically useful for mobile routers, as with the proposed ASM scheduling scheme in place it does not require frequent renegotiations for assured service guarantees during mobility.

References

1. Ng, C., Ernst, T., Paik, E., & Bagnulo, M. (2007). Analysis of multihoming in network mobility support. RFC 4980.
2. Chebrolu, K., & Rao, R. R. (2006). Bandwidth aggregation for real-time applications in heterogeneous wireless networks. IEEE Transactions on Mobile Computing, 5(4).
3. Kim, P., & Han, H. (2009). A packet distribution scheme for bandwidth aggregation on network mobility. Internet Draft.
4. Huang, T., et al. (2009). Design, implementation, and evaluation of a programmable bandwidth aggregation system for home networks. Journal of Network and Computer Applications, 32(3), 741–759.
5. Piratla, N. M., & Jayasumana, A. P. (2008). Metrics for packet reordering—A comparative analysis. International Journal of Communication Systems, 21(1), 99–113.
6. Wang, J., et al. (2008). A mobile bandwidth-aggregation reservation scheme for NEMOs. Wireless Personal Communications, 44, 383–401.
7. Parekh, A. K., & Gallager, R. G. (1993). A generalized processor sharing approach to flow control in integrated services networks: The single node case. IEEE/ACM Transactions on Networking, 1(3), 344–357.
8. Parekh, A. K., & Gallager, R. G. (1994). A generalized processor sharing approach to flow control in integrated services networks: The multiple node case. IEEE/ACM Transactions on Networking, 2(2), 137–150.
9. Jiang, Y. (1998). Relationship between guaranteed rate server and latency rate server. Computer Networks, 43(5), 611–624.
10. Sariowan, H., et al. (1999). SCED: A generalized scheduling policy for guaranteeing quality-of-service. IEEE/ACM Transactions on Networking, 7(5), 669–684.
11. Cruz, R. L. (1998). SCED+: Efficient management of quality of service guarantees. In Proceedings of INFOCOM'98.
12. Bennett, J. C. R., & Zhang, H. (1997). Hierarchical packet fair queueing algorithms. IEEE/ACM Transactions on Networking, 5(5), 675–689. Also in Proceedings of SIGCOMM'96.
13. Stiliadis, D., & Varma, A. (1998). Latency-rate servers: A general model for analysis of traffic scheduling algorithms. IEEE/ACM Transactions on Networking, 6(5), 611–624.
14. Kaur, J., & Vin, H. M. (2001). Core-stateless guaranteed rate scheduling algorithms. In Proceedings of IEEE INFOCOM (pp. 1484–1492).
15. Zhang, L. (1990). Virtual clock: A new traffic control algorithm for packet switching networks. In SIGCOMM Symposium on Communications Architectures and Protocols (pp. 19–29). Philadelphia, PA.
16. Stoica, I., & Zhang, H. (1999). Providing guaranteed services without per flow management. In Proceedings of ACM SIGCOMM.
17. Blake, S., et al. (1998). An architecture for differentiated services. IETF RFC 2475.
18. Goyal, P., et al. (1996). Determining end-to-end delay bounds in heterogeneous networks. ACM/Springer-Verlag Multimedia Systems Journal, 157–163.
19. Xiao, H., & Jiang, Y. (2004). Analysis of multi-server round robin scheduling disciplines. IEICE Transactions on Communications, E87-B(12), 3593–3602.
20. Jen-Yi, P., Jing-Luen, L., & Kai-Fung, P. (2008). Multiple care-of addresses registration and capacity-aware preference on multi-rate wireless links. In AINAW, 22nd International Conference on Advanced Information Networking and Applications—Workshops (pp. 768–773).
21. Johnson, D., Perkins, C., & Arkko, J. (2004). Mobility support in IPv6. RFC 3775.
22. Ahmad, S. Z., Akbar, M. S., & Qadir, M. A. (2008). Towards dependable wireless networks: A QoS constrained resource management scheme in heterogeneous environment. In 4th International Conference on Emerging Technologies (ICET) 2008 (pp. 182–186).
23. Shriram, A., & Kaur, J. (2007). Empirical evaluation of techniques for measuring available bandwidth. In Proceedings of IEEE INFOCOM 2007 (pp. 2161–2169).
24. IEEE 802.21 Working Group Document. (2009). IEEE standard for local and metropolitan area networks: Media independent handover services. IEEE P802.21/D07.00.
25. Sharma, V., Kalyanaraman, S., Kar, K., Ramakrishnan, K. K., & Subramanian, V. (2008). MPLOT: A transport protocol exploiting multipath diversity using erasure codes. In Proceedings of the 27th IEEE Conference on Computer Communications (INFOCOM).

Author Biographies

Syed Zubair Ahmad is a Ph.D. student at Mohammad Ali Jinnah University, Islamabad. He works for the Government of Pakistan as a network engineer. He completed his graduation in Electronics from Quaid-e-Azam University Islamabad, Pakistan, in 1990. He also completed his M.S. in Software Engineering from the National University of Science & Technology, Rawalpindi, Pakistan, in 2004. He has more than 15 years of experience in the area of computer communication and networks. His main areas of interest include mobile computing, mobile networks, network mobility and IP networking. He also has vast teaching experience in his areas of interest. Currently, modeling and simulation of network traffic in mobile networks is his core research area. He has also worked on large-scale computing grid development projects and has experience in network programming and protocol development.


Muhammad Abdul Qadir is a professor of computer science and Dean of the Faculty of Engineering & Applied Sciences at Mohammad Ali Jinnah University, Islamabad, Pakistan. He is the head of the Center for Distributed & Semantic Computing (CDSC), a research group in the department of computer science. He received the M.Sc. degree in Electronics from Quaid-i-Azam University Islamabad, Pakistan, and the Ph.D. degree from the University of Surrey, Guildford, UK, in parallel processing/distributed computing. He has more than 20 years of work experience in industry and academia. His present research interests are in the areas of distributed and semantic computing. In distributed systems the focus is on the optimization of transport services in mobile and wireless networks, media-independent handovers in heterogeneous mobile networks, and fault monitoring in fault-tolerant distributed systems. In semantic computing, the focus is on the development of a semantic-based search engine, ontology evaluation, mapping and merging, extraction of semantics from multimedia contents, context-based query refinement for the web, and semantic caches for relational and XML databases. About ten graduate students are working under his supervision, and the group has produced more than 50 research papers in international refereed conferences and journals. He is on the technical and organizing committees of several conferences, and a member of several technical organizations such as IEEE.

Muhammad Saeed Akbar is a Ph.D. fellow at Mohammad Ali Jinnah University, Islamabad, Pakistan. He completed his graduation in Computer Science in 1999 and received his M.S. degree in Computer Science, with specialization in Computer Networks, from the University of Agriculture, Faisalabad, Pakistan, in 2004. Currently, he is working as a lecturer in the Department of Computer Science, University of Agriculture, Faisalabad, Pakistan. His area of interest is mobility management and the optimization of transport services in heterogeneous mobile wireless networks. Currently he is working on analysis of the impact of mobility on reliable transport protocols. His work also covers customization of the transmission control protocol (TCP) to react gracefully to mobility events in heterogeneous wireless networks. He has more than ten years of experience in teaching and research.

Abdelaziz Bouras is a professor in Computer Sciences at the University Lumiere of Lyon, France. He is also leading the LIESP Laboratory at University Lumiere and managing the CERRAL knowledge transfer center of the Lumiere Technology Institute of Lyon, France. He has vast experience in teaching computer science and information technology related courses. He has successfully completed many international projects in collaboration with international research centers. His main area of interest is modeling and simulation of decision support systems, with particular emphasis on product and service life-cycle management. One of his recent international collaboration projects is the Erasmus-Mundus East-West Link for Innovation, Networking and Knowledge Exchange (eLink), which has significantly contributed to developing research collaboration between Asia and Europe. The main areas of research in the eLink project include various disciplines of information and communication technology, including decision support systems, mobile computing, intelligent information systems, modeling and simulations, quality control, robotics, and environmental sciences.
