
An Agent Based Congestion Control and Notification Scheme for TCP over ABR

(This work is funded in part by the Engineering and Physical Sciences Research Council under project GR/L86937)

K. Djemame, M. Kara and R. Banwait

ATM-Multimedia Group - http://www.scs.leeds.ac.uk/atm-mm

School of Computer Studies, University of Leeds, Leeds LS2 9JT, UK

Email: {karim,mourad}@scs.leeds.ac.uk

Abstract

In this paper we overview the enhancement of TCP's congestion control mechanisms using Explicit Congestion Notification (ECN) over ATM networks. Congestion is indicated not only by packet losses, as is currently the case, but also by an agent implemented at the network's edge. The agent bridges the gap between the ATM layer and the TCP layer in the protocol stack at the receiver end and coordinates the congestion control algorithms of the TCP transport protocol and the ATM cell-oriented switching architecture. The novel idea uses ABR rate-based flow control to notify congestion and adjust the credit-based window size of TCP.

The effects of running TCP ECN over ABR (Available Bit Rate) are studied with the help of two simulation models (LAN and WAN). The simulation results indicate that LANs having a single switch and WANs with multiple switches benefit most from TCP ECN. In almost all scenarios, TCP ECN achieved significantly lower cell loss, packet retransmissions and buffer utilisation, and exhibited better throughput than TCP Reno.

Keywords: Transmission Control Protocol, Explicit Congestion Notification, Asynchronous Transfer Mode, Agents, Performance Evaluation.

1 Introduction

The integration of ATM (Asynchronous Transfer Mode) technology has revealed poor throughput performance of TCP (Transmission Control Protocol) over ATM [18]. This poor performance is mainly due to packet fragmentation, which occurs when a TCP packet flows into an ATM Virtual Circuit (VC) through the AAL5 (ATM Adaptation Layer 5). AAL5 is responsible for dividing TCP's packets into 53-byte units (a 48-byte data payload and a 5-byte header) called cells. Fragmentation at the AAL5 is necessary since the typical size of a TCP packet is much larger than that of a cell. The loss of even a single cell in any of the ATM network's switches results in the corruption of the entire packet to which that cell belongs, and consequently causes its retransmission and severe window reduction actions in TCP, leading to bandwidth wastage and underutilisation.

Techniques to improve cell discard behaviour, such as Partial Packet Discard (PPD) and Early Packet Discard (EPD), are aimed at reducing the transmission of useless cells [18]. Performance degradation in TCP over ATM may also be caused by the following [15]: (1) the dynamics of TCP; (2) the behaviour of ATM; and (3) the interaction between TCP and ATM layer congestion control schemes.

In this paper we investigate the interaction between ATM and TCP congestion control mechanisms. We examine the enhancement of TCP's congestion control mechanisms using Explicit Congestion Notification (ECN) over ATM networks. Congestion is indicated not only by packet losses, as is currently the case, but also by an agent implemented at the network's edge. The agent bridges the gap between the ATM layer and the TCP layer in the protocol stack at the receiver end and coordinates the congestion control algorithms of the TCP transport protocol and the ATM cell-oriented switching architecture. The novel idea uses ABR (Available Bit Rate) rate-based flow control to notify congestion and adjust the credit-based window size of TCP. ABR is a service category defined by the ATM Forum [3] and supports applications, such as those handling data transfer, which have the ability to reduce their sending rate if congestion is experienced in the network. Agents are an important area of research and development in various domains such as artificial intelligence, high-performance computing and communication networks [1]. Our proposal is driven by our interest in proposing a framework for a coherent approach to better coordination between TCP and ATM congestion control algorithms [8].

The effects of running TCP ECN over ABR are studied with the help of two simulation models (LAN and WAN). Simulations under various scenarios then allowed the comparison of TCP Reno and the new implementation based on four performance metrics. The simulation results indicate that LANs having a single switch and WANs with multiple switches benefit most from TCP ECN, and that ECN makes better use of bandwidth. In almost all scenarios, TCP ECN achieved significantly lower cell loss and packet retransmissions. Throughput with TCP ECN was found to be better than that of TCP Reno, and buffer utilisation was generally lower.

We begin section 2 by discussing some introductory material on the behaviour of TCP traffic with controlled windows and background material for ABR. Some work related to the subject of congestion notification in TCP is presented in section 3. Integrating TCP and ABR, and its effects, is described in section 4. The environment, network configurations and parameters used in the simulation are presented in section 5. Simulation results over LAN as well as WAN distances are discussed in section 6. Some concluding remarks and perspectives on future developments of the research are given in section 7. A list of acronyms is also included at the end of the paper.

2 Issues of Congestion Control

2.1 TCP Control

TCP is a very popular transport protocol [20], and is the most widely used for today's Internet data applications. It combines logic for routing through an internet with end-to-end control. TCP is actually a collection of algorithms that send packets into the network without a reservation and then react to observable events. Such algorithms include congestion control and recovery from loss of packets [2].

Each sender maintains two windows: the window RWND the receiver has granted, and a second window CWND called the congestion window. The minimum of the congestion window and the receiver's window is used when sending packets. The congestion control mechanism has two distinct phases: Slow Start and Congestion Avoidance. Upon starting a connection or restarting after a packet loss, the congestion window size is set to one packet, and is then doubled once every Round-Trip Time (RTT), i.e., upon the receipt of an acknowledgement (ACK). Congestion avoidance takes care of the violation which would occur if the retransmit time is too short. During this phase, CWND is increased linearly, as opposed to the exponential increase of the slow start phase. A third parameter, SSTHRESH, is used to start the congestion avoidance phase and is usually initially set to 64 kB (kilobytes). Upon notification of network congestion, i.e., a timeout, SSTHRESH is set to half the current CWND, CWND is set to one, and the slow start phase restarts once again. There are several ways to detect a packet loss: (1) a timer: when the timer exceeds the Retransmission Time Out (RTO), a packet is assumed to be lost; and (2) when a packet has been acknowledged more than 3 times, the next packet is assumed to be lost.
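The window dynamics described above can be sketched in a few lines of Python. This is our illustration, not the paper's simulator; the class and method names are ours, and window sizes are counted in packets.

```python
# Minimal sketch of TCP slow start / congestion avoidance as described above.
# Window sizes are in packets; SSTHRESH starts at 64 kB worth of segments.

class TcpWindow:
    def __init__(self, mss=512, ssthresh_bytes=64 * 1024):
        self.cwnd = 1                        # congestion window, in packets
        self.ssthresh = ssthresh_bytes // mss

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1                   # slow start: window doubles per RTT
        else:
            self.cwnd += 1 / self.cwnd       # congestion avoidance: +1 per RTT

    def on_timeout(self):
        self.ssthresh = max(2, int(self.cwnd / 2))  # halve the threshold
        self.cwnd = 1                        # restart slow start
```

The sender would transmit `min(cwnd, rwnd)` packets per round, per the two-window rule above.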

Earlier implementations of TCP were distributed in releases corresponding to the original implementation of Jacobson [12]. However, over the years, there have been fine-tunings, improvements and additions to the original TCP algorithms. Some of the proposed changes have been widely adopted and are part of TCP implementations today. TCP Tahoe and TCP Reno are currently the most widely deployed versions. Other revisions to TCP have been proposed, including TCP Vegas [6] and Selective Acknowledgements [16].

2.2 ABR Mechanism

The ABR control unit dynamically adjusts the rate at which ATM cells enter the network according to the congestion status of the network. This rate is called the Allowed Cell Rate (ACR). The source is not allowed to send cells faster than ACR, to avoid network congestion. The rate-based control scheme (explicit rate or relative rate marking (EFCI, Explicit Forward Congestion Indication)) uses special Resource Management (RM) cells [3]. In the Explicit Rate (ER) scheme, the source sends an RM cell once every Nrm (default Nrm = 32) cells, or every Trm (default Trm = 100 ms) time units, along the VC to the destination. Each RM cell contains three fields that provide feedback to the source: a Congestion Indication (CI) bit, a No Increase (NI) bit, and an Explicit Rate (ER) field. The source sets the RM cell's ER field to the rate at which it would like to send, and transmits the Forward RM (FRM) cell. As each FRM cell is received by the destination, it is turned around and transmitted back to the source as a Backward RM (BRM) cell. Any of the CI, NI and ER fields may be changed by an ATM switch along the VC, or by the destination, before the corresponding BRM cell returns to the source. The calculation of the fair share can be done in accordance with any congestion control scheme recommended by the ATM Forum, such as ERICA (Explicit Rate Indication for Congestion Avoidance) [14]. ACR always varies between the limits of MCR (Minimum Cell Rate) and PCR (Peak Cell Rate). Several rules are defined in [3] for modifying ACR according to the received information. The major ones are:

NI = 0 and CI = 0: ACR ← max[MCR, min[ER, PCR, ACR + RIF × PCR]]
NI = 1 and CI = 0: ACR ← max[MCR, min[ER, ACR]]
CI = 1: ACR ← max[MCR, min[ER, ACR × (1 − RDF)]]

RIF (Rate Increase Factor) and RDF (Rate Decrease Factor) are defined in [3] (default value: 1/16).
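The three update rules can be expressed directly in code. The Python sketch below is ours; the function name and argument order are illustrative, with the default RIF and RDF of 1/16 taken from the text.

```python
# Sketch of the ACR update rules quoted above. All rates in cells/s.
# Rule order follows the text: CI=1 decrease, NI=1 hold, otherwise increase.

def update_acr(acr, er, ci, ni, mcr, pcr, rif=1 / 16, rdf=1 / 16):
    if ci == 1:
        # multiplicative decrease, bounded below by MCR and above by ER
        return max(mcr, min(er, acr * (1 - rdf)))
    if ni == 1:
        # no increase allowed; ER may still pull the rate down
        return max(mcr, min(er, acr))
    # NI=0, CI=0: additive increase, bounded by ER and PCR
    return max(mcr, min(er, pcr, acr + rif * pcr))
```

For example, with ACR = 1000, ER = 5000, MCR = 100 and PCR = 8000, an uncongested BRM cell raises ACR by RIF × PCR = 500, while a CI = 1 cell scales it by 15/16.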

3 Explicit Congestion Notification in TCP

The TCP congestion control algorithms need to infer the state of the network from lost segments and variations in the RTT or throughput. The reliance on dropped packets to detect congestion affects the performance of delay-sensitive applications. This is due to the time required to retransmit lost packets and the need to buffer segments until missing data has arrived. A second disadvantage is the slow response to congestion, a result of the TCP sender waiting for three duplicate ACKs or a retransmit timeout before reacting [9].

As an alternative, Source Quench messages and fields in packet headers can be used to obtain information on the level of congestion in the network [9]. The ECN proposal by Ramakrishnan and Floyd [17] uses bits in the Internet Protocol (IP) and TCP headers to inform the TCP source about congestion in the network. In the proposal, routers mark the Congestion Experienced (CE) bit in IP headers in response to incipient congestion, based on average queue lengths, using the RED (Random Early Detection) algorithm [10,5]. Instead of dropping the packet, RED sets the CE bit in the packet header if such a bit is provided by the IP header and understood by the transport protocol. When an IP packet with the CE bit set reaches its destination, the TCP receiver sets the ECN-Echo flag in the header of the next outgoing ACK segment. The TCP receiver continues to set the ECN-Echo flag until it receives a TCP data segment with the Congestion Window Reduced (CWR) flag set in its header. This protects against ACKs that may be dropped by the network. In delayed ACK implementations, the ECN-Echo flag is set if any of the IP packets received in that interval have their CE bit set. The response to ECN is further complicated by Fast Retransmit and Retransmit Timeout events.
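As a rough illustration of the RED-based marking step, the sketch below decides whether to set the CE bit from an average queue length. The thresholds and marking probability are placeholder values we chose for the example, not parameters from the paper or from [10].

```python
# Illustrative RED marking decision: below min_th never mark, above max_th
# always mark, in between mark with a probability that grows linearly.

import random

def red_mark(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Return True if the arriving packet's CE bit should be set."""
    if avg_q < min_th:
        return False
    if avg_q >= max_th:
        return True
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    return random.random() < p
```

In a drop-based router the same decision would discard the packet; ECN replaces the drop with a mark when both endpoints understand the CE bit.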

Another congestion control method, called Binary Congestion Notification (BCN) in TCP, also obtains feedback from the network through bits in the IP header. As BCN attempts to integrate TCP and ATM's ABR service, its response to congestion differs from that of ECN [19]. Integration is achieved by getting TCP to reduce its window by a multiplicative decrease factor (MDF) when congestion is experienced. This reaction to congestion is similar to the ABR response, thus attempting to match the data rate of both layers based on feedback from the network.

4 Integrating TCP ECN and ABR

TCP and ABR use two different ways to detect congestion in the network: TCP uses packet losses to regulate its rate (window) whereas ABR uses the feedback information transported by RM cells to control its ACR. It is worth mentioning that ABR does not wait for a packet loss to reduce its rate. Also, in case of congestion, the ABR control loop is designed to slow down the ABR source. With current implementations of TCP, ATM networks are limited to packet drops as the only viable mechanism to inform TCP sources of congestion. Thus, notification is implicit.

The current TCP ECN proposal for TCP/IP networks relies on modifications to the IP header by elements in the network [17]. When TCP runs over ATM, access to the IP header by switches can incur high buffering and processing overheads. Hence we look to the ATM cell for a congestion indication mechanism. The ABR service provides this through the CI field in its RM cell feedback scheme.

The EFCI bit in all ATM cell headers can be used to obtain congestion feedback at the receiver for TCP ECN to operate, even when another ATM service such as UBR (Unspecified Bit Rate) is employed. Having chosen the feedback mechanism, we are left with the problem of processing the feedback and informing the TCP receiver about congestion in the network. We achieve this through the use of an agent, as discussed in the next two sections.

4.1 Role of the Agent

The agent bridges the gap between the ATM layer and the TCP layer in the protocol stack at the receiver end. Each RM cell received by the ABR layer results in the CI value of that cell being sent to the agent. The end-of-PDU indicator of the (AAL5) Protocol Data Unit received at the ATM layer is also signalled to the agent. The agent can then decide if the network has experienced congestion during the newly received packet's life. If it is deduced that congestion was experienced, the agent informs the TCP receiver and the next outgoing ACK will have its ECN-Echo bit set. The procedures in [17] can then be used to respond to the congestion at the TCP sender.

The proposed implementation requires that changes be made at the receiver's ATM and TCP layers. This can be further simplified if the agent sets the IP header's CE bit, thus making TCP totally unaware of the source of the congestion feedback, which could have come from an ATM network or a TCP/IP network. Figure 1 shows the location of the agent in the protocol stack and the logical ECN link between the TCP receiver and TCP sender.

Figure 1: Location of Agent and ECN Operation

4.2 Algorithms

Variations in ABR's ACR occur at a very high rate (small time scales) while TCP's window adjustments operate over longer time scales. Initial efforts in developing an algorithm to decide if congestion was experienced in the network centred on calculating a value for ACR at the receiver based on the CI, NI and ER fields of FRM cells. The goal was to observe trends in the ATM cell rate and attempt to adjust TCP's congestion window accordingly.

A weighted scheme was tried with more recent ACR values being given higher importance: from the beginning until the detection of the end of a packet, ACRi in RM cell i received at time ti has a weight (wi) greater than ACRj in RM cell j received at tj, for ti > tj. Several reasons why this failed are: (1) the ERICA algorithm resulted in ER variations due to fair sharing of bandwidth, not just congestion; (2) variations in ATM's ACR occur at a very high rate (small time scales) while TCP's window adjustments operate over longer time scales; and (3) TCP's ECN window adjustment causes a drastic reduction in packet flow while ATM makes fine adjustments to its flow rate.

It was then decided that only CI feedback would be used to determine if the network is experiencing congestion. The following criterion has been adopted: if half or more of the RM cells in a packet have the CI bit set, then the packet is said to have experienced congestion.


Since a relative and not an absolute measurement is used, this rule is fair to both large and small packets. In cases where the last cell of a packet is lost, the agent will perform the measurement over two packets. This cannot be avoided because the agent does not know the maximum packet size. One interesting observation was that during congestion, RM cells tend to arrive in batches, often without data cells in between them. This is due to the preferential treatment given to RM cells by switches.

The Agent

The agent counts the number of RM cells received since the end of the last packet. It also accumulates the number of CI bits observed for the current packet. The end of the packet (Payload Type (PTI) field set to 1) is indicated by the ABR sink through a trigger mechanism without any data transfer. On detecting the end of the packet, the agent sends a signal to the TCP receiver to indicate congestion based on the following simple test:

if (CI_received ≥ RM_cells_received / 2 and RM_cells_received > 0)
    send a signal to the TCP receiver to indicate congestion.

The CI and RM cell counters (CI_received and RM_cells_received, respectively) are then reset to prepare for a measurement on the next packet.
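The agent's counting and reset logic can be sketched as follows. This is our illustration; the class and callback names are assumptions, not the YATS implementation.

```python
# Minimal sketch of the agent's per-packet congestion test described above:
# count RM cells and CI bits between end-of-PDU indications.

class Agent:
    def __init__(self, notify):
        self.notify = notify        # callback into the TCP receiver
        self.rm_cells_received = 0
        self.ci_received = 0

    def on_rm_cell(self, ci):
        # ABR layer forwards the CI value of each received RM cell
        self.rm_cells_received += 1
        self.ci_received += ci

    def on_end_of_pdu(self):
        # half or more RM cells carried CI=1 -> congestion experienced
        if self.rm_cells_received > 0 and \
                self.ci_received >= self.rm_cells_received / 2:
            self.notify()
        # reset counters for the next packet
        self.rm_cells_received = 0
        self.ci_received = 0
```

Because the test is relative to the RM cell count, the same code handles both 512-byte and 9180-byte packets without knowing the packet size.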

TCP Receiver

On receiving a congestion signal from the agent, the TCP receiver begins setting ECN-Echo bits in outgoing ACKs. This continues until a data segment with the CWR bit set to one is received. Special care is taken to ensure that if a segment carrying the CWR flag itself experienced congestion, it is taken as a new instance of congestion in the network and hence the ECN-Echo bit is set again.

TCP Sender

The TCP sender's response to the ECN-Echo bit is complex, as it needs to merge ECN's window adjustments with TCP's existing congestion control algorithms. Detailed testing was done to ensure compliance with the proposal in [17]. Flags are used in the source code to prevent window adjustments due to Fast Retransmit/Fast Recovery or RTO from operating at the same time as ECN's window adjustments. The indication of congestion is treated just like a congestion loss in non-ECN-capable TCP. That is, the TCP sender halves the congestion window CWND and reduces the slow start threshold SSTHRESH.

A variable records when the ECN action should terminate, which is approximately one RTT after an ECN-Echo is received and the window reduced. To inform the TCP receiver about responses to congestion, the CWR bit is set when the TCP sender reduces CWND. Finally, the TCP sender does not increment CWND for acknowledgements that have the ECN-Echo bit set.
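A simplified sketch of this sender behaviour is given below. It is ours, under the assumption that the reaction period ends with the first ACK carrying no ECN-Echo; the paper instead terminates the reaction after approximately one RTT.

```python
# Sketch of the sender's ECN reaction described above: halve CWND once per
# reaction period, stop growing CWND on ECN-Echo ACKs, and flag CWR on the
# next outgoing data segment. Names and units (packets) are illustrative.

class EcnSender:
    def __init__(self, cwnd=20, ssthresh=64):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.cwr_pending = False
        self.reacting = False        # suppresses repeated reductions

    def on_ack(self, ecn_echo):
        if ecn_echo:
            if not self.reacting:
                self.ssthresh = max(2, self.cwnd // 2)
                self.cwnd = self.ssthresh   # halve the window once
                self.cwr_pending = True     # set CWR on next data segment
                self.reacting = True
            return                   # no CWND growth on ECN-Echo ACKs
        self.reacting = False        # reaction period over
        self.cwnd += 1               # normal window growth resumes

    def next_segment_flags(self):
        cwr, self.cwr_pending = self.cwr_pending, False
        return {"CWR": cwr}
```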

5 Experimental Design

5.1 Simulation Experiments and Objectives

We describe in this section the network configurations and parameters used in the simulation. Experiments were performed on two network configurations (LAN and WAN). A comparison between LAN- and WAN-sized networks was achieved by varying delays between senders, switches and receivers. For each scenario, simulations were run with: (1) a constant number of sources and varying buffer size, and (2) a constant buffer size and a varying number of sources. The objective of the experiments is to observe the behaviour of TCP ECN relative to that of TCP Reno in different circumstances, to analyse the set of experiments when running simulations with TCP Reno and TCP ECN, and to show that the agent-based algorithm (TCP ECN) exhibits better performance under the various scenarios tested.


5.2 Topologies

LAN Model

Simulation is performed on the N-source configuration consisting of N identical TCP sources that send data whenever allowed by the window (Figure 2). All traffic is unidirectional (discounting the TCP ACKs and the BRM cells in the reverse direction). N TCP sources and N TCP destinations are connected to the switch.

Figure 2: LAN Simulation Model (configuration 1)

WAN Model

Figure 3: WAN Simulation Model (configuration 2)

The WAN simulation model, borrowed from [11], is a multistage wide area network with four switches SWi (i = 0..3) (Figure 3). The sources are divided into four groups. The purpose of choosing such a complex model is to make it as realistic as possible:

Group Source 1 to Destination 1: end-to-end traffic
Group Source 2 to Destination 2: two-hop traffic
Group Source 3 to Destination 3: two-hop traffic


Group Source 4 to Destination 4: one-hop traffic

5.3 Simulation Tool

The YATS simulation package is used in the simulation study [4]. YATS is a detailed simulator for TCP over ATM. Most of the common features of TCP, such as slow start, congestion detection, congestion avoidance, fast retransmit and fast recovery, are included in the simulator. Several objects were modified to incorporate TCP ECN into YATS. The agent was developed from scratch, while major changes were made to the TCP receiver and TCP sender according to the proposal in [17]. Minor changes to the ABR sink object allowed CI information and the end-of-packet indication to be sent to the agent. An ECN-Echo bit was added to the TCP ACK data unit and a CWR bit to the TCP data frame. Traffic is generated by YATS-specific objects using a TCP/IP/AAL5/ATM protocol stack.

5.4 Simulation Scenario

Experiments are performed using the basic ERICA algorithm implemented at the ATM switch. The default maximum TCP window is 64 kB, which is sufficient to fill the network pipe in the LAN. The TCP sources are greedy sources with an infinite supply of data, and always have data ready to send as and when permitted by the TCP flow control. A large bulk data transfer application runs on top of the TCP sources and recognises the start-stop protocol. The effects of TCP packet size are studied using sources that generate 512- and 9180-byte frames. We take into account 20 bytes of TCP header and 20 bytes of IP header. This was implemented by altering the Maximum Transfer Unit (MTU) parameter of the TCP sender. The timestamp option (provided by YATS) appears in every data or ACK segment, adding 12 bytes to the 20-byte TCP header, and gives the sender an accurate RTT measurement for every ACK segment [13].

To study the congestion behaviour at the ATM and TCP layers, we consider that another VBR (greedy) application is running in the background. The VBR application has higher priority than ABR. We assume the ATM switch has a buffer shared by all VCs passing through it. The scheduling policy at the buffer is FIFO. In order to induce variable congestion in the switches, the size of the shared buffer is varied across the simulation cases. The buffer sizes chosen are in the range [50 .. 2400] cells. The lower part of the range imposes a very tight buffer constraint, while the upper part offers a much more relaxed constraint. The switches are configured with ERICA and binary feedback (CI) but with no specific cell drop policy. A threshold at the ATM switch is set to half the buffer size to turn congestion indication on and change the CI field along the VC, or at the destination, before the corresponding BRM cell returns to the source. All links run at 155.52 Mbps.

We consider a LAN scenario (Scen.LAN) where the delay (dLAN + dSWITCH + dLAN) is set to a typical value of 100 µs (Figure 2). The number of sources N is set in the range [10..200]. The values of N are arbitrarily selected, but are nonetheless deemed sufficient for producing the traffic flow required in a simulation study of high-speed networking. In the WAN model (Scen.WAN), the TCP sources each establish a single connection with the similarly numbered destination across the ATM switches and the bottleneck links (Figure 3). As in the LAN model, the internal delay in each ATM switch is set to 10 µs. The delay (dWAN×5 + dSWITCH×4) is set to a typical value of 10 ms. Finally, the number of sources in each group is set in the range [20..50]. In both scenarios all the sources start transmitting at the same time. The analysis of the performance results is based on the long-term congestion behaviour of the network.

6 Results and Evaluation


6.1 Performance Metrics

Four performance metrics were used to compare TCP Reno and TCP ECN for ATM. It was desirable to observe performance at both the TCP and ATM layers. The metrics chosen for ATM are the Cell Loss Ratio (CLR) and the average buffer utilisation, whereas TCP's metrics are the Packet Retransmitted Ratio (PRR) and the average throughput per connection. The average buffer utilisation is the percentage of the buffer that has been occupied on average. It is measured in each switch to show whether the RM cell feedback mechanism and TCP ECN are able to keep queue lengths below the thresholds specified (50% lower and 75% upper). The TCP window size also affects queue lengths since it is able to regulate data flow over longer durations when compared to ATM. A higher buffer utilisation results in poorer delay characteristics for connections with cells in the buffer.
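For concreteness, the four metrics might be computed from raw counters as below. The paper does not give explicit formulas, so these definitions are our assumptions.

```python
# Illustrative definitions of the four metrics, expressed as percentages
# (throughput per connection would simply be bytes acknowledged / time).

def cell_loss_ratio(cells_lost, cells_sent):
    return 100.0 * cells_lost / cells_sent          # CLR (%)

def packet_retransmitted_ratio(retransmitted, packets_sent):
    return 100.0 * retransmitted / packets_sent     # PRR (%)

def avg_buffer_utilisation(queue_samples, buffer_size):
    # mean sampled queue length as a fraction of the shared buffer
    return 100.0 * sum(queue_samples) / (len(queue_samples) * buffer_size)
```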

6.2 LAN Configuration

The following sections discuss the simulation variables and their effects on the congestion control methods. From the simulation results, it was observed that segment size had minimal influence on the performance of TCP Reno and TCP ECN.

On the whole, TCP ECN produced favourable results for this configuration with regards to CLR, PRR and switch buffer utilisation. Improvements or degradation in throughput were less noticeable, especially in the WAN configurations. It must be noted that throughput for a varying number of sources spans several orders of magnitude, and small differences between TCP ECN and TCP Reno may not be visible.

6.2.1 Varying the Number of Sources

CLR and PRR: TCP ECN consistently achieved lower cell loss and packet retransmissions regardless of the number of sources present. Simulations with fewer than 140 sources recorded zero cell loss. PRR exhibited similar characteristics to CLR, as expected. The following graph shows the relative performance of the two methods for a LAN-512byte environment.

[Graph: CLR (%) vs. number of sources (3 to 200) for TCP Reno and TCP ECN. ERICA, buffer size 131069, TCP packet size 512 bytes.]

The ECN feedback mechanism appears to reduce CLR and PRR by making the source window-limited when congestion is noticed on the network.

Throughput: Both congestion control methods gave comparable throughput measurements. Closer examination of the larger-source simulations shows this to be true, as illustrated in the following graph for the WAN-9128byte scenario.

[Graph: Throughput (bps) vs. number of sources (100 to 200) for TCP Reno and TCP ECN. ERICA, buffer size 131069, TCP packet size 9128 bytes.]

Buffer Utilisation: In almost all the experiments, TCP ECN maintained lower average buffer lengths. The graph below shows a LAN-9128byte environment with TCP ECN following the general trend of TCP Reno but with a downward bias.

[Graph: Switch average buffer utilisation (%) vs. number of sources (3 to 200) for TCP Reno and TCP ECN. ERICA, buffer size 131069, TCP packet size 9128 bytes.]

Changes in window size due to the TCP ECN mechanism provide a long-term congestion control strategy when compared to ATM's ACR control. This has resulted in the lower buffer lengths observed here. Maintaining a lower buffer utilisation is desirable because it reduces the chance of cell loss and gives the traffic better delay characteristics.

TCP ECN is able to provide significant improvements in PRR, CLR and buffer utilisation with minimal impact on throughput. This was found to be true in all the scenarios investigated.

6.2.2 Varying the Buffer Size

CLR and PRR: As expected, the results show that cell loss decreases when larger buffer sizes are used. CLR and PRR measurements were significantly lower in TCP ECN simulations when compared to TCP Reno. This is shown in the results for a WAN-512byte environment below.

[Graph: PRR (%) vs. buffer size (4081 to 262138) for TCP Reno and TCP ECN. ERICA, 120 sources, TCP packet size 512 bytes.]

Throughput: Throughput was seen to improve with larger buffer sizes due to the increased ability of the switch to cope with congestion. LAN simulations for TCP ECN showed noticeable improvements in throughput for medium-sized buffers (32 kB to 128 kB), as shown in the graph below (LAN-512byte).

[Graph: Throughput (bps) vs. buffer size (4081 to 262138) for TCP Reno and TCP ECN. ERICA, 30 sources, TCP packet size 512 bytes.]

Buffer Utilisation: The graph below shows TCP ECN's ability to maintain lower queue lengths in a WAN-9128byte environment. Similar results were achieved for the LAN.

[Figure: ERICA, 120 sources, TCP packet size 9128. Switch average buffer utilisation (%) vs. buffer size; Reno vs. ECN.]


By matching CLR, throughput and buffer utilisation graphs the following can be seen:

• Cell loss is greatest for small buffer sizes.

• Buffer utilisation is below the 10% mark for a buffer size of 4kB and increases after that. Note that the lower threshold is set at 50%.

• The lower throughput at small buffer sizes (number of sources constant) indicates longer link idle times.

• In most cases, TCP ECN showed a significantly lower CLR/PRR for small buffer sizes.

All these points indicate bursty traffic with long idle times, and that the TCP ECN advantage is most striking for such traffic characteristics. The 9128-byte scenario achieved 10% higher throughput when compared with the 512-byte simulations.

6.3 WAN Configuration

Analysis of results in the four-switch configuration is more complex due to the number of switches and groups involved. The vast difference in characteristics of the groups makes comparisons between different groups difficult. A brief general comparison of results between the various groups is given below:

CLR and PRR WAN scenarios resulted in groups 1 and 2 having similar results. Group 3 encountered the highest losses and retransmissions while group 4 had the lowest.

In LAN simulations, groups 1 and 3 had similar results while group 2 had the lowest and group 4 the highest losses.

Throughput In all experiments, throughput measurements were similar for group 1 and group 3. Groups 2 and 4 had higher values. This is reasonable because groups 1 and 3 both experience bottlenecks at switch 2 and again at switch 3.

Buffer Utilisation Switch 0 consistently encountered a zero average queue length and switch 2 had the highest. This indicates that switch 0 is not experiencing congestion and switch 2 is most likely a bottleneck.

TCP ECN and TCP Reno metrics did not differ significantly in most cases. The following sections will concentrate on the major differences that were observed between the two methods. The few instances where 512-byte simulations exhibit different characteristics from 9128-byte simulations will also be noted.

6.3.1 Varying the Number of Sources

CLR and PRR LAN simulation results showed little sign of cell loss or packet retransmission except for very large source numbers.

In WAN simulations, TCP ECN outperformed TCP Reno for 512-byte simulations. TCP ECN failed to perform well when using 9128-byte segments.

Throughput Both methods achieved identical throughputs in all cases.


Buffer Utilisation No significant differences in performance were observed between TCP ECN and TCP Reno except in switch 2 (WAN-512byte) when utilisation exceeded 20%. The following graph shows TCP ECN maintaining average buffer utilisation below 15%.

[Figure: ERICA, buffer size 8162, TCP packet size 512. Switch 2 average buffer utilisation (%) vs. number of sources; Reno vs. ECN.]

6.3.2 Varying the Buffer Size

CLR and PRR The LAN scenarios did not generate significant losses except for very small buffer sizes.

TCP ECN consistently outperformed TCP Reno in the WAN-512byte environment. TCP ECN results for WAN-9128byte simulations are mixed, as shown in the group 1 PRR graph below.

[Figure: ERICA, 50 sources, TCP packet size 9128. PRR (%) vs. buffer size; Reno vs. ECN.]

Throughput The LAN environment shows throughput increasing with buffer size for groups 1, 3 and 4. Throughput for group 2 decreases as the buffers increase. No conclusion can be drawn on the relative performance of TCP ECN and TCP Reno.

In WAN scenarios, TCP ECN achieved better throughput for groups 2 and 4 while results for the remaining groups were poorer. The following graph shows improvements to group 2 in the WAN-9128byte scenario.


[Figure: ERICA, 50 sources, TCP packet size 9128. Throughput (bps) vs. buffer size; Reno vs. ECN.]

Buffer Utilisation TCP ECN lowered buffer utilisation in switches 2 and 3 for WAN experiments. The graph below shows TCP ECN maintaining the average buffer length of switch 2 in the WAN-512byte scenario under the 50% mark.

[Figure: ERICA, 50 sources, TCP packet size 512. Switch 2 average buffer utilisation (%) vs. buffer size; Reno vs. ECN.]

7 Summary and Conclusion

In this paper the enhancement of TCP's congestion control mechanisms using Explicit Congestion Notification over ATM networks has been overviewed. In addition to packet losses as an indication of congestion, an agent at the network's edge, bridging the gap between the ATM layer and the TCP layer in the protocol stack, notifies congestion, which results in an adjustment of the credit-based window size of TCP. To do so, the agent uses ABR's rate-based flow control.
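The agent's bridging role can be sketched as follows. This is a hypothetical sketch under the scheme's stated assumptions; the class and field names are illustrative, not the authors' implementation. The agent watches ABR backward RM cells for a congestion indication and marks the segments it hands up so TCP's ECN machinery reduces the window.

```python
from dataclasses import dataclass

@dataclass
class RMCell:
    ci: bool   # Congestion Indication bit carried in the RM cell
    er: float  # Explicit Rate computed by the switches (e.g. by ERICA)

@dataclass
class TcpSegment:
    payload: bytes
    ece: bool = False  # ECN-Echo flag seen by the TCP layer

class EdgeAgent:
    """Bridges ATM-layer (ABR) feedback to the TCP layer at the receiver end."""

    def __init__(self):
        self.congested = False

    def on_rm_cell(self, cell: RMCell):
        # ABR rate-based feedback: a set CI bit means the path is congested.
        self.congested = cell.ci

    def deliver(self, seg: TcpSegment) -> TcpSegment:
        # Tag segments handed to TCP so the sender's ECN response
        # adjusts the window without waiting for a packet loss.
        if self.congested:
            seg.ece = True
        return seg
```

The design point is the separation of concerns: the switches keep their cell-level rate control, TCP keeps its window-based control, and the agent only translates between the two.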

The simulation environment and simulation results over LAN as well as WAN distances have been presented. An overall comparison of the LAN results indicates that TCP ECN performs better relative to TCP Reno. This is clearly seen from the CLR, PRR and throughput results for varying switch buffer size. Buffer utilisation was kept below 50% in the LAN scenario. Average switch buffer lengths were always maintained at lower levels in the TCP ECN implementation, thus giving the traffic better delay characteristics. However, throughput achieved while varying the number of sources was almost identical for TCP Reno and TCP ECN. Performance differences between TCP ECN and TCP Reno were more pronounced for WAN simulations, with TCP ECN outperforming TCP Reno. Further work must be done in investigating the fairness of this explicit notification scheme more thoroughly, as various studies [19] have shown that using, for example, binary congestion notification results in poor fairness behaviour.

The EFCI bits (available in every ATM cell header) can be used to generalise the ECN response to the ATM UBR service. UBR is of particular interest since it does not have any ATM guarantees or feedback mechanism. Through the use of an agent at the network's edge, TCP's congestion control may become sufficiently effective to overcome UBR's shortcomings [7]. Unlike the use of the CI bit in ABR RM cells to determine congestion, EFCI provides more up-to-date information on the network since it is received with every ATM cell.
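A per-cell EFCI check of this kind could look as follows. The framing is simplified and the marking policy (a threshold on the fraction of a packet's cells arriving with EFCI set) is an illustrative assumption, not taken from the paper: for user-data cells, the middle bit of the 3-bit Payload Type (PTI) field carries the EFCI indication.

```python
def efci_set(pti: int) -> bool:
    """True if a user-data cell carries EFCI (congestion experienced)."""
    if pti & 0b100:
        # PTI values 1xx are OAM/management cells; no EFCI meaning.
        return False
    # PTI values 01x are user-data cells with EFCI set.
    return bool(pti & 0b010)

def should_mark_ecn(cells_pti, threshold=0.5):
    """Decide whether to mark ECN for a reassembled packet.

    Marks when at least `threshold` of the packet's cells arrived with
    EFCI set. The threshold is an illustrative policy choice.
    """
    flagged = sum(efci_set(p) for p in cells_pti)
    return flagged / len(cells_pti) >= threshold
```

Because every cell carries the bit, the agent can react within a single packet's reassembly time rather than waiting for the next RM cell.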

References

[1] Proceedings of the 3rd International Conference on Autonomous Agents (Agents'99), Seattle, Washington, May 1999. ACM. http://www.cs.washington.edu/research/agents99/.

[2] M. Allman, V. Paxson and W. Stevens. TCP Congestion Control. RFC 2581, Apr 1999.

[3] ATM Forum Technical Committee. Traffic Management Specification. Version 4.1, AFTM-0121.000, Mar 1999.

[4] M. Baumann. Yet Another Tiny Simulator - Version 0.3. Communications Laboratory, Dresden University of Technology, Jan 1998. http://www.ifn.et.tu-dresden.de/TK/yats/yats.html

[5] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski and L. Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet. RFC 2309, Apr 1998.

[6] L.S. Brakmo, S.W. O'Malley and L.L. Peterson. TCP Vegas: New Techniques for Congestion Detection and Avoidance. In Proceedings of the SIGCOMM '94 Symposium, Aug 1994. http://www.cs.unc.edu/~clark/research/papers/vegas.ps.

[7] K. Djemame, M. Kara. An Agent Based Congestion Control and Notification Scheme for TCPover UBR. (In preparation).

[8] K. Djemame and M. Kara. Proposals for a Coherent Approach to Cooperation between TCP and ATM Congestion Control Algorithms. In J.T. Bradley and N.J. Davies, editors, Proceedings of the 15th Annual UK Performance Engineering Workshop (UKPEW'99), pages 273-284, Bristol, UK, Jul 1999. http://www.scs.leeds.ac.uk/atm-mm/papers/ukpew99.ps.gz.

[9] S. Floyd. TCP and Explicit Congestion Notification. ACM Computer Communication Review, 24(5):8-23, Oct 1994. http://www.aciri.org/floyd/papers/tcp_ecn.4.ps.Z.

[10] S. Floyd and V. Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, Aug 1993. ftp://ftp.ee.lbl.gov/papers/early.pdf.

[11] M. Hashmani, K. Kawahara, H. Sunahara and Y. Oie. Issues of Congestion Control and Notification Schemes in ATM Networks and Proposal of EPRCAM. In Proceedings of ICC'98, pages 291-298, Atlanta, GA, Jun 1998. IEEE. http://yen.cse.kyutech.ac.jp/~kawahara/study/Icc98.ps.gz.

[12] V. Jacobson. Congestion Avoidance and Control. ACM Computer Communication Review, 18:314-329, Aug 1988. Proceedings of the SIGCOMM'88 Symposium, Stanford, CA. ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z.

[13] V. Jacobson, R. Braden and D. Borman. TCP Extensions for High Performance. RFC 1323,May 1992.

[14] S. Kalyanaraman, R. Jain, S. Fahmy, R. Goyal and B. Vandalore. The ERICA Switch Algorithm for ABR Traffic Management in ATM Networks. Submitted to IEEE/ACM Transactions on Networking, Nov 1997. http://www.cis.ohio-state.edu/~jain/papers/erica.htm

[15] M. Kara and M.A. Rahin. Towards a Framework for Performance Evaluation of TCP Behaviour over ATM. In Proceedings of the 13th IFIP International Conference on Computer Communication (ICCC'97), pages 49-60, Nov 1997. http://www.scs.leeds.ac.uk/atm-mm/papers/1997/TCPATMPerfFrameWork.ps.gz.

[16] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow. TCP Selective Acknowledgement Options. RFC 2018, Apr 1996.

[17] K. Ramakrishnan and S. Floyd. A Proposal to add Explicit Congestion Notification (ECN) to IP. RFC 2481, Jan 1999.

[18] A. Romanow and S. Floyd. Dynamics of TCP Traffic over ATM Networks. IEEE Journal on Selected Areas in Communications, 13(4):633-641, May 1995. http://www.aciri.org/floyd/papers/tcp_atm.ps.Z.

[19] D. Sisalem and H. Schulzrinne. Binary Congestion Notification in TCP. In Conference Record of the International Conference on Communications (ICC), Dallas, Texas, Jun 1996. IEEE. http://www.cs.columbia.edu/~hgs/papers/Sisa9606_Binary.ps.gz.

[20] W.R. Stevens. TCP/IP Illustrated, Vol.1. Addison Wesley, 1994.

[21] W. Stallings. High-Speed Networks. TCP/IP and ATM Design Principles. Prentice-Hall, 1998.


Acronyms

AAL5  ATM Adaptation Layer 5
ABR  Available Bit Rate
ACK  Acknowledgement
ACR  Allowed Cell Rate
ATM  Asynchronous Transfer Mode
BCN  Binary Congestion Notification
BRM Cell  Backward Resource Management Cell
CE  Congestion Experienced
CI  Congestion Indication
CLR  Cell Loss Ratio
CWND  Congestion Window
CWR  Congestion Window Reduced
ECN  Explicit Congestion Notification
EFCI  Explicit Forward Congestion Indication
EPD  Early Packet Discard
ER  Explicit Rate
ERICA  Explicit Rate Indication for Congestion Avoidance
FRM Cell  Forward Resource Management Cell
LAN  Local Area Network
IP  Internet Protocol
MCR  Minimum Cell Rate
MTU  Maximum Transfer Unit
NI  No Increase
PCR  Peak Cell Rate
PPD  Partial Packet Discard
PRR  Packet Retransmission Ratio
PTI  Payload Type
RDF  Rate Decrease Factor
RIF  Rate Increase Factor
RTT  Round Trip Time
RTO  Retransmission Time Out
SSTHRESH  Slow Start Threshold
TCP  Transmission Control Protocol
UBR  Unspecified Bit Rate
VBR  Variable Bit Rate
VC  Virtual Circuit
WAN  Wide Area Network


6.4

Configuration 1 - LAN - Varying Buffer Size

[Figures: ERICA, 30 sources, TCP packet size 512. Panels: CLR (%), switch average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 1 - LAN - Varying Number of Sources

[Figures: ERICA, buffer size 131069, TCP packet size 512. Panels: CLR (%), switch average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 1 - LAN - Varying Buffer Size

[Figures: ERICA, 30 sources, TCP packet size 9128. Panels: CLR (%), switch average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 1 - LAN - Varying Number of Sources

[Figures: ERICA, buffer size 131069, TCP packet size 9128. Panels: CLR (%), switch average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 1 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 512. Panels: CLR (%), switch 0 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 1 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 512. Panels: CLR (%), switch 0 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 2 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 512. Panels: CLR (%), switch 1 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 2 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 512. Panels: CLR (%), switch 1 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 3 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 512. Panels: CLR (%), switch 2 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 3 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 512. Panels: CLR (%), switch 2 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 4 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 512. Panels: CLR (%), switch 3 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 4 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 512. Panels: CLR (%), switch 3 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 1 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 9128. Panels: CLR (%), switch 0 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 1 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 9128. Panels: CLR (%), switch 0 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 2 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 9128. Panels: CLR (%), switch 1 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 2 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 9128. Panels: CLR (%), switch 1 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 3 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 9128. Panels: CLR (%), switch 2 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 3 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 9128. Panels: CLR (%), switch 2 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]

Configuration 2 - Group 4 - WAN - Varying Buffer Size

[Figures: ERICA, 50 sources, TCP packet size 9128. Panels: CLR (%), switch 3 average buffer utilisation (%), throughput (bps) and PRR (%) vs. buffer size; Reno vs. ECN.]

Configuration 2 - Group 4 - WAN - Varying Number of Sources

[Figures: ERICA, buffer size 8162, TCP packet size 9128. Panels: CLR (%), switch 3 average buffer utilisation (%), throughput (bps) and PRR (%) vs. number of sources; Reno vs. ECN.]