
Routers with small buffers: impact of packet size on performance for mixed TCP and UDP traffic.



by

Md. Mohsinul Jahid

B.Sc., Bangladesh University of Engineering and Technology, 2007

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of

Master of Science

in the Department of Computer Science

© Md. Mohsinul Jahid, 2012

University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.


Routers with small buffers: Impact of packet size on performance for mixed TCP and UDP traffic

by

Md. Mohsinul Jahid

B.Sc., Bangladesh University of Engineering and Technology, 2007

Supervisory Committee

Dr. Sudhakar Ganti, Supervisor (Department of Computer Science)

Dr. Yvonne Coady, Departmental Member (Department of Computer Science)


Supervisory Committee

Dr. Sudhakar Ganti, Supervisor (Department of Computer Science)

Dr. Yvonne Coady, Departmental Member (Department of Computer Science)

ABSTRACT

Recent research results on buffer sizing have challenged the widely used assumption that routers should buffer millions of packets. These new results suggest that when smooth Transmission Control Protocol (TCP) traffic goes through a single tiny buffer of size O(log W), where W is the maximum window size of the TCP flows, close-to-peak throughput can be achieved. However, current routers have buffers much larger than that. It has also been shown that the buffer size can be reduced by a factor of √N, where N is the number of flows, when the traffic is sufficiently smooth. The main goal of this thesis is therefore to give some directions on how the buffer size can be reduced in Internet routers. In this research, we varied packet sizes, network scenarios, buffer sizes, and link delays to study the performance of small buffers in the presence of both TCP and UDP traffic.


Contents

Supervisory Committee ii

Abstract iii

Table of Contents iv

List of Tables vii

List of Figures viii

Acknowledgements xi

Dedication xii

1 Introduction 1

1.1 Contribution . . . 1

1.1.1 Different Packet Sizes . . . 2

1.1.2 Different Link Delays . . . 2

1.1.3 Mixed TCP and UDP Traffic . . . 2

1.1.4 Different Network Scenarios . . . 2

1.2 Importance . . . 3

1.3 Thesis Outline . . . 4

2 Background and Related Work 5

2.1 Problem Statement . . . 5

2.2 Advantages of Smaller Buffer in Routers . . . 6

2.3 Related Work . . . 7

2.4 FEC Approach . . . 10

2.5 Well-paced TCP . . . 10

2.5.1 (σ, ρ) Bound . . . 11

2.5.2 Traffic Shaping . . . 12

2.6 IRIS Router Architecture . . . 12

2.7 Adaptive Queue Management . . . 14

2.8 ECN and RED with Small Buffer . . . 14

2.8.1 Random Early Detection (RED) . . . 14

2.8.2 Explicit Congestion Notification (ECN) . . . 15

2.9 Impact of packet sizes . . . 17

3 Architecture of the Topology 19

3.1 Input and Output Queues in a Router . . . 20

3.2 TCP and UDP Packet Arrivals . . . 20

3.3 Simple Network topology with Packet Sizes . . . 21

3.3.1 Markov Model for Mixed TCP and UDP Traffic . . . 22

3.4 Complex Network Topology . . . 23

3.4.1 Linear Parking-lot Network Model . . . 24

3.4.2 n-link Parking-lot Network Model . . . 24

3.5 Droptail and Random Early Detection (RED) . . . 25

3.6 Proposed Router Buffer Sizes . . . 27

3.6.1 Near-100% Utilization (MegaByte Buffers) . . . 27

3.6.2 80-90% Utilization (KiloByte Buffers) . . . 27

4 Performance Evaluation, Analysis and Comparisons 29

4.1 Simulation Environment . . . 29

4.2 Performance Metrics . . . 30

4.3 Different Parameters Used in the Simulations . . . 31

4.3.1 RED Parameters Used in the Simulations . . . 31

4.3.2 CBR Parameters Used in the Simulations . . . 31

4.3.3 VBR Parameters Used in the Simulations . . . 32

4.4 Simulation Scenarios . . . 32

4.5 Single Bottleneck: Dumbbell Topology . . . 33

4.5.1 Performance of Single Bottleneck Link: Dumbbell Topology with Constant Bit Rate (CBR) UDP source . . . 33

4.5.2 Dumbbell topology: Analysis of Variable Bit Rate (VBR) UDP traffic . . . 36

4.5.3 Dumbbell topology with fixed TCP packet size and variable UDP packet sizes . . . 39

4.6 Multiple bottleneck links: Parking-lot model . . . 40

4.6.1 Multiple bottleneck link: Linear parking-lot model with CBR . . . 40

4.6.2 Multiple bottleneck link: Linear parking-lot model with VBR traffic . . . 44

4.6.3 Multiple bottleneck: n-link parking-lot model with CBR UDP traffic . . . 48

4.6.4 Multiple bottleneck link: n-link parking-lot model with VBR UDP traffic . . . 51

4.7 Dumbbell topology (12 TCP sources and a single UDP source): Analysis with link delays . . . 55

4.8 Parking-lot topology: Analysis with link delays . . . 57

4.9 Difference of packet loss (UDP and TCP) between fixed and variable buffer size . . . 60

5 Conclusions and future work 62

5.1 Further Research Issues . . . 64

A Abbreviations 66

B Tables with simulation data 68


List of Tables

4.1 RED parameters and values . . . 31

4.2 CBR traffic generator parameters and values . . . 32

4.3 VBR traffic generator parameters and values . . . 32

4.4 Difference of packet loss (UDP and TCP) between fixed buffer and variable buffer . . . 61

B.1 Dumbbell topology (CBR) simulation data . . . 69

B.2 Dumbbell topology (VBR) simulation data . . . 70

B.3 Linear parking-lot (3rd bottleneck link) CBR simulation data . . . 71

B.4 Linear parking-lot (3rd bottleneck link) VBR simulation data . . . 72

B.5 n-link parking-lot (four-flow bottleneck link) CBR simulation data . . . 73

B.6 n-link parking-lot (four-flow bottleneck link) VBR simulation data . . . 74

B.7 Dumbbell link delay topology (12 TCP and VBR) simulation data . . . 75


List of Figures

2.1 Small buffer problem general topology. . . 5

2.2 Positions of core routers in the network topology. . . 9

2.3 Token bucket regulator. . . 11

2.4 Token bucket regulator simulation result by [1]. . . 12

2.5 Rate control operations for IRIS router [2] . . . 13

3.1 General network topology for small buffer model with access and bottleneck links. . . 19

3.2 Input and output queues on switching architecture for routers. . 20

3.3 The leading edge of a burst is sharper with smaller packet size, thus filling up the buffer faster than with larger packets [3]. . . 21

3.4 Markov model to discuss the mixed traffic characteristics. . . . 22

3.5 Parking lot model. . . 23

3.6 Linear parking-lot network model. . . 24

3.7 n-link parking-lot network model. . . 24

3.8 Random Early Detection packet drop probability. . . 25

3.9 Random Early Detection algorithm [4]. . . 26

3.10 Random Early Detection gateways. . . 26

4.1 Simple network topology for simulation. . . 33

4.2 UDP packet loss: Dumbbell topology with CBR . . . 34

4.3 TCP packet loss: Dumbbell topology with CBR . . . 35

4.4 UDP throughput: Dumbbell topology with CBR . . . 36

4.5 TCP throughput: Dumbbell topology with CBR . . . 36

4.6 UDP packet loss: Dumbbell topology with VBR . . . 37

4.7 TCP packet loss: Dumbbell topology with VBR . . . 37

4.8 UDP throughput: Dumbbell topology with VBR . . . 38

4.10 Dumbbell topology with packet losses (buffer size is 20 KB and only UDP packet size is varying) . . . 40

4.11 Linear parking-lot network model for simulation. . . 41

4.12 Linear parking lot topology with packet losses (50 KB buffer with CBR) . . . 41

4.13 Linear parking lot topology with packet losses (20 KB buffer with CBR) . . . 42

4.14 Linear parking lot topology with packet losses (7 KB buffer with CBR) . . . 42

4.15 Linear parking-lot throughput for different hops (50 KB buffer with CBR) . . . 43

4.16 Linear parking lot throughput for different hops (20 KB buffer with CBR) . . . 44

4.17 Linear parking lot throughput for different hops (7 KB buffer with CBR) . . . 44

4.18 Packet loss: Linear parking-lot (50 KB buffer with VBR) . . . . 45

4.19 Packet loss: Linear parking-lot (20 KB buffer with VBR) . . . . 45

4.20 Packet loss: Linear parking-lot (7 KB buffer with VBR) . . . . 46

4.21 Throughput: Linear parking-lot (50 KB buffer with VBR) . . . 46

4.22 Throughput: Linear parking-lot (20 KB buffer with VBR) . . . 47

4.23 Throughput: Linear parking-lot (7 KB buffer with VBR) . . . . 47

4.24 n-link parking-lot network topology for simulation. . . 48

4.25 n-link parking-lot: packet loss- 50 KB buffer with CBR. . . 49

4.26 n-link parking-lot: packet loss- 20 KB buffer with CBR. . . 49

4.27 n-link parking-lot: packet loss- 7 KB buffer with CBR. . . 50

4.28 n-link parking-lot: throughput- 50 KB buffer with CBR. . . 50

4.29 n-link parking-lot: throughput- 20 KB buffer with CBR. . . 51

4.30 n-link parking-lot: throughput-7 KB buffer with CBR. . . 51

4.31 n-link parking-lot: packet loss- 50 KB buffer with VBR. . . 52

4.32 n-link parking-lot: packet loss- 20 KB buffer with VBR. . . 52

4.33 n-link parking-lot: packet loss- 7 KB buffer with VBR. . . 53

4.34 n-link parking-lot: throughput-50 KB buffer with VBR. . . 53

4.35 n-link parking-lot: throughput-20 KB buffer with VBR. . . 54

4.36 n-link parking-lot: throughput-7 KB buffer with VBR. . . 54

4.38 TCP Throughput: Many TCP (12 sources) and VBR for simulation. . . 56

4.39 UDP Throughput: Many TCP (12 sources) and VBR for simulation. . . 56

4.40 TCP packet loss: Many TCP (12 sources) and VBR for simulation. . . 57

4.41 UDP packet loss: Many TCP (12 sources) and VBR for simulation. . . 57

4.42 n-link parking-lot: packet loss- 20 KB buffer with VBR (5 ms link delay). . . 58

4.43 n-link parking-lot: packet loss- 20 KB buffer with VBR (20 ms link delay). . . 58

4.44 n-link parking-lot: throughput-20 KB buffer with VBR (5 ms link delay). . . 59

4.45 n-link parking-lot: throughput-20 KB buffer with VBR (20 ms link delay).


ACKNOWLEDGEMENTS

I would like to thank Dr. Sudhakar Ganti, whose encouragement, support and patience from the initial to the final level enabled me to develop an understanding of the subject.

I also want to thank my thesis committee member, Dr. Yvonne Coady, for her time and valuable suggestions.

Lastly, I want to thank my family members for their understanding and endless support, throughout the duration of my studies.


DEDICATION

Chapter 1

Introduction

Network congestion control is a serious issue in the modern Internet. Although the long-standing assumption has been that routers should buffer millions of packets, recent network models have suggested that these large buffers could be replaced with much smaller ones [5]. Unfortunately, it turns out that the established congestion control models are no longer valid in networks with small buffers, and therefore cannot predict how these small-buffer networks will behave. In recent years, network researchers have suggested different approaches to minimizing router buffers, such as traffic pacing and different queue management algorithms [6]. It is therefore expected that buffers in Internet routers could be made small, and the main goal of this research is to analyze the performance implications of reducing buffer sizes in Internet routers.

1.1

Contribution

In this thesis, we first studied various buffer scenarios and techniques that maintain small buffers in routers. We found that packet size is an important factor in almost all of the small-buffer scenarios: depending on the packet size, the performance of a small-buffer router varies considerably. This performance analysis is carried out for mixed TCP and UDP traffic. So far, only a few papers have analyzed the effect of packet size on small buffers. We also found an issue with some of the research presented in previous papers [3]: they fixed only the number of packets in the buffer in their analysis, so when the packet size changes, the size of the buffer in bytes also changes. This implies that, for various packet sizes, their small-buffer case does not use a fixed buffer size at all. In our study, we always fixed the size of the buffer (say, N kilobytes) and changed the packet sizes to conduct the performance analysis.

1.1.1

Different Packet Sizes

In our model, we analyze the impact of various packet sizes for mixed TCP and UDP traffic, with both Constant Bit Rate (CBR) and Variable Bit Rate (VBR) UDP sources. In this model, comparisons were made by changing the TCP packet sizes as well as the UDP packet sizes. So far, we have found two papers [7, 3] similar to our work, but in [7] only the UDP packet size is varied while the TCP packet size is fixed, so in their model the small buffer is not fixed if both TCP and UDP packet sizes are considered. The most relevant paper on this topic so far is [3]. They measure the size of the buffer in packets rather than in a fixed number of bytes. They assume that the router buffer size is measured in packets and that the flows first traverse a high-capacity link before entering the bottleneck link, as is typical for any flow in a real network.

1.1.2

Different Link Delays

Link delay is another important factor for capturing different real-life network scenarios, so we have analyzed the impact of bottleneck link delays on router buffer sizes for mixed TCP and UDP traffic. Here we fixed the packet size and covered a variety of link delays with different buffer sizes to study the impact.

1.1.3

Mixed TCP and UDP Traffic

All of our cases have been analyzed with mixed TCP and UDP traffic. In one of our simulation setups, we have a scenario with many TCP sources and many flows to see the impact of varying delays on TCP and UDP traffic. We observed the effect of Variable Bit Rate (VBR) and Constant Bit Rate (CBR) UDP sources in our simulation models and compared the performance of the network.

1.1.4

Different Network Scenarios

Different network scenarios are also considered in this work. First, a small dumbbell network topology is analyzed. Then the same topology with many TCP sources (more than twelve) is studied. Finally, a parking-lot [8] (both linear and n-link) network topology is analyzed with many sources and sinks.


1.2

Importance

There are several advantages of small buffers in optical routers. If big electronic routers required only a few dozen packet buffers, their complexity could be reduced, making them easier to build and making equipment and networks easier to scale. Generally, the BDP (Bandwidth-Delay Product) rule is a common method employed for buffer size calculation: twice the bandwidth-delay product of the network is widely accepted as a sufficient queue length for both Droptail and Active Queue Management (AQM) schemes. These schemes are used to control congestion at packet buffering points in the network. Packet buffers in routers play a major role in congestion control in the existing Internet; they absorb the incoming traffic bursts transmitted by aggressive applications. Appropriate sizing of buffers is important for striking a balance between high link utilization, the loss ratio, and the queuing delay.

Prior studies on the small router buffer size problem have largely ignored the presence of real-time UDP traffic, which is increasing in importance as a source of revenue for Internet service providers. In this thesis, we study the interaction that happens between real-time (open-loop) and TCP (closed-loop) traffic when they are multiplexed at buffers of small size (a few tens of packets). We use extensive simulations to explore the interaction between TCP and UDP with different packet sizes and various types of heterogeneity, as well as the impact of such interaction on the user-perceived end-to-end performance.

It must be mentioned that the arguments considered so far deal only with closed-loop TCP traffic, since nearly 90-95% of Internet traffic today is carried by TCP. All previous studies on buffer sizing have largely ignored the impact of open-loop (real-time) traffic, notably UDP. As real-time multimedia applications such as online gaming, audio-video services, IPTV, VoIP, etc., continue to become more prevalent in the Internet, increasing the fraction of Internet traffic that is UDP, it seems appropriate for the study of router buffer sizing to consider the presence of real-time traffic, and not ignore it completely [6].

There are several hardware reasons for wanting smaller buffers according to the Stanford team [5], who introduced the small buffer problem. First, the memory in a router line card has to be as fast as the line rate. This requirement complicates the design of high-speed routers, leading to higher power consumption, more board space, and lower density. Second, while switching speeds double every 18 months according to Moore's Law, memory access speeds double only every 10 years. Therefore memory requirements will increasingly become a limiting aspect of router design. Third, all-optical routers will perhaps be able to buffer 100 packets or so (using, e.g., fiber delay lines). If this amount of buffering is sufficient to get good performance, an all-optical packet-switched core network is feasible.

1.3

Thesis Outline

The rest of this thesis is organized as follows. Chapter 2 presents the details of the small buffer problem; there are various approaches to investigating it, and different articles discuss various solutions, many of which we touch on. In Chapter 3, the fundamentals of the architecture and topology used for the simulation study are discussed. Chapter 4 presents the simulation results, together with the performance metrics used and the constraints examined, illustrated with figures; it mainly presents small-buffer performance in different scenarios and provides numerical analysis based on the simulation results. In Chapter 5, the contributions of this thesis are summarized, conclusions are drawn, and future work is discussed. Appendix A lists the abbreviations used in this thesis and Appendix B presents the simulation data in tables.


Chapter 2

Background and Related Work

There are many approaches to solving, or at least discussing, the problem of small buffers in router architectures. The main ideas are discussed here.

2.1

Problem Statement

The buffer size in routers should be small. To make this feasible, we have to apply some constraints to the incoming traffic, such as making sure that the traffic is smooth enough for such a small buffer (Figure 2.1). In this case, we can use well-paced TCP (generally paced by slow access links) for the incoming traffic, or we can change the TCP and UDP packet sizes. So, our main purpose is to make sure that the buffer size of routers can be kept small, given that some constraints are applied to the traffic.


2.2

Advantages of Smaller Buffer in Routers

The commonly used rule-of-thumb requires that the packet buffers be of size RTT × C, where RTT is the round-trip time of a TCP flow and C is the line rate. As line rates have been growing exponentially, this rule requires core routers to have huge amounts of buffering, and it is very difficult to build large buffers that are fast enough for today's line rates. Besides simplifying existing router designs, smaller buffers would also allow for the creation of all-optical routers, which can theoretically switch packets much faster and more efficiently than current electronic routers [5]. Only very recently was it shown that the buffer size can be reduced by a factor of √N, where N is the number of flows through the buffering point, when the traffic is made sufficiently smooth [9]. The newer results suggest that when smooth Transmission Control Protocol (TCP) traffic goes through a single tiny buffer of size O(log W), where W is the maximum window size of the TCP flows, close-to-peak throughput can be achieved [10].

In [11] and [12], Wischik et al. describe recent work on buffer sizing for core Internet routers. This work suggests that the widely used rule of thumb leads to buffers that are much larger than needed. For example, the buffer in a backbone router could be reduced from 1,000,000 packets to 10,000 without loss in performance. It could be reduced even further, perhaps to 10 to 20 packets, at the cost of a small amount of bandwidth utilization. This trade-off is worth considering, for example, for a possible future all-optical router.

When there is a large disparity between access speeds and core router speeds, buffering is required primarily to absorb the small variance in the packet arrival rate, and the system is rarely congested. Therefore, small buffers are sufficient in these networks. However, as a network designer, it is important to study the impact of small buffers in networks that get congested very often. Typically, these are networks where there are no access speed limitations and each flow can potentially use a large fraction of the capacity of the link [13].

Using analysis and simulations, the authors in [14] show that there exists a certain continuous region of buffer sizes (typically in the range of about 8-25 packets) wherein the performance of real-time traffic degrades with increasing buffer size. This region is called an 'anomalous region' with respect to the real-time traffic. The anomaly has several practical implications. First, it underscores the belief that the study of router buffer sizing should not ignore the presence of real-time traffic. Second, in this regime of tiny buffers, it is prudent to size router buffers at a value that balances the performance of both TCP and UDP traffic appropriately. Operating the router buffers at a very small value can adversely impact the performance of both TCP and UDP traffic. Furthermore, operating them in the 'anomalous region' can result in increased UDP packet loss, with only a marginal improvement in end-to-end TCP throughput. Third, building an all-optical packet router and buffering packets in the optical domain is a rather complex and expensive operation.

2.3

Related Work

It is important to understand that the Stanford models [5] are only applicable when the target link is "saturable". A link is saturable when the offered load on the link is sufficiently high to saturate its capacity, given a sufficiently large buffer space B. Note that a link may be saturable only for limited periods of time (e.g., during the afternoon hours); in that case, the buffer space of the target link could be provisioned based on those peak-load periods. There are links, however, that can never be saturated due to constraints on the maximum rate of their input flows.

In their work, Barman et al. [15] study the effect of the IP router buffer size on the throughput of HighSpeed TCP (HSTCP). They are motivated by the fact that in high speed routers, the buffer size is important but a large buffer size might be a constraint. They first derive an analytical model for HighSpeed TCP and they show that for small buffer size equal to 10 percent of the bandwidth-delay product, HighSpeed TCP can achieve more than 90 percent of the bottleneck capacity. They also show that setting the buffer size equal to 20 percent can increase the utilization of HighSpeed TCP up to 98 percent. On the contrary, setting the buffer size to less than 10 percent of the bandwidth-delay product can decrease HighSpeed TCP’s throughput significantly. They also study the performance effects under both DropTail and RED AQM. Analytical results obtained using a fixed-point approach are compared to those obtained by simulation.


The use of small buffers in the core of future networks raises the question of how to ensure that traffic bursts do not lead to degradation in network performance due to packet losses. [16] proposes the use of an adaptive pacing system at the edge of these small-buffer networks. Their pacer is simple to implement due to its O(1) complexity. Their analysis shows that the delay introduced by the proposed pacing is bounded. They also show that the throughput that can be achieved for TCP using their pacing algorithm exceeds that of end-system based TCP pacing. Even for a small buffer size, their system can achieve near 100 percent link utilization in these networks. They believe this pacing system provides an important solution to the burstiness problem and makes it practical to deploy small buffer networks in the future Internet.

In another paper [17], the authors investigate how different FTP packet sizes affect the handover delay by using the packet spacing implementation in TCP Vegas [18]. The packet sizes used in the simulation are 128 bytes, 256 bytes, 512 bytes, and 1024 bytes. Since 1500 bytes is the maximum transmission unit (MTU) of a high-speed Ethernet LAN, the FTP packet size is varied within this boundary. The buffer size is set to the default value of 50 packets.

Recent models of networks with large buffers have suggested that these large buffers could be replaced with much smaller ones. Unfortunately, it turns out that these models are not valid anymore in networks with small buffers, and therefore cannot predict how these small-buffer networks will behave. In the paper [19], the authors introduce a new model that provides a complete statistical description of small-buffer Internet networks. First, they present novel models of the distributions of several network components, such as the line occupancies of each flow, the instantaneous arrival rates to the bottleneck queues, and the bottleneck queue sizes. Then, they combine all these models in a single fixed-point algorithm that forms the key to the global statistical small-buffer network model. In particular, given some QoS requirements, this new model can be used to precisely size small buffers in backbone router designs.

As workload requirements grow and connection bandwidths increase, the interaction between the congestion control protocol and small-buffer routers produces link utilizations that tend to zero. This is a simple consequence of the inverse-square-root dependence of TCP throughput on loss probability. The paper [20] presents a new congestion controller that avoids this problem by allowing a TCP connection to achieve arbitrarily large bandwidths without requiring the loss probability to go to zero. The authors show that this controller produces stable behavior and, through simulation, they show its performance to be superior to TCP NewReno in a variety of environments. Lastly, because of its advantages in high-bandwidth environments, they compare their controller's performance to some of the recently proposed high-performance versions of TCP, including HSTCP [21], STCP [22], and FAST [23]. In the context of the whole network, these core routers (in Area 0) can be seen in Figure 2.2. Simulations illustrate the superior performance of the proposed controller in a small buffer environment.

Figure 2.2: Positions of core routers in the network topology.

The paper [13] developed simple models to provide buffer sizing guidelines for today's high-speed routers. Their analysis points out that the core-to-access speed ratio is the key parameter which determines the buffer sizing guidelines. In particular, this parameter along with the buffer size determines the typical number of flows in the network. Thus, an important message in this paper is that the number of flows and buffer size should not be treated as independent parameters in deriving buffer sizing guidelines. Further, they also point out that link utilization is not a good measure of the congestion level at a router. In fact, they show that even at 98 percent utilization, the core router may contribute very little to the overall packet loss probability seen by a source if the core-to-access speed ratio is large.

2.4

FEC Approach

Internet traffic is expected to grow phenomenally over the next five to ten years, and to cope with such large traffic volumes, core networks are expected to scale to capacities of terabits-per-second and beyond. Increasing the role of optics for switching and transmission inside the core network seems to be the most promising way forward to accomplish this capacity scaling. Unfortunately, unlike electronic memory, it remains a formidable challenge to build integrated all-optical buffers that can hold even a few packets. In the context of envisioning a bufferless (or near-zero buffer) core network, the contribution of Vishwanath et al. [24] is threefold. First, they propose a novel edge-to-edge packet-level Forward Error Correction (FEC) framework as a means of combating high core losses, and investigate via analysis and simulation the appropriate FEC strength for a single core link. Second, they consider a realistic multi-hop network and develop an optimization framework that adjusts the FEC strength on a per-flow basis to ensure fairness between single- and multi-hop flows. Third, they study the efficacy of FEC for various system parameters such as the relative mix of short-lived and long-lived TCP flows and the average offered link load. Their study is the first to show that packet-level FEC, when tuned properly, can be very effective in mitigating high core losses, thus opening the door to a bufferless core network in the future.

2.5

Well-paced TCP

It is well understood from queuing theory that bursty traffic produces higher queuing delays, more packet losses, and lower throughput. At the same time, it has been observed that TCP's congestion control mechanisms can produce bursty traffic flows on high-bandwidth and highly multiplexed networks. Consequently, several researchers have proposed smoothing the behavior of TCP traffic by evenly spacing, or 'pacing', data transmissions across a round-trip time. Pacing is a hybrid between pure rate control and TCP's use of acknowledgments to trigger new data to be sent into the network [25].


2.5.1

(σ, ρ) Bound

Figure 2.3: Token bucket regulator.

Previously, it was observed that for (σ, ρ)-constrained traffic, the number of arrivals in a time interval T is at most ρT + σ. To generate (σ, ρ)-constrained traffic, a traffic shaper or traffic regulator may be used. A popular implementation of such a regulator is a token bucket; some simulation results from [1] are shown in Figure 2.4. A token bucket is a control mechanism that decides when traffic can be transmitted. Specifically, as depicted in Figure 2.3, a token bucket with token generation rate ρ and token bucket depth σ works as follows (a short code sketch follows the numbered steps):

1. The bucket can hold σ tokens and is initially full of tokens. In Figure 2.3 it is in the top-middle.

2. A token is added to the bucket every 1/ρ seconds. When a token arrives and the bucket is full, the token is discarded.

3. When a packet of length l bits arrives, if the number of tokens in the bucket is not smaller than l , then l tokens are removed from the bucket and the packet is immediately sent out of the token bucket.

4. When the packet arrives, if there are fewer than l tokens in the bucket, then the packet may either be dropped or queued until there are enough tokens in the bucket, in which case step (3) will be repeated [26].
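As a concrete illustration of these steps, the following is a minimal Python sketch of a token bucket regulator. The class name, the bit-granularity tokens, and the choice to simply reject (rather than queue) non-conforming packets are illustrative assumptions, not details taken from the cited regulator.

```python
class TokenBucket:
    """Minimal (sigma, rho) token-bucket regulator sketch.

    sigma : bucket depth (maximum burst size, in tokens)
    rho   : token generation rate (tokens per second)
    One token is assumed to correspond to one bit of packet data.
    """

    def __init__(self, sigma, rho):
        self.sigma = sigma
        self.rho = rho
        self.tokens = sigma      # bucket starts full (step 1)
        self.last = 0.0          # time of the last update

    def _refill(self, now):
        # Step 2: tokens accumulate at rate rho, capped at the depth sigma.
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now

    def admit(self, now, length_bits):
        """Return True if a packet of `length_bits` may be sent at time `now`."""
        self._refill(now)
        if self.tokens >= length_bits:   # step 3: enough tokens -> send immediately
            self.tokens -= length_bits
            return True
        return False                     # step 4: here we simply reject/defer


# Illustrative use: 1 Mbit/s average rate with an 8 kbit burst allowance.
tb = TokenBucket(sigma=8_000, rho=1_000_000)
print(tb.admit(0.000, 12_000))   # False: burst larger than the bucket depth
print(tb.admit(0.010, 4_000))    # True: within the accumulated tokens
```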


Figure 2.4: Token bucket regulator simulation result by [1].

2.5.2

Traffic Shaping

The Differentiated Services architecture [27] provides scalable Quality-of-Service by aggregating flows into a small number of traffic classes. Among these classes a Premium Service is defined, for which end-to-end delay guarantees are of particular interest. However, in aggregate scheduling networks such delay bounds suffer significantly from effects that are due to the multiplexing of flows into aggregates. A way to minimize the impact of interfering flows is to shape incoming traffic so that bursts are smoothed. Doing so reduces the queuing delay within the core of a domain, at the cost of an additional shaping delay introduced at the edge.

The paper [27] addressed the impacts of traffic shaping on aggregate scheduling networks. The authors applied the notion of dual leaky bucket constrained arrival curves to extend the analytical framework of Network Calculus to cover relevant traffic shaping issues. A general per-flow service curve was derived for FIFO aggregate scheduling rate-latency service elements. This equation was solved for the special case of dual leaky bucket constrained flows, and dual leaky bucket output constraints were derived.

2.6

IRIS Router Architecture

In [2], results suggest that the rate control framework can enable the use of routers with bare-minimum buffering. In particular, the authors evaluated the all-optical IRIS router architecture using traffic and topological models representative of the Internet backbone. Without the rate control framework in place, the routers sustained substantial losses, even at low load conditions, making them impractical for use in real-world networks. This motivated the need for network-level modifications to overcome the limitations imposed by limited buffering and the high loss rate. The rate control framework was evaluated and shown to offer substantially better performance (Figure 2.5).

Figure 2.5: Rate control operations for IRIS router [2]

The summary of the key findings from this reference is as follows: (i) With as little as eight cells of buffering at the VOQs and output FIFOs, IRIS routers can operate loss-free at utilizations as high as 90 percent. (ii) Even while the links are operating at peak load conditions, the losses suffered have a negligible effect on the aggregate throughput of end-to-end TCP flows and cause little degradation in QoS for the VBR flows. (iii) Losses at the second-stage VOQs are extremely sensitive to buffer depth, where an increase of just a few cells can eliminate all losses. (iv) The output FIFOs represent the primary congestion point where losses occur; marginally deeper FIFOs can remove a significant fraction of the losses, but significantly larger FIFOs are necessary to eliminate all losses. (v) Fortunately, losses at the FIFOs are sensitive to link utilization levels; therefore, network operators can trade off some bandwidth for improved QoS. (vi) Shaped traffic does not exhibit burstiness as it passes through multiple IRIS routers; results show that arrivals become uncorrelated and in fact exhibit negative correlation at small lags. (vii) Traffic shaping performed by the rate control framework also serves to significantly limit the effects of out-of-order arrivals. Furthermore, the amount of spacing necessary to eliminate cell displacement altogether may be reasonably small given the small VOQ depths within IRIS routers.


2.7

Adaptive Queue Management

Queue management mechanisms, namely Droptail and various AQM (Active Queue Management) schemes [28], are employed by routers to avoid a serious throughput decrease in the case of mild congestion. The AQM mechanisms allow the target queue length and packet drop probability to be adjusted dynamically and thus outperform the Droptail FIFO mechanism under light and moderate congestion. However, their performance may vary significantly in different network scenarios. The use of both Droptail and AQM schemes raises many new issues, one of which is sizing the buffers.

2.8

ECN and RED with Small Buffer

In this section, the impact of Random Early Detection (RED) and Explicit Congestion Notification (ECN) is discussed.

2.8.1

Random Early Detection (RED)

Random Early Detection (RED) [4] is the recommended active queue management scheme for rapid deployment throughout the Internet. As a result, there have been considerable research efforts in studying the performance of RED. However, previous studies have often focused on relatively homogeneous environments, and the effects of RED in a heterogeneous environment are not thoroughly understood. In the paper [29], the authors use extensive simulations to explore the interaction between RED and various types of heterogeneity, as well as the impact of such interaction on the user-perceived end-to-end performance. Their results show that, overall, RED improves performance at least for the types of heterogeneity they have considered. They address the limitations of earlier work by conducting extensive simulations to explore the effects of RED (with the 'gentle' modification) on the user-perceived end-to-end performance in a heterogeneous environment.

Overall, Le et al. [30] conclude that AQM can improve application and network performance for Web or Web-like workloads. If arbitrarily high loads on a network are possible, then the control-theoretic designs PI and REM give the best performance, but only when deployed with ECN-capable end systems and routers; in this case the performance improvement at high loads may be substantial. Whether or not the improvement in response times with AQM is significant (when compared to drop-tail FIFO) depends heavily on the range of round-trip times (RTTs) experienced by the flows. As the variation in the flows' RTTs increases, the impact of AQM and ECN on response-time performance is reduced. If network saturation is not a concern, then ARED in byte-mode, without ECN, gives the best performance. Combined, these results suggest that with the appropriate choice of AQM, providers may be able to operate links dominated by Web traffic at load levels as high as 90% of link capacity without significant degradation in application or network performance.

2.8.2

Explicit Congestion Notification (ECN)

Since the Explicit Congestion Notification (ECN) [31] scheme uses RED gateways for explicit congestion notification, we describe the RED scheme first. A RED gateway maintains two queue thresholds, min and max, and continuously updates the average queue size. When the number of packets is less than min, no action is taken. When the number of packets is greater than max, every arriving packet is marked or dropped. When the number of packets is between min and max, the RED gateway calculates a probability that the packet should be dropped or marked, proportional to the connection's share of the gateway's bandwidth. In addition, if the gateway measures the queue size in bytes rather than packets, the probability that a packet is marked is also proportional to the packet size in bytes. Finally, if the gateway's output queue has been empty for some period of time, the algorithm estimates how many packets could have been transmitted during that time and updates the average queue size accordingly [32].
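The following Python sketch illustrates the marking decision just described. It is a simplified sketch only: the parameter names (min_th, max_th, max_p, w_q) follow the usual RED description and are assumptions here, and refinements such as the count-based probability adjustment and the idle-period compensation mentioned above are omitted.

```python
import random

def red_mark(avg, min_th, max_th, max_p):
    """Simplified RED marking/drop decision for one arriving packet.

    avg    : current EWMA of the queue length
    min_th : below this average, never mark
    max_th : at or above this average, always mark/drop
    max_p  : marking probability as avg approaches max_th
    """
    if avg < min_th:
        return False                       # queue short: accept the packet
    if avg >= max_th:
        return True                        # queue long: mark/drop every arrival
    # Between the thresholds the probability grows linearly with avg.
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    return random.random() < p_b


def update_avg(avg, q_len, w_q=0.002):
    """EWMA update of the average queue length on each packet arrival."""
    return (1.0 - w_q) * avg + w_q * q_len
```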

The goal of the ECN scheme is to define the response of the sender's transport layer to the receipt of a congestion notification from a RED router. When the ECN scheme is active, the RED gateway is configured to mark rather than drop packets. On one hand, the ECN scheme described in [32] uses a single message as an indication of network congestion. On the other hand, the scheme tries to make sure TCP does not respond too frequently, by reacting to congestion notification at most once per round-trip time (this includes triple-acks). Following receipt of a packet with the ECN bit set, the sender halves the congestion window and the slow-start threshold. The protocol does not halve those parameters again in response to a triple-ack or another packet with the ECN bit set until all packets outstanding at the time of the response to ECN have been acknowledged [32].


In another paper [33], the authors developed a model to analyze the performance of ECN mechanism in RED gateways. Their main contribution is that they derive approximate expressions for the maximum buffer size requirement and the maximum threshold of a RED gateway to minimize packet loss. The significance of their study is that the buffer size, and consequently the queuing delay, could be much smaller than what has been proposed by previous researchers.

Le et al. [30] present an empirical study of the effects of active queue management (AQM) and explicit congestion notification (ECN) on the distribution of response times experienced by users browsing the Web. Three prominent AQM designs are considered: the Proportional Integral (PI) controller, the Random Exponential Marking (REM) controller, and Adaptive Random Early Detection (ARED). The effects of these AQM designs were studied with and without ECN. Their primary measure of performance is the end-to-end response time for HTTP request-response exchanges. Their major results are as follows.

1. If ECN is not supported, ARED operating in byte-mode was the best performing design, providing better response time performance than drop-tail queuing at offered loads above 90% of link capacity. However, ARED operating in packet-mode (with or without ECN) was the worst performing design, performing worse than drop-tail queuing.

2. ECN support is beneficial to PI and REM. With ECN, PI and REM were the best performing designs, providing significant improvement over ARED operating in byte-mode. In the case of REM, the benefit of ECN was dramatic. Without ECN, response time performance with REM was worse than drop-tail queuing at all loads considered.

3. ECN was not beneficial to ARED. Under current ECN implementation guidelines, ECN had no effect on ARED performance. However, ARED performance with ECN improved significantly after reversing a guideline that was intended to police unresponsive flows. Overall, the best ARED performance was achieved without ECN.

4. Whether or not the improvement in response times with AQM is significant depends heavily on the range of round-trip times (RTTs) experienced by flows. As the variation in the flows' RTTs increases, the impact of AQM and ECN on response-time performance is reduced.

The authors conclude that AQM can improve application and network performance for Web or Web-like workloads. In particular, it appears likely that with AQM and ECN, provider links may be operated at near-saturation levels without significant degradation in user-perceived performance.

2.9

Impact of packet sizes

Traditionally, packet sizes in the Internet have been determined by the requirements of the link layers used, such as Ethernet. This has resulted in 1500 bytes becoming the de facto maximum packet size. Several applications, such as VoIP applications, however, use a stream of small packets to minimize the packetization delay. On the other hand, applications transmitting elastic data using TCP could easily use packet sizes other than the commonly used 1500 bytes. However, there is not much information available in the literature on the impact of different packet sizes on TCP fairness and performance [3].

The importance of packet size to the anomalous loss performance also has an implication for TCP ACK packets, which are typically 40 bytes long. Reference [7] therefore undertook a simulation study of whether TCP ACK packets also exhibit similar anomalous behavior. They simulated 1000 bidirectional TCP flows (without UDP) on the dumbbell topology and recorded the ACK packet drops at routers. The simulation parameters are identical to the setup described earlier. They plot the ACK packet loss probability as a function of core-link buffer size. Clearly, ACK packets also suffer from the anomaly and indeed match well with the analytical estimate.

Wireless local area networks (WLANs) support a wide range of applications with various packet sizes. This diversity is set to increase in 802.11e WLANs, which effectively allow very large packets controlled by a transmission opportunity (TxOP) parameter. This paper [34] demonstrates a new phenomenon which occurs as a result of this diversity: when a network carries some large packets and many small packets, the collision probability after a large packet is much larger than predicted by previous models. This can be important because the collision probability determines the number of packet transmissions, and hence the energy consumption.

(30)

In another paper [17], the authors present the results of a simulation experiment on a wireless IPv6 network implementing TCP Vegas with the packet spacing adaptation. Packet spacing improves the performance of TCP Vegas: the burstiness of the traffic at the bottleneck link is minimized by evenly spacing out the traffic. In the simulation experiment, different FTP packet sizes are sent over the wireless IPv6 environment. The simulation results show that among the different packet sizes, 512 bytes is the most suitable in terms of low loss rate, low delay, and high bottleneck link utilization. Thus, they propose that a packet size of 512 bytes be used when sending FTP traffic over the wireless IPv6 network.

(31)

Chapter 3

Architecture of the Topology

In this chapter, the architecture of the small buffer model with different constraints is discussed. Figure 3.1 shows the basic scenario of the small buffer problem. There are senders S1, S2, ..., SN and receivers R1, R2, ..., RN. These senders and receivers are connected through access links to the bottleneck links. The input router on the bottleneck link has the input and output queues, which are discussed next.

Figure 3.1: General network topology for small buffer model with access and bottle-neck links.

(32)

3.1

Input and Output Queues in a Router

Input queues absorb transient forwarding-subsystem saturation, and output queues hold bursts of packets directed to one interface. Generally, queues hold a given number of packets (not bytes). Figure 3.2 shows the position of the input and output queues.

Figure 3.2: Input and output queues on switching architecture for routers.

A widely used rule-of-thumb states that, because of the dynamics of TCP's congestion control mechanism, a router needs a bandwidth-delay product of buffering, B = RTT × C, in order to fully utilize bottleneck links. Here, C is the capacity of the bottleneck link, B is the size of the buffer in the bottleneck router, and RTT is the average round-trip propagation delay of a TCP flow through the bottleneck link. Recently, Appenzeller et al. proposed using the rule B = (RTT × C)/√N instead, where N is the number of flows through the bottleneck link [3]. In a backbone network today, N is often in the thousands or tens of thousands, so the sizing rule B = (RTT × C)/√N results in significantly smaller buffers [5].
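To make the two rules concrete, the short Python sketch below evaluates them for one illustrative parameter combination (a 40 Gb/s link, 250 ms RTT, and 10,000 flows, chosen only so that RTT × C matches the 1.25 GB and 12.5 MB figures quoted later in Section 3.6); the numbers are examples, not measurements from this thesis.

```python
from math import sqrt

def rule_of_thumb(rtt_s, capacity_bps):
    """Classic buffer rule: B = RTT x C (result in bytes)."""
    return rtt_s * capacity_bps / 8.0

def stanford_rule(rtt_s, capacity_bps, n_flows):
    """Appenzeller et al.: B = RTT x C / sqrt(N) (result in bytes)."""
    return rule_of_thumb(rtt_s, capacity_bps) / sqrt(n_flows)

# Illustrative parameters only.
rtt, cap, n = 0.250, 40e9, 10_000
print(f"rule of thumb : {rule_of_thumb(rtt, cap) / 1e9:.2f} GB")     # 1.25 GB
print(f"RTT*C/sqrt(N) : {stanford_rule(rtt, cap, n) / 1e6:.2f} MB")  # 12.50 MB
```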

3.2

TCP and UDP Packet Arrivals

Reference [35] shows that the bottleneck link buffers have a large influence on the aggregate TCP arrival process: large buffers can induce synchronization among TCP flows, thus creating significant burstiness, but as buffers become smaller, the TCP aggregate can be well approximated by a Poisson process. Since the buffers in our case are generally small, we assume that the incoming TCP packet arrivals are Poisson.
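As a small illustration of this assumption, the sketch below generates Poisson packet arrivals by drawing exponential inter-arrival times; the rate and duration values are arbitrary examples, not parameters from our simulations.

```python
import random

def poisson_arrivals(rate_pps, duration_s, seed=1):
    """Arrival times of a Poisson process with `rate_pps` packets per second,
    generated over `duration_s` seconds via exponential inter-arrival gaps."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_pps)   # exponential inter-arrival time
        if t > duration_s:
            return times
        times.append(t)

arrivals = poisson_arrivals(rate_pps=1000, duration_s=1.0)
print(len(arrivals))   # close to 1000 packets for this rate and duration
```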

(33)

3.3

Simple Network topology with Packet Sizes

Our objective is to model a large network with small buffers. We will first formally reduce the problem to a simpler dumbbell topology problem. The large network model can be decomposed into subnetworks with a single bottleneck buffer in each subnetwork, each subnetwork being modeled using a dumbbell topology around its bottleneck buffer.

Figure 3.3: The leading edge of a burst is sharper with smaller packet size, thus filling up the buffer faster than with larger packets [3].

In [3] a simple, approximate model was developed to characterize this behavior. Assume an input link with rate r_in and an output link with rate r_out. In between, there is a router with a buffer of C packets. The incoming traffic consists of bursts of packets of size p bytes, with a total burst size of B bytes. After a burst, there is sufficient time for the buffer to become empty (Figure 3.3). Consider a single burst, starting at time t = 0. The buffer starts to fill, and becomes full at time

t_full = p·C / (r_in − r_out).   (3.1)

The burst length in time is t_burst = B / r_in. The approximate packet loss during the burst is

l = ((t_burst − t_full) / t_burst) · ((r_in − r_out) / r_in) if t_full < t_burst, and l = 0 otherwise.   (3.2)

Let t_0 denote the time at which the buffer starts emptying, and t_s the start of the next burst. After a few mathematical derivations in [3], the free capacity of the buffer at time t_s is b = (t_s − t_0)·r_out / p packets, and the time at which the buffer again reaches full capacity is

t_refilled = t_s + p·b / (r_in − r_out) = t_s + (t_s − t_0) · r_out / (r_in − r_out).   (3.3)

From this, [3] concludes that if the bursts follow each other within very short periods of time, the packet size difference affects packet loss only for the first burst, and for a long series of such bursts the packet losses approach the same value.
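A direct transcription of equations (3.1) and (3.2) into Python is given below as a sketch; the link rates, burst size, and 50-packet buffer used in the example run are illustrative values, not parameters taken from the simulations in this thesis.

```python
def burst_loss(r_in, r_out, buf_pkts, pkt_bytes, burst_bytes):
    """Approximate loss fraction for a single burst, per equations (3.1)-(3.2).

    r_in, r_out : input and output link rates in bytes/s
    buf_pkts    : buffer size C in packets
    pkt_bytes   : packet size p in bytes
    burst_bytes : total burst size B in bytes
    """
    t_full = pkt_bytes * buf_pkts / (r_in - r_out)   # time until the buffer is full
    t_burst = burst_bytes / r_in                     # duration of the burst
    if t_full >= t_burst:
        return 0.0                                   # burst ends before overflow
    return (t_burst - t_full) / t_burst * (r_in - r_out) / r_in


# Buffer fixed at 50 packets (as in [3]): smaller packets fill it sooner,
# so the approximate loss during a burst is higher (roughly 0.89 vs 0.83 here).
for p in (200, 1500):
    loss = burst_loss(r_in=125e6, r_out=12.5e6, buf_pkts=50,
                      pkt_bytes=p, burst_bytes=1_000_000)
    print(p, round(loss, 3))
```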

In another work, Shifrin et al. [19] compared their bursty model against a fluid model in which packets are distributed uniformly over all the links. According to this fluid model, the number of packets present on access link i at time t is L_i(t) = (T_i / RTT_i) × W_i(t). The maximum number of packets on the access link is therefore bounded by (T_i / RTT_i) × W_max.

3.3.1

Markov Model for Mixed TCP and UDP Traffic

In [7] a Markov model (shown in Figure 3.4) was developed to discuss the characteristics of mixed TCP and UDP traffic. The authors make several modeling claims: TCP and UDP packet arrivals are Poisson, UDP packets are generally smaller than TCP packets, and the aggregate TCP rate increases exponentially with the bottleneck-link buffer size. They refine the M/M/1/B model by relaxing the assumption that packet sizes are exponentially distributed. It has recently been observed that Internet packet sizes have a bimodal distribution ([36] and [37]), with peaks at large packets (1500-byte TCP data) and small packets (typically 40-byte TCP ACKs); real-time and other streams generate intermediate packet sizes (200 to 500 bytes). To develop a model that is tractable yet reflective of these dominant modes, they employ an M/D/1/B model in which packet sizes are bimodal.
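For reference, the loss (blocking) probability of the basic M/M/1/B model that [7] starts from has a standard closed form; the Python sketch below evaluates it and is only meant to show this baseline before the bimodal M/D/1/B refinement described above. The example load and buffer sizes are illustrative.

```python
def mm1b_loss(rho, B):
    """Blocking probability of an M/M/1/B queue with offered load rho,
    where B is the system capacity in packets (queue plus the one in service)."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (B + 1)
    return (1.0 - rho) * rho**B / (1.0 - rho**(B + 1))

# Loss drops quickly with buffer size at 90% offered load.
for B in (5, 10, 20, 50):
    print(B, f"{mm1b_loss(0.9, B):.4f}")
```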


3.4

Complex Network Topology

Figure 3.5 shows a parking-lot [8] model, a classical multiple-link scenario. Almost all of the models discussed so far consider the small buffer model on a dumbbell topology; here we also discuss the small buffer scenario for the parking-lot model to see whether there are any performance differences from the dumbbell topology.

Figure 3.5: Parking lot model.

By using the parking-lot network topology, we intend to test TCP and UDP performance under a scenario with multiple bottlenecks and multiple queue sizes. Also, after the dumbbell topology, this is the network topology most commonly used by authors for fairness tests.


3.4.1

Linear Parking-lot Network Model

Figure 3.6: Linear parking-lot network model.

A linear parking-lot network [8] is one of the most used network topologies in fairness studies. A linear parking-lot network, depicted in Figure 3.6, consists of L = n links with capacities C_i and K = n + 1 flow classes, where class 0 flows traverse all the links and class k flows traverse only link k, for k = 1, . . . , n.

3.4.2

n-link Parking-lot Network Model

Figure 3.7: n-link parking-lot network model.

An n-link parking-lot network [8] configuration is illustrated in Figure 3.7. An n-link parking-lot network is a special case of a tree network topology. In an n-link parking-lot network, there are L = n links with capacities C_1 ≤ . . . ≤ C_n and K = n flow classes.


3.5

Droptail and Random Early Detection (RED)

The main result using the drop-tail scheme is that while aggregate throughput (link utilization) is largely independent of the router architecture, the buffer size, and the offered load, other metrics such as loss and delay are much more sensitive. Some results also indicate that metrics such as throughput, delay, and loss on a per-flow basis can show a high degree of dependency on the buffer size and the offered load. This sheds further light on some of the concerns raised earlier regarding why link utilization is not a very useful metric when sizing router buffers. RED performs better than drop-tail in terms of throughput and delay, both from an aggregate and a per-flow point of view. The packet drop probability under the RED algorithm is shown in Figure 3.8, and the algorithm used to determine packet drops in Random Early Detection is shown in Figure 3.9.

Figure 3.8: Random Early Detection packet drop probability.

For offered loads up to 80% of bottleneck link capacity, no AQM scheme provides better response time performance than the simple drop-tail FIFO queue management. Further, the response times achieved on a 100 Mbps link are not substantially different from the response times on a 1 Gbps link with the same number of active users that generate this load. This result is not changed by combining any of the AQM schemes with ECN [30]. The maximum and minimum thresholds along with the average length of the queue are shown in figure 3.10.


Figure 3.9: Random Early Detection algorithm [4].

Figure 3.10: Random Early Detection gateways.

An alleged weakness of Random Early Detection (RED) is that it does not take into consideration the number of flows sharing a bottleneck link. Given TCP's congestion control mechanism, a packet mark or drop reduces the offered load by a factor of (1 − 0.5/n), where n is the number of flows sharing the bottleneck link. Thus, RED is not effective in controlling the queue length when n is large. On the other hand, RED can be too aggressive and can cause under-utilization of the link when n is small. So RED needs to be tuned for the dynamic characteristics of the aggregate traffic on a given link. The reference [30] proposed a self-configuring algorithm for RED that adjusts the maximum drop probability (max_p) every time the average queue length falls out of the target range between the minimum threshold (min_th) and the maximum threshold (max_th). When the average queue length is smaller than min_th, max_p is decreased multiplicatively to reduce RED's aggressiveness in marking or dropping packets; when the queue length is larger than max_th, max_p is increased multiplicatively. Floyd et al. [4] improved upon this original adaptive RED proposal by replacing the MIMD (multiplicative increase, multiplicative decrease) approach with an AIMD (additive increase, multiplicative decrease) adaptation of max_p.
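The sketch below captures only the MIMD adjustment rule described in this paragraph; the factor values and the clamping range are illustrative assumptions, not the constants used in the cited proposals.

```python
def adapt_max_p(max_p, avg, min_th, max_th, alpha=3.0, beta=2.0):
    """MIMD-style adjustment of RED's max_p (illustrative factors only).

    If the average queue stays below min_th, RED is being too aggressive,
    so max_p is decreased multiplicatively; above max_th it is too timid,
    so max_p is increased. Within the target band max_p is left unchanged.
    """
    if avg < min_th:
        max_p /= alpha
    elif avg > max_th:
        max_p *= beta
    return min(max(max_p, 1e-4), 0.5)   # keep max_p within a sane range
```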

3.6

Proposed Router Buffer Sizes

Architectures for different sizes of router buffers are discussed below based on the link utilization.

3.6.1

Near-100% Utilization (MegaByte Buffers)

Researchers from Stanford University [5] showed in 2004 that when a large number N of long-lived TCP flows share a bottleneck link in the core of the Internet, the absence of synchrony among the flows permits a central-limit approximation of the buffer occupancy. This result assumes that there is a sufficiently large number of TCP flows that they are asynchronous and independent of each other. In addition, it assumes that the buffer size is largely governed by long-lived TCP flows only. Thus, if this result holds true, a core router carrying 10,000 TCP flows needs only 12.5 MB of buffering instead of the 1.25 GB prescribed by the earlier rule-of-thumb.

3.6.2

80-90% Utilization (KiloByte Buffers)

More recently, using control theory, differential equations, and extensive simulation, different papers have argued in favor of further reducing the buffer size and recommend that as few as 20-50 packets of buffering suffice at core routers for TCP traffic to realize acceptable link capacities. This model has been referred to in the literature as the tiny-buffer model [6]. The use of this model, however, comes with a trade-off: reducing buffers to only a few dozen kilobytes can lead to a 10-20% drop in link utilization. The model relies on the fact that TCP flows are not synchronized and network traffic is not bursty. Such a traffic scenario can arise in two ways. First, since core links operate at much higher speeds than access links, packets from the source node are automatically spread out and bursts are broken. Second, the TCP stack running at the end-hosts can be altered so that it spaces out packet transmissions (also called TCP pacing). The slight drop in link utilization resulting from the tiny-buffer model seems worthwhile since core links are typically over-provisioned, and it pays to sacrifice a bit of link capacity if this permits a move to either an all-optical packet switch or a more efficient electronic router design.

The main conclusions from [6] are as follows. First, at the core of the Internet where there are a large number of TCP flows at any given time, buffers can be safely reduced by a factor of ten without affecting the network performance. Second, care should be exercised when directly employing the small-buffer model since it may not hold in all parts of the network, particularly on the access side. Third, the use of tiny buffers is justifiable in a future all-optical network, since bandwidth will be abundant, but technological challenges limit the buffer size to a few dozen packets. Thus, the 10-20% reduction in link utilization may be acceptable.

According to the Stanford research team [5], the most obvious way to measure the impact of buffer size is to see how it affects bandwidth utilization. The naive view is that the larger the buffer, the higher the utilization. There are two problems with this. First, utilization is not necessarily the right metric. It is a useful metric when capacity is scarce or expensive and the network operator wants to be sure that all the capacity can be used. But core networks today are run significantly below 100% utilization, and the need for higher utilization is not nearly as strong as it once was. Other quality-of-service metrics like latency and jitter must be considered, and these will definitely be improved with smaller buffers. Furthermore, small buffers may enable cheaper and faster routers, so even if utilization is lower, throughput may still be higher. Second, it is not always the case that larger buffers give higher utilization. This paradox arises because of synchronization: if TCP flows are synchronized then they need a large buffer to get high utilization, whereas if they are desynchronized then much smaller buffers suffice.


Chapter 4

Performance Evaluation, Analysis and Comparisons

In this chapter, the simulation environment and performance metrics are discussed and an analysis of the simulation data is presented. The differences between the various approaches to minimize the buffer size are also a focus of this chapter.

4.1 Simulation Environment

In our simulations, we choose the network simulator ns-2 [38]. The ns-2 network simulator is a widely used platform for networking researchers. It is large and cumbersome, yet very useful and a good starting point for many projects [39]. Here are some of the reasons to take ns-2 as the simulation tool for this architecture:

1. Models for hosts, links, routers, and buffers.
2. Detailed protocol models for TCP.
3. Usable for configuring topology.
4. Usable for configuring network traffic.
5. Tracing and visualization support.


4.2 Performance Metrics

The main performance metrics that we focus on for our simulation results are:

1. UDP percent of packet losses.
2. TCP percent of packet losses.
3. UDP throughput of the bottleneck link.
4. TCP throughput of the bottleneck link.
5. Link utilization of the bottleneck link.

In most of our cases, we have some TCP as well as UDP sources in our simulations. The main performance metric for the UDP traffic through the small buffer is the number of UDP packet losses. To calculate the performance metrics, we used awk scripts to find the total number of UDP packets going through the link and the total number of packets discarded from this link. Then we took the percentage of these discarded packets.

\[ \text{Percent of UDP packet loss} = \frac{\text{Number of UDP lost packets}}{\text{Total number of UDP packets}} \times 100\% \]

In the case of TCP traffic, we also calculated the percent of TCP packets lost. To find this, we calculated the total number of TCP packets going through the link and the total number of packets discarded from this link. Then we took the percentage of these discarded packets.

\[ \text{Percent of TCP packet loss} = \frac{\text{Number of TCP lost packets}}{\text{Total number of TCP packets}} \times 100\% \]

We have also calculated the UDP/TCP throughput for the bottleneck link. To measure this, we calculated the total number of UDP/TCP packets going through the bottleneck link and the total number of UDP/TCP packets discarded from this link. We multiply the difference by the packet size and the number of bits in a byte, and divide by the measurement duration, so the result is in bits per second (bps).


\[ \text{Throughput (bps)} = \frac{(\text{Pkts sent} - \text{Pkts lost}) \times \text{pkt size} \times 8}{\text{duration}} \]

Link utilization is calculated from throughput as follows:

\[ \text{Link utilization} = \frac{\text{Throughput}}{\text{Link capacity}} \times 100\% \]
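The awk post-processing described above can equivalently be expressed in Python. The sketch below assumes the standard ns-2 wired trace format (event, time, from-node, to-node, packet type, size, ...); the file name, node ids and packet type in the usage line are placeholders, not the actual thesis scripts.

```python
def link_stats(trace_file, from_node, to_node, pkt_type, duration_s):
    """Loss percentage and throughput (bps) for one packet type on one link,
    parsed from an ns-2 trace line of the form:
        event time from to type size flags fid src dst seq pktid
    '+' events count packets offered to the link's queue, 'd' events count
    packets dropped at that queue."""
    sent = lost = 0
    pkt_size = 0
    with open(trace_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue
            event, _, src, dst, ptype, size = fields[:6]
            if src == from_node and dst == to_node and ptype == pkt_type:
                if event == '+':
                    sent += 1
                    pkt_size = int(size)
                elif event == 'd':
                    lost += 1
    loss_pct = 100.0 * lost / sent if sent else 0.0
    throughput_bps = (sent - lost) * pkt_size * 8 / duration_s
    return loss_pct, throughput_bps

# e.g. CBR (UDP) statistics on a bottleneck link between nodes 2 and 3
# over a 100 s run (placeholders):
# print(link_stats("out.tr", "2", "3", "cbr", 100.0))
```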

4.3 Different Parameters Used in the Simulations

The parameters for the traffic generators and the RED algorithm that are used in our study are shown below.

4.3.1 RED Parameters Used in the Simulations

The default RED parameters shown in Table 4.1 are one set of parameters from one of Sally Floyd's simulations (this is from tcpsim, the older simulator). Note that these default values have been overwritten where it was necessary to fix the buffer sizes and packet sizes.

Parameter   Value
q-weight    0.002
thresh      5
maxthresh   15
dropmech    random-drop
plot-file   none
bytes       false
doubleq     false
dqthresh    50
wait        true

Table 4.1: RED parameters and values

4.3.2 CBR Parameters Used in the Simulations


Parameter                                                    Value
Sending rate in bps                                          360 × 10^3
Random time added to the inter-burst transmission interval   0 (false)
Maximum number of payload packets that CBR can send          167

Table 4.2: CBR traffic generator parameters and values
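For a fixed sending rate, the inter-packet gap of a constant bit rate source grows linearly with the packet size, which is why the packet-size sweeps in the following sections also change the traffic's burst structure. The small sketch below illustrates this relationship; the function name is ours and the packet sizes are examples only.

```python
def cbr_interval_s(rate_bps: float, pkt_size_bytes: int) -> float:
    """Time between consecutive packets of a constant bit rate source."""
    return pkt_size_bytes * 8 / rate_bps

# At the 360 kb/s rate of Table 4.2:
for size in (200, 500, 1000, 1400):
    print(size, "bytes ->", round(cbr_interval_s(360e3, size) * 1e3, 2), "ms")
```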

4.3.3 VBR Parameters Used in the Simulations

In the simulation, the parameters for the Variable Bit Rate (VBR) sources are shown in Table 4.3. Note that the average data rate of this VBR source is the same as that of the CBR source mentioned in Table 4.2, since the source is on for 150 ms out of every 250 ms on average: 600 × 10^3 × 150/250 = 360 × 10^3 bps. A short check of this appears below the table.

Parameter                                        Value
Sending rate in bps during an ON period (rate)   600 × 10^3
Average ON period (burst time)                   150 ms
Average OFF period (idle time)                   100 ms

Table 4.3: VBR traffic generator parameters and values
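The average-rate claim follows directly from weighting the peak rate by the fraction of time the source is on; the sketch below verifies it with the Table 4.3 values.

```python
def onoff_mean_rate_bps(peak_rate_bps, burst_s, idle_s):
    """Long-run average rate of an on/off (VBR) source: the peak rate
    weighted by the fraction of time the source spends in the ON state."""
    return peak_rate_bps * burst_s / (burst_s + idle_s)

# 600 kb/s on for 150 ms, off for 100 ms -> 360000.0 bps, matching the CBR source
print(onoff_mean_rate_bps(600e3, 0.150, 0.100))
```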

4.4 Simulation Scenarios

In our simulations, we use various network models to study the performance of a small buffer router. The buffer sizes used in the simulations are 50 Kilobytes, 20 Kilobytes and 7 Kilobytes. All these simulations are performed with mixed TCP and UDP traffic. The simulations are also performed with various packet sizes for both TCP and UDP to study the impact of packet and buffer sizes on the system performance. The network models studied are:

1. Dumbbell topology with TCP and Constant Bit Rate (CBR) UDP traffic.
2. Dumbbell topology with TCP and Variable Bit Rate (VBR) UDP traffic.
3. Linear parking-lot model with TCP and Constant Bit Rate (CBR) UDP traffic.
4. Linear parking-lot model with TCP and Variable Bit Rate (VBR) UDP traffic.


5. n-link parking-lot model with TCP and Constant Bit Rate (CBR) UDP traffic.
6. n-link parking-lot model with TCP and Variable Bit Rate (VBR) UDP traffic.
7. 12 TCP sources (each having 100 flows) and 1 UDP (VBR) source to study link delays.
8. n-link parking-lot model to compare link delays.

For each of these scenarios, 4 simulation runs are performed with different random seeds; due to time constraints, no additional runs were done for the models discussed above. For each data set, the differences among the runs with different seeds are not statistically significant, as they fall within a 95% confidence interval.
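For reference, the 95% confidence interval for the mean of 4 runs can be computed with the Student-t critical value for 3 degrees of freedom, as sketched below. The sample numbers in the example are placeholders, not measured thesis data.

```python
import math
import statistics

def ci95_of_mean(samples):
    """Two-sided 95% confidence interval of the mean for exactly 4 samples,
    using the Student-t critical value t(0.975, df=3) = 3.182."""
    assert len(samples) == 4, "critical value below is hard-coded for 4 runs"
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    half = 3.182 * sem
    return mean - half, mean + half

# e.g. UDP loss percentages from 4 seeds (placeholder numbers):
print(ci95_of_mean([4.1, 3.8, 4.3, 4.0]))
```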

4.5 Single Bottleneck: Dumbbell Topology

To study the performance of the basic small buffer scenario, the simple dumbbell network scenario shown in Figure 4.1 is chosen for the simulations. One UDP source and two TCP sources have been deployed. The bottleneck link has 1.5 Mbps bandwidth and 5 ms of delay (a 10 ms round-trip time). So, in this case the bandwidth-delay product (BDP) is $\frac{1.5 \times 10^6 \times 0.01}{8} = 1875$ bytes.
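The bandwidth-delay product follows directly from the link parameters; a one-line check, assuming as above that the 0.01 s figure is the 10 ms round-trip time:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product of a link, in bytes."""
    return bandwidth_bps * rtt_s / 8

# 1.5 Mbps bottleneck, 10 ms round trip (2 x 5 ms one-way delay) -> 1875.0 bytes
print(bdp_bytes(1.5e6, 0.010))
```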

Figure 4.1: Simple network topology for simulation.

4.5.1 Performance of Single Bottleneck Link: Dumbbell Topology with Constant Bit Rate (CBR) UDP source

In the first graph (Fig. 4.2) the data collected for UDP packet losses for the 7 KB, 20 KB and 50 KB buffer sizes are shown. The TCP packet loss graph for the same network is shown in Figure 4.3. Here the 7 Kilobytes buffer is considered a tiny buffer, the 20 Kilobytes buffer a small buffer and the 50 Kilobytes buffer a medium sized buffer. In all of these cases the link utilization is calculated to be between 80-90%. From these two packet loss graphs it can be seen that the TCP packet loss always increases with larger packet sizes for the 20 KB and 50 KB buffers. The UDP packet loss also increases up to 1000 bytes for the 20 KB and 50 KB buffers, though the increase is not as significant as that of the TCP packet losses. For a tiny buffer such as 7 KB, the TCP and UDP packet loss curves are much less regular than those for the 20 KB and 50 KB buffers.


Figure 4.3: TCP packet loss: Dumbbell topology with CBR

Individual throughputs for UDP and TCP are shown in Fig. 4.4 and Fig. 4.5. It can be seen from these two figures that the TCP performance for the small and medium sized buffers is very similar for almost all packet sizes, although for the 20 KB small buffer the TCP performance suffers for larger packet sizes. Similar to the packet loss graphs, the performance of the tiny 7 KB buffer is less regular for UDP throughput but comparable with the other buffer sizes for TCP throughput. Overall, from this set of simulations it can be concluded that if routers have to use tiny buffers, then it is better to use a small data packet size to obtain reasonable performance.


Figure 4.4: UDP throughput: Dumbbell topology with CBR

Figure 4.5: TCP throughput: Dumbbell topology with CBR

4.5.2 Dumbbell topology: Analysis of Variable Bit Rate (VBR) UDP traffic

In the previous section, the dumbbell topology is studied with Constant Bit Rate (CBR) UDP source traffic. To see the impact of variable bit rate traffic, we simulate the same network using a Variable Bit Rate (VBR) traffic generator for the UDP source and plot the packet losses in Fig. 4.6 and Fig. 4.7. For TCP packet loss the graphs for the 20 KB and 50 KB buffers are very similar, but for UDP packet loss there are fewer losses with the 50 KB buffer than with the 20 KB buffer. For a tiny buffer such as 7 KB, the packet losses for both TCP and UDP are significant compared with the 20 KB and 50 KB buffers.

Figure 4.6: UDP packet loss: Dumbbell topology with VBR

Figure 4.7: TCP packet loss: Dumbbell topology with VBR


The corresponding UDP and TCP throughputs are shown in Figures 4.8 and 4.9. From the UDP throughput graph it can be seen that the throughput of the 7 KB tiny buffer is lower than that of the larger buffers for most packet sizes. The TCP and UDP performance of the 20 KB small buffer and the 50 KB medium sized buffer is almost identical for packet sizes less than 1000 bytes. Overall, a similar conclusion can be drawn from this set of simulations: if routers have to use tiny buffers, then it is better to use a small data packet size to obtain reasonable performance. As the VBR source used here has “On” and “Off” durations, the TCP sources get some relief from the UDP traffic during the “Off” periods. Hence there is a slight performance improvement when compared to the CBR simulation cases.


Figure 4.9: TCP throughput: Dumbbell topology with VBR

4.5.3 Dumbbell topology with fixed TCP packet size and variable UDP packet sizes

To see the impact of a fixed TCP packet size and variable UDP packet sizes, the buffer size is fixed at 20 KB and the TCP packet size at 1000 bytes. From the graph (Figure 4.10), it can be seen that the UDP packet loss increases drastically with larger packet sizes. For TCP packets the curve is almost a flat line, and the combined throughput of TCP and UDP is always near 1.4 Mbps.


Figure 4.10: Dumbbell topology with packet losses (buffer size is 20 KB and only UDP packet size is varying)

4.6 Multiple bottleneck links: Parking-lot model

In this section, a performance analysis of the linear and n-link parking-lot models is presented.

4.6.1 Multiple bottleneck link: Linear parking-lot model with CBR

As a complex network topology, the linear parking-lot model is chosen to see the performance of small buffer routers in networks. In the linear parking-lot model (Figure 4.11), there is one UDP CBR source connected to node n6, and the sink for this source is node n13; i.e., the CBR traffic traverses all bottleneck links in the network. There are four TCP sources connected as follows: source n0 connected to sink n5, source n4 connected to sink n9, source n8 connected to sink n12 and source n11 connected to sink n1. Here we have chosen the link n2-n3 as the 1st bottleneck link, n3-n7 as the 2nd bottleneck link and n7-n10 as the 3rd bottleneck link. The graphs are shown based upon these three bottleneck links.


Figure 4.11: Linear parking-lot network model for simulation.

As before, the buffer size is varied and buffer sizes of 50 Kilobytes, 20 Kilobytes and 7 Kilobytes with different TCP and UDP packet sizes are used in the simulations. The bottleneck links have 1.5 Mbps bandwidth and 5 ms of delay. So, in this case, for each bottleneck link in the parking-lot model the bandwidth-delay product (BDP) is $\frac{1.5 \times 10^6 \times 0.01}{8} = 1875$ bytes.

In the graph of Fig. 4.12, the performance of the network for the medium sized 50 Kilobytes buffer is shown. At a particular point, where the packet size is near 1000-1100 bytes, the UDP packet loss dips slightly. For TCP packets, the packet loss always increases with larger packet sizes.

Figure 4.12: Linear parking lot topology with packet losses (50 KB buffer with CBR)

From the graph in Fig. 4.13 it can be seen that the UDP packet loss increases steadily, like the TCP packet loss, up to 1100 bytes; after that the UDP packet losses become significant for 1200, 1300 and 1400 bytes.

Figure 4.13: Linear parking lot topology with packet losses (20 KB buffer with CBR)

From the graph in Fig. 4.14 for the 7 Kilobytes tiny buffer, the UDP packet loss does not increase or decrease as consistently as in the case of the dumbbell topology.

Figure 4.14: Linear parking lot topology with packet losses (7 KB buffer with CBR)

In Fig. 4.15 the throughput for the 50 KB buffer, in Fig. 4.16 the throughput for the 20 KB buffer and in Fig. 4.17 the throughput for the 7 KB buffer are plotted. From all these figures, it can be concluded that the performance of the tiny buffer is reasonable for small packet sizes, but deteriorates significantly as the packet size increases. Though the performance seems reasonable for the larger buffer sizes, the packet loss is still largely a function of the packet size. Thus it is ultimately the perceived performance of the applications that will be most affected.

Figure 4.15: Linear parking-lot throughput for different hops (50 KB buffer with CBR)
