Characterizing Chord, Kelips and Tapestry algorithms in P2P streaming applications over wireless network

Hung Nguyen Chan 1, Khang Nguyen Van 1, Giang Ngo Hoang 2

1 Faculty of Electronics and Telecommunications, Hanoi University of Technology, R410, C9 Building, 1 Dai Co Viet Str, Hanoi, Vietnam.

2 Library and Information Network Center, Hanoi University of Technology, Ta Quang Buu Building, 1 Dai Co Viet Str, Hanoi, Vietnam.

Email: {chanhung, khangnv, nhgianglinc}@mail.hut.edu.vn

Abstract. In the last few years, there has been a tendency to shift P2P applications toward multimedia services, especially P2P streaming. The reason is the clear advantages of flexibility, efficiency and self-scalability of P2P networks, which greatly reduce the infrastructure cost for service providers. However, supporting P2P streaming over a wireless environment is very challenging due to the intermittent nature of wireless links and the energy-saving mechanisms of mobile devices. As a result, P2P streaming over wireless differs greatly from wired P2P applications due to the high frequency of nodes joining and leaving the network, i.e., the churn rate. Since most third-generation P2P applications implement a Distributed Hash Table (DHT) algorithm, it is crucial to investigate and improve DHTs in order to adapt them to harsh wireless environments. Our study uses a simulation approach to characterize and compare the performance of three popular DHT algorithms, Chord, Kelips and Tapestry, under high churn rates. The major contributions of this paper are the identification of the DHT parameters most important to performance and the comparison of these mechanisms under the extreme conditions of a wireless environment. We observed several interesting behaviors of DHTs: 1) Tapestry works better than Chord and Kelips in terms of successful lookup rate at very high churn (node join/leave interval below 120 s), but Chord achieves the best performance among the three DHTs when the interval exceeds 300 s. 2) Tapestry's performance is more sensitive to RTT than that of Kelips and Chord. 3) Both Chord and Tapestry show high scalability under high churn (except for some extreme cases where Chord fails at very high churn rates). 4) Churn rate strongly affects the successful lookup ratio but has only a slight effect on the median lookup latency of all three protocols. 5) Chord is inferior to the other two DHTs in terms of performance optimization.

Keywords: peer-to-peer, overlay network, DHT, performance evaluation, churn, distributed hash table, P2P streaming, wireless.

I. INTRODUCTION

Along with the fast growth of P2P applications, there have been successful attempts to adopt file-sharing P2P mechanisms to create large-scale streaming video platforms serving both online (such as PPLive [3] and UUSee [2]) and offline content [6]. By measuring the real-world Coolstreaming system, Li et al. [1] showed that churn is the most critical factor affecting overall performance.

However, all these studies focus only on networks of long-lived nodes, which do not join and leave the P2P network very frequently. In the wireless environment, where communication links are heavily affected by high packet loss rates and bandwidth fluctuations, the situation is much more complicated. In addition, various energy-saving mechanisms try to extend the battery life of mobile equipment by turning off unnecessary data transmission, which shortens node online time. As a result, the churn rate of mobile nodes is much higher than that of fixed nodes in wired networks, which requires significant modifications to P2P mechanisms.

On the other hand, third-generation P2P networks implement a globally consistent protocol to ensure that any node can efficiently route a search query to some peer that has the desired chunk of data. The Distributed Hash Table (DHT) is currently the most common mechanism for third-generation P2P networks.

In DHT-based P2P systems, data blocks are associated with keys created from their unique attributes. Each peer is assigned an ID and is responsible for storing the range of keys closest to its ID. When a node looks up a key, the network returns the ID of the node where the file associated with that key is stored. DHTs thus allow nodes to exchange data blocks based on their keys. Since this model has proved effective in large-scale networks, most modern P2P systems are based on the DHT mechanism; [17] is a good example of adopting a DHT in a P2P Video-on-Demand system. Some well-known DHTs are Chord [5], Kelips [9], Tapestry [11] and Kademlia [8].
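To make this mapping concrete, the following is a minimal sketch (not taken from any of the surveyed systems) of how peer IDs and data keys can be derived by hashing and how a key is assigned to the "closest" node. The SHA-1 hash, the 32-bit ID size and the clockwise-distance metric are illustrative assumptions; each DHT defines its own closeness metric.

```python
import hashlib

M = 32                                   # identifier bits (assumed for illustration)

def make_id(data: bytes) -> int:
    """Hash an attribute (data key) or a peer's IP address into an m-bit ID."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % (1 << M)

def responsible_node(key: int, node_ids):
    """The node whose ID is 'closest' to the key stores it; here closeness is
    clockwise distance on the ID circle, but each DHT defines its own metric."""
    return min(node_ids, key=lambda n: (n - key) % (1 << M))

peers = [make_id(ip.encode()) for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3")]
chunk_key = make_id(b"movie.avi/chunk/42")
print(f"key {chunk_key:#x} -> node {responsible_node(chunk_key, peers):#x}")
```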

Chord [5] uses consistent hashing to assign peers and data keys to m-bit identifiers. A peer identifier is obtained by hashing the peer's IP address, while a key identifier is obtained by hashing the data key. Chord structures its identifier space as a circle of numbers from 0 to 2^m - 1. Key k is assigned to the first node clockwise from k, i.e., the node whose identifier is equal to or follows the identifier of k in the identifier space. This node is called the successor node of key k, denoted successor(k). A node in base-b Chord keeps (b-1)*log_b(n) fingers, whose IDs lie at exponentially increasing fractions of the ID space away from the node itself. Each node also keeps a list of its immediate successor nodes. A lookup for a key returns the successor of the node whose ID most closely precedes the key. Chord uses a stabilization protocol running periodically in the background to update the successor list (every t_succ) and the entries in the finger table (every t_finger); the successor list and the finger table are stabilized separately.
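The routing logic just described can be sketched as follows. This is a simplified, hypothetical model (static membership, base 2, no successor list or stabilization), not P2PSim's Chord implementation.

```python
M = 16                      # identifier bits; the ring has 2^M positions (assumed)
RING = 1 << M

def successor(ident, nodes):
    """First node at or clockwise after ident on the identifier circle."""
    return min(nodes, key=lambda n: (n - ident) % RING)

def fingers(node, nodes):
    """Finger i targets successor(node + 2^i): IDs at exponentially
    increasing fractions of the ID space away from the node."""
    return [successor((node + (1 << i)) % RING, nodes) for i in range(M)]

def lookup(cur, key, nodes):
    """Greedy lookup: hop to the finger most closely preceding the key,
    until the key falls between the current node and its successor."""
    while True:
        succ = successor((cur + 1) % RING, nodes)
        if 0 < (key - cur) % RING <= (succ - cur) % RING:
            return succ                      # succ is responsible for key
        preceding = [f for f in fingers(cur, nodes)
                     if 0 < (f - cur) % RING < (key - cur) % RING]
        cur = max(preceding, key=lambda f: (f - cur) % RING) if preceding else succ

nodes = [1, 9000, 22000, 40000, 61000]       # invented sample IDs
print(lookup(1, key=35000, nodes=nodes))     # -> 40000, the key's successor
```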

Kelips [9] divides its identifier space into k groups, where k approximates the square root of n (n being the number of nodes) [6]. The group of a node is its ID mod k. Each node maintains the following entries: a group view, containing a set of nodes lying in the same group; contacts, containing entries for a few nodes from each of the other groups; and file-tuples, containing a (partial) set of tuples, each detailing a file name and the host IP address of the node storing the file (called the file's home node). A node stores a file-tuple only if the file's home node lies in the node's own group. The node routing table consists of the entries in the group view and the contacts. Kelips does not define an explicit mapping from a given key to its responsible node; instead, it replicates key/value pairs among all nodes within the key's group. A lookup request is sent to the topologically closest contact among those the node knows for the group corresponding to the key. A received lookup request is resolved by searching the file-tuples maintained at that node and returning to the querying node the address of the home node storing the file.
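A rough sketch of this one-hop lookup follows, under simplifying assumptions (static contact lists with fixed RTTs, fully replicated file-tuples per group); the variable names and sample data are hypothetical.

```python
import math

def group_of(ident: int, k: int) -> int:
    """A node (or key) belongs to group ident mod k, where k ~ sqrt(n)."""
    return ident % k

def lookup(key: int, k: int, contacts: dict, filetuples: dict):
    """Forward the query to the lowest-RTT contact in the key's group;
    that node resolves it from the file-tuples replicated in its group."""
    g = group_of(key, k)
    rtt, contact = min(contacts[g])          # topologically closest contact
    return filetuples[contact].get(key)      # home node's address, or None

n = 100
k = round(math.sqrt(n))                      # number of affinity groups
# contacts: {group: [(rtt_seconds, node_id), ...]} -- invented local state
contacts = {g: [(0.05, g + 10), (0.20, g + 20)] for g in range(k)}
filetuples = {g + 10: {g + k: "10.0.0.%d" % g} for g in range(k)}
print(lookup(key=7 + k, k=k, contacts=contacts, filetuples=filetuples))
```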

Tapestry [11] structures its identifier space as a tree. A Tapestry node ID can be viewed as a sequence of l base-b digits. A Tapestry node has a routing table with k levels, each with b entries. Entries at the m-th level share a prefix of length m - 1 digits with the node's ID but differ in the m-th digit. Each entry may contain up to c nodes, sorted by latency; the closest of these is the entry's primary neighbor, while the others serve as backup neighbors. Tapestry nodes forward a lookup message for a key by resolving successive digits of the key (prefix-based routing). When no more digits can be resolved, an algorithm known as surrogate routing determines exactly which node is responsible for the key. Routing in Tapestry is recursive. For lookups to be correct, at least one neighbor in each routing table entry must be alive. Tapestry therefore periodically checks the status of each primary neighbor; if a node is found to be dead, the next closest backup in that entry (if one exists) becomes the primary. When a node declares a primary neighbor dead, it contacts some number of other neighbors asking for a replacement.
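The prefix-matching step can be illustrated as below: a hypothetical sketch in which routing-table levels are indexed by the number of digits already shared with the key, and surrogate routing and liveness checks are omitted.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(node_id: str, key: str, table):
    """One routing step: resolve the next digit of the key.
    table[l][d] lists neighbors sharing l digits with this node's ID whose
    (l+1)-th digit is d, sorted by latency (index 0 = primary neighbor)."""
    level = shared_prefix_len(node_id, key)      # digits already resolved
    entry = table.get(level, {}).get(key[level], [])
    # A dead primary would be replaced by the next backup; with an empty
    # entry, surrogate routing (omitted here) picks the responsible node.
    return entry[0] if entry else None

# Node "1234" routing a lookup for key "1377" (base-10 digits for readability):
table = {1: {"3": ["1301", "1355"]}}             # entries sharing the prefix "1"
print(next_hop("1234", "1377", table))           # -> "1301", the primary neighbor
```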

Since DHT algorithms try to establish and maintain a certain overlay topology (such as the ring topology in Chord), a high frequency of node joins and leaves greatly affects topology stability, increasing both the communication cost of re-establishing partnerships between neighboring nodes and the time required for a successful lookup. Although several studies have tried to mitigate the effect of churn on DHTs [7], to our knowledge no study has focused on comparing the performance of DHTs under the extreme conditions of a wireless environment.

In our study, three popular DHTs, Chord, Kelips and Tapestry, are characterized using simulation. We ran more than 12,000 simulations on P2PSim [16] under Linux, simulating networks of sizes ranging from 100 to 1000 nodes running these DHTs under high and very high churn, with node join/leave intervals from 10 to 600 seconds.

This paper is structured as follows: after this introduction, with its background and review of related studies, we present the experimental setup parameters, followed by a discussion of the results obtained from the simulation work. Finally, the fourth section concludes the paper and raises open issues for future studies.

II. EXPERIMENTAL PARAMETERS

P2PSim [16] was chosen as the simulation platform for this study due to its strong support for the newest DHT algorithms. Our simulation process includes two steps. In the first step, wide ranges of parameter values for Chord, Kelips and Tapestry were used as input. Based on the results obtained from the first step, the second step involves only the subset of parameter values that produced the best performance in the first step. In both steps, the simulated network consists of 100, 250, 500, 750 and 1000 nodes. An average RTT of 2 seconds was set between any pair of nodes. To set up the network topology, a Perl script was used to randomly generate the geographical positions of nodes. To simulate high churn, nodes alternately crash and re-join the network; the interval between successive events for each node is exponentially distributed, with mean values ranging from 10 seconds to 600 seconds (10 minutes). Each node generates lookup requests for randomly selected keys, with inter-request times exponentially distributed with a mean of 60 s. Tables I to III summarize the parameters of Chord, Kelips and Tapestry used in the simulation.
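For illustration, churn of this kind can be generated as in the following sketch; this is our assumption of how such a trace is produced, and P2PSim's own event generator may differ.

```python
import random

def churn_events(n_nodes: int, mean_interval: float, sim_time: float, seed=1):
    """Alternate crash/join events per node; gaps between successive events
    are exponentially distributed with the given mean (10..600 s here)."""
    rng = random.Random(seed)
    events = []
    for node in range(n_nodes):
        t, alive = 0.0, True
        while t < sim_time:
            t += rng.expovariate(1.0 / mean_interval)   # mean = mean_interval
            events.append((t, node, "crash" if alive else "join"))
            alive = not alive
    return sorted(events)

for t, node, ev in churn_events(n_nodes=3, mean_interval=120, sim_time=600)[:5]:
    print(f"{t:8.1f}s node{node} {ev}")
```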

The output data, comprising a very large number of log files produced by P2PSim, was first pre-processed by several UNIX bash scripts and then processed in MS Excel.

TABLE I. SIMULATION PARAMETERS OF CHORD

Parameters                          | Scenario 1 (1st step)       | Scenario 2 (2nd step)
Base                                | 2, 4, 8, 16, 32, 64, 128    | 16, 32
Finger stabilization interval (sec) | 1, 3, 6, 9, 18, 36, 72, 144 | 18, 36
Pnstimer (sec)                      | 1, 3, 6, 9, 18, 36, 72, 144 | 18, 36
Number of successors                | 16                          | 16

TABLE II. SIMULATION PARAMETERS OF TAPESTRY

Parameters                              | Scenario 1 (1st step)       | Scenario 2 (2nd step)
Base                                    | 2, 4, 8, 16, 32, 64, 128    | 2, 4
Stabilization interval (sec)            | 1, 3, 6, 9, 18, 36, 72, 144 | 72, 144, 288
Number of backup nodes                  | 2, 3, 4                     | 4
Number of nodes contacted during repair | 1, 3, 5, 10                 | 10

TABLE III. SIMULATION PARAMETERS OF KELIPS

Parameters                   | Scenario 1 (1st step)       | Scenario 2 (2nd step)
Gossip interval (sec)        | 1, 3, 6, 9, 18, 36, 72, 144 | 72, 144
Group ration                 | 8, 16, 32                   | 32
Contact ration               | 8, 16, 32                   | 32
Times a new item is gossiped | 2, 8                        | 2
Routing entry timeout (sec)  | 5, 30, 60, 150, 300         | 30, 60, 150, 300

III. RESULTS AND DISCUSSION

Fig. 1 to Fig. 5 show the simulation results for scenario 1, in which the x-axis shows the average bytes per second sent by live nodes and the y-axis indicates lookup performance, either as median lookup latency or as failure rate.

In Fig. 1, each point represents the fraction of failed lookups and the average bytes per second sent by live nodes for a unique set of parameter values. The solid lines, named "convex hulls" [10], represent the best achievable combinations of the fraction of failed lookups and the average live bandwidth sent by live nodes.


Figure 1. Node join/leave with interval = 600 s in a 100-node Chord network

Using the Performance Versus Cost (PVC) framework proposed in [10], the "overall convex hull" is defined as the chart line representing the best achievable combination of DHT performance (i.e., fraction of failed lookups, median lookup latency) and bandwidth cost (average live bandwidth) when all parameters are varied. Likewise, the line representing the best achievable combination when one parameter is fixed and the others are varied is termed the "parameter convex hull".
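As an illustration of how such a frontier can be extracted from the scatter of (bandwidth, failure-rate) points, the following sketch computes a lower convex hull and truncates it at its minimum failure rate. The sample points are invented, and this is a sketch of the idea, not the PVC tool itself.

```python
def cross(o, a, b):
    """Cross product of vectors oa and ob; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def pvc_hull(points):
    """Lower convex hull (Andrew's monotone chain) of (cost, failure) points,
    truncated at the minimum failure rate: the best-achievable frontier."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    best = min(range(len(hull)), key=lambda i: hull[i][1])
    return hull[:best + 1]

# Invented (bytes/node/s, fraction of failed lookups) samples:
samples = [(10, 0.60), (15, 0.35), (20, 0.30), (25, 0.12), (40, 0.10), (60, 0.11)]
print(pvc_hull(samples))   # -> [(10, 0.6), (15, 0.35), (25, 0.12), (40, 0.1)]
```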

Figure 2. The fraction of failed lookups and average live bandwidth in 1000-node networks when nodes join/leave with intervals of 120 s (top) and 600 s (bottom).

Fig. 2 shows the overall convex hulls of Chord, Kelips and Tapestry in 1000-node networks where nodes leave/join with intervals of 120 s and 600 s. The results show that the fraction of failed lookups of Tapestry is smaller than that of Kelips and Chord at a very high churn rate (when nodes join/leave every 120 s), but at a lower churn rate (600 s) the failure rate of Chord is smaller than that of Kelips and Tapestry.

Fig. 3 shows the effect of the average round-trip time on Chord, Kelips and Tapestry. The fraction of failed lookups of Tapestry drops sharply when the average RTT decreases from 2 s to 0.5 s, while the fraction of failed lookups and the median successful lookup latency of Chord and Kelips are almost unaffected.

Figure 3. The effect of average round-trip time on the fraction of failed lookups (top) and the median successful lookup latency (bottom) in 100-node networks (with average round-trip times of 0.5 s and 2 s) when nodes join/leave with an interval of 600 s.

Fig. 4 and Fig. 5 explore several Chord, Kelips and Tapestry parameters. In Fig. 4, the overall convex hull of Tapestry (top) coincides with the parameter convex hull for a Tapestry "base" value of 2, and the overall convex hull of Kelips (bottom) coincides with the parameter convex hull for a Kelips "gossip_interval" value of 144 s.

On the other hand, Fig. 5 shows that there is no single best value of the successor stabilization interval for Chord, since no parameter convex hull completely coincides with the Chord overall convex hull. Instead, the overall convex hull is made up of several segments of the convex hulls of the Chord "basictimer" parameter, corresponding to successor stabilization intervals of 3 s, 9 s, 18 s and 36 s. A similar conclusion holds for the Chord finger stabilization interval ("pnstimer"). These results suggest that for some DHTs, such as Kelips and Tapestry, a single parameter value can be chosen to achieve the best performance, whereas for others (e.g., Chord) several parameters must be tuned together based on the application's requirements regarding the trade-offs among successful lookup ratio, latency and bandwidth.


Figure 4. The effect of "base" in Tapestry (top) and of "gossip interval" in Kelips (bottom) in 1000-node networks where nodes join/leave with an interval of 600 s.

Figure 5. The convex hulls of the successor stabilization interval (top) and the finger stabilization interval (bottom) in a 1000-node Chord network where nodes join/leave every 600 s.

From the results of the first step, a selective set of parameter values for Chord, Kelips and Tapestry was chosen as input to the second step (scenario 2), as listed in Tables I to III. The averaged results were used to evaluate the effect of churn rate and network size on the performance of Chord, Kelips and Tapestry.

[Figure 6a: "Failed lookup vs. churn rate" — fraction of failed lookups versus node join/leave interval (s), for Chord, Kelips and Tapestry in 100-, 500- and 1000-node networks.]

[Figure 6b: "Med. successful lookup latency vs. churn rates" — median successful lookup latency (ms) versus node join/leave interval (s), for Chord, Kelips and Tapestry in 100-, 500- and 1000-node networks.]

Figure 6. The effect of churn rate on the fraction of failed lookups (6a) and the median successful lookup latency (6b) in networks of various sizes.

Fig. 6 presents the effect of churn rate on the fraction of failed lookups (6a) and the median successful lookup latency (6b) in networks of 100, 500 and 1000 nodes. In this figure, ChordN, KelipsN and TapestryN denote networks of size N running Chord, Kelips and Tapestry, respectively.

When the churn rate is very high, with node join/leave intervals smaller than 120 s, Tapestry has the smallest fraction of failed lookups while Chord has the highest. But as the churn rate decreases, Chord shows much better performance: its fraction of failed lookups decreases faster than that of Kelips and Tapestry. When the join/leave interval exceeds 300 s, the fraction of failed lookups of Chord is the lowest while that of Kelips is the highest.



Fig. 6b shows that the median successful lookup latency of Chord is smaller than those of Kelips and Tapestry at low churn rates.

[Figure 7a: "Failed lookup vs. node numbers" — fraction of failed lookups versus network size (nodes), for Chord, Kelips and Tapestry with join/leave intervals of 60 s, 120 s and 600 s.]

[Figure 7b: "Med. successful lookup latency vs. node numbers" — median successful lookup latency (ms) versus network size (nodes), for Chord, Kelips and Tapestry with join/leave intervals of 60 s, 120 s and 600 s.]

Figure 7. The fraction of failed lookups (7a) and the median successful lookup latency (7b) in networks of various sizes where nodes join/leave with intervals of 60 s, 120 s and 600 s.

Fig. 7 presents the effect of network size on the fraction of failed lookups (7a) and the median successful lookup latency (7b) in networks with node join/leave intervals of approximately 60 s, 120 s and 600 s. From Fig. 7 it can be seen that Chord has both the smallest fraction of failed lookups and the smallest median successful lookup latency across the various network sizes, while these parameters are highest for Kelips. Fig. 7a also shows that as the network size grows, both the fraction of failed lookups and the median successful lookup latency of all three DHTs increase, but those of Chord and Tapestry increase much more slowly than those of Kelips. This demonstrates the better scalability of Chord and Tapestry over Kelips. For the reader's convenience, the observed results are summarized in Table IV.

TABLE IV. THE SUMMARY OF SIMULATION RESULTS

(1: very high churn rate; 2: high and medium churn rate)

Performance parameters                | Chord             | Kelips                | Tapestry              | Remarks
Live bandwidth vs. failed lookup rate | 1- Poor / 2- Best | –                     | 1- Best / 2- Worse    | 1- Chord < ... < Tapestry; 2- Chord < ... < Tapestry
Live bandwidth vs. latency            | 1- Best / 2- Best | 1- Worse / 2- Worse   | 1- Medium / 2- Medium |
Scalability                           | 1- N/A / 2- Good  | 1- Medium / 2- Medium | 1- Good / 2- Good     | In case 1 of Chord, most lookups fail
Tunability for optimized states       | Hard              | Easy                  | Easy                  |
Effect of RTT on performance          | Low               | Low                   | High                  | Low RTT increases Tapestry performance

IV. CONCLUDING REMARKS

This paper presents the results of a performance study of three candidate algorithms for P2P streaming systems over wireless environments, namely Chord, Kelips and Tapestry. We successfully characterized these algorithms under the high and very high churn rates caused by harsh wireless environments.

Based on comprehensive simulations and the "performance versus cost" framework proposed by J. Li et al. [10], we found several interesting results: 1) Tapestry works better than Chord and Kelips in terms of successful lookup rate at very high churn (when nodes join/leave the network with an interval of less than 120 s), but Chord gives the best performance when the churn rate is lower (join/leave interval greater than 300 s). 2) Tapestry's performance is more sensitive to RTT than that of Kelips and Chord, with low RTT producing markedly better performance in Tapestry's case. 3) Our study also demonstrated the high scalability of both Chord and Tapestry under churn, except for some extreme cases where Chord fails at very high churn rates. 4) Another interesting result is that while churn rate strongly affects the successful lookup ratio, it has only a slight effect on the median lookup latency of all three protocols. 5) We also found Chord inferior to the other two DHTs in terms of performance optimization, since there is no single best parameter setting for Chord: its parameters must always be tuned together to achieve the best balance among successful lookup ratio, latency and bandwidth consumption, depending on application requirements. This differs from Kelips and Tapestry, for which best parameter settings achieving the highest performance do exist.

Our future work will focus on two issues: extending the study to less popular DHTs such as Accordion and Koorde, and improving the performance of a selected DHT in wireless P2P networks.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to the VLIR-HUT institutional co-operation program (project number AP06\Prj3\Nr01) and Panasonic Singapore Laboratories for supporting this project.



REFERENCES

[1] B. Li, S. Xie, G. Y. Keung, J. Liu, I. Stoica, H. Zhang, and X. Zhang, "An Empirical Study of the Coolstreaming+ System," IEEE Journal on Selected Areas in Communications, vol. 25, no. 9, p. 1627, Dec. 2007.
[2] C. Wu, B. Li, and S. Zhao, "Characterizing Peer-to-Peer Streaming Flows," IEEE Journal on Selected Areas in Communications, p. 1612, Dec. 2007.
[3] X. Hei, Y. Liu, and K. W. Ross, "Inferring Network-Wide Quality in P2P Live Streaming Systems," IEEE Journal on Selected Areas in Communications, p. 1640, Dec. 2007.
[4] D.-E. Meddour, M. Mushtaq, and T. Ahmed, "Open Issues in P2P Multimedia Streaming," MULTICOMM 2006.
[5] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan, "Chord: A scalable peer-to-peer lookup service for internet applications," in Proceedings of the 2001 ACM SIGCOMM Conference, pp. 149–160, 2001.
[6] K. Suh, C. Diot, J. Kurose, L. Massoulié, C. Neumann, D. Towsley, and M. Varvello, "Push-to-Peer Video-on-Demand System: Design and Evaluation," IEEE Journal on Selected Areas in Communications, p. 1706, Dec. 2007.
[7] S. Rhea, D. Geels, T. Roscoe, and J. Kubiatowicz, "Handling churn in a DHT," in Proceedings of the 2004 USENIX Technical Conference, June 2004.
[8] P. Maymounkov and D. Mazières, "Kademlia: A peer-to-peer information system based on the XOR metric," in Proceedings of the 1st IPTPS, Mar. 2002.
[9] I. Gupta, K. Birman, P. Linga, A. Demers, and R. van Renesse, "Kelips: Building an efficient and stable P2P DHT through increased memory and background overhead," in Proceedings of the 2nd IPTPS, 2003.
[10] J. Li, J. Stribling, R. Morris, M. F. Kaashoek, and T. M. Gil, "A performance vs. cost framework for evaluating DHT design tradeoffs under churn," in Proceedings of the 24th IEEE INFOCOM, March 2005.
[11] B. Y. Zhao, L. Huang, J. Stribling, S. C. Rhea, A. D. Joseph, and J. D. Kubiatowicz, "Tapestry: A resilient global-scale overlay for service deployment," IEEE Journal on Selected Areas in Communications, vol. 22, no. 1, pp. 41–53, Jan. 2004.
[12] K. P. Gummadi, R. Gummadi, S. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica, "The impact of DHT routing geometry on resilience and proximity," in Proceedings of the 2003 ACM SIGCOMM, Aug. 2003.
[13] F. Dabek, M. F. Kaashoek, J. Li, R. Morris, J. Robertson, and E. Sit, "Designing a DHT for low latency and high throughput," in Proceedings of the 1st NSDI, March 2004.
[14] A. Gupta, B. Liskov, and R. Rodrigues, "Efficient routing for peer-to-peer overlays," in Proceedings of the 1st NSDI, Mar. 2004.
[15] E. K. Lua, J. Crowcroft, M. Pias, R. Sharma, and S. Lim, "A Survey and Comparison of Peer-to-Peer Overlay Network Schemes," IEEE Communications Surveys and Tutorials, March 2004.
[16] P2PSim home page, http://pdos.csail.mit.edu/p2psim/
[17] W.-P. K. Yiu, X. Jin, and S.-H. G. Chan, "VMesh: Distributed Segment Storage for Peer-to-Peer Interactive Video Streaming," IEEE Journal on Selected Areas in Communications, p. 1717, Dec. 2007.
