TY - GEN
T1 - A cooperative model for wide area content delivery applications
AU - Rashkovits, Rami
AU - Gal, Avigdor
PY - 2005
Y1 - 2005
N2 - Content delivery is a major task in wide area environments, such as the Web. Latency, the time that elapses from when the user sends a request until the server's response is received, is a major concern in many applications. Minimizing latency is therefore an obvious goal in wide area environments, and one of the most common solutions in practice is client-side caching. Collaborative caching is used to further enhance content delivery, but unfortunately it often fails to provide significant improvements. In this work, we explore the limitations of collaborative caching, analyze the existing literature, and suggest a cooperative model for which cache content sharing shows more promise. We propose a novel approach, based on the observation that clients can specify their tolerance towards content obsolescence using a simple-to-use method, while servers can supply content update patterns. The cache uses a cost model to determine which of the following three alternatives is most promising: delivery of a local copy, delivery of a copy from a cooperating cache, or delivery of a fresh copy from the origin server. Our experiments reveal that, using the proposed model, it becomes possible to meet client needs with reduced latency. We also show the benefit of cache cooperation in increasing hit ratios and thus reducing latency further. Specifically, we show that cache collaboration is particularly useful to users with high demands regarding both latency and consistency.
UR - http://www.scopus.com/inward/record.url?scp=33646690530&partnerID=8YFLogxK
U2 - 10.1007/11575771_26
DO - 10.1007/11575771_26
M3 - Conference contribution
AN - SCOPUS:33646690530
SN - 3540297367
SN - 9783540297369
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 402
EP - 419
BT - On the Move to Meaningful Internet Systems 2005
T2 - OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2005 - On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE
Y2 - 31 October 2005 through 4 November 2005
ER -