EECS technical reports


Towards a Scalable, Adaptive and Network-aware Content Distribution Network

Chen, Yan
Technical Report Identifier: CSD-03-1295

Abstract: The Internet has evolved into a critical commercial infrastructure for service delivery. However, the Internet is an enormous, highly dynamic, heterogeneous, and untrusted environment, which raises several challenges for building Internet-scale services (such as content delivery) with good scalability, efficiency, agility, and security. In this thesis, we explore these issues by developing a scalable, adaptive, and network-aware infrastructure for efficient content delivery, the Scalable Content Access Network (SCAN).

SCAN has four components: object location, replica placement and update multicast tree construction, replica management, and overlay network monitoring services.

First, we propose a novel simulation-based benchmark for network Denial-of-Service (DoS) resilience, and apply it to evaluate and compare centralized, replicated, and emerging distributed object location services. We find that the distributed hash table (DHT)-based object location service has the best resilience in practice, so we leverage it for replica location.
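The core idea of DHT-based object location is that both nodes and objects are hashed onto a shared identifier space, so any node can compute which node is responsible for an object without a central directory. A minimal sketch of this mapping (the class and names here are hypothetical illustrations, not the thesis code, and a real DHT such as Tapestry adds routing and locality that this omits):

```python
import hashlib
from bisect import bisect_right

def key_id(s: str, bits: int = 32) -> int:
    """Hash a name onto the DHT identifier space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** bits)

class ConsistentHashRing:
    """Minimal consistent-hashing ring: an object is owned by the first
    node clockwise from the object's identifier, so location requires no
    centralized directory and survives individual node failures."""

    def __init__(self, nodes):
        self.ring = sorted((key_id(n), n) for n in nodes)

    def locate(self, obj_name: str) -> str:
        oid = key_id(obj_name)
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, oid) % len(self.ring)  # wrap around the ring
        return self.ring[idx][1]
```

Because the mapping is deterministic, every participant independently agrees on an object's home node, which is what makes the scheme resilient: no single directory server can be targeted.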

Second, we propose the first algorithm that dynamically places a close-to-optimal number of replicas while meeting client QoS and server resource constraints, using only the overlay network topology. These replicas then self-organize into an application-level multicast tree on top of a DHT, Tapestry, which enables a distributed "join" operation so that each node in the tree (including the root) only needs to maintain state for its parent and direct children.
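One common way to frame such constrained placement is as a covering problem solved greedily: keep adding the server that serves the most still-unserved clients within the QoS bound. The sketch below illustrates that framing only, under assumed inputs (a client-to-server latency table, a latency bound, and a per-server capacity); it is not the thesis algorithm:

```python
def place_replicas(clients, candidates, dist, latency_bound, capacity):
    """Greedy sketch: repeatedly add the candidate server that covers the
    most unserved clients within the latency (QoS) bound, honoring a
    per-server capacity limit, until every client is covered."""
    replicas, unserved = [], set(clients)
    while unserved:
        best, best_cover = None, []
        for s in candidates:
            if s in replicas:
                continue
            cover = [c for c in sorted(unserved)
                     if dist[c][s] <= latency_bound][:capacity]
            if len(cover) > len(best_cover):
                best, best_cover = s, cover
        if best is None:
            break  # remaining clients cannot meet the QoS bound
        replicas.append(best)
        unserved.difference_update(best_cover)
    return replicas
```

The greedy choice keeps the replica count small (a classic set-cover approximation), which mirrors the goal of placing close to the optimal number of replicas rather than replicating everywhere.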

Third, we apply cooperative clustering-based replication to SCAN, which achieves user-perceived performance comparable to conventional CDNs while incurring only 4-5% of the replication and update traffic and 1-2% of the computation and replica management cost. Furthermore, we propose a unique online Web object popularity prediction algorithm based only on hyperlink structures, and apply it for online incremental clustering and replication to adapt to changes in users' access patterns. This scheme adds new content to the appropriate existing cluster replicas even before it is accessed, improving availability during flash crowds.

Fourth, to provide a general foundation for applications to take advantage of network awareness, we develop a scalable overlay network measurement and monitoring system with two components: Internet Iso-bar for latency estimation, and TOM (Tomography-based Overlay network Monitoring) for loss rate estimation. Internet Iso-bar clusters hosts based on the similarity of their perceived network distances to a small number of landmark sites, and chooses the centroid of each cluster as its monitor site. Evaluation using real Internet measurements shows that our scheme offers much better latency accuracy and stability than traditional clustering based on network or geographical proximity. Furthermore, it detects 78% of congestion events and failures with a 32% false-positive rate.
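Concretely, each host can be represented by its vector of measured distances to the landmarks, and within each cluster the monitor is the host closest to the cluster centroid in that vector space. A minimal sketch of the monitor-selection step, assuming a precomputed RTT table (function and variable names are illustrative, not from the thesis):

```python
import math

def choose_monitor(cluster, landmarks, rtt):
    """Iso-bar-style sketch: represent each host by its vector of RTTs
    to the landmark sites, compute the cluster centroid in that space,
    and pick the host nearest the centroid as the cluster's monitor."""
    centroid = [sum(rtt[h][l] for h in cluster) / len(cluster)
                for l in landmarks]
    return min(cluster,
               key=lambda h: math.dist([rtt[h][l] for l in landmarks],
                                       centroid))
```

Because the monitor sits at the "center" of its cluster's distance profile, its measurements to other monitors approximate those of every host in the cluster, which is what lets n-host monitoring cost scale with the number of clusters rather than hosts.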

For mission-critical applications that require more accurate loss rate estimation, we develop TOM, which requires somewhat more measurement than Internet Iso-bar (O(n log n) paths instead of O(k^2 + n), where n is the number of end hosts and k is the number of clusters), but still far less than the O(n^2) of existing approaches. TOM selectively monitors and measures the loss rates of a minimal basis set of O(n log n) linearly independent paths, and then uses these measurements to estimate the loss rates of all other paths. Both extensive simulation and Internet experiments show that TOM achieves high path loss rate estimation accuracy, balances load well, tolerates topology measurement errors, and adapts to topology changes.
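The underlying algebra is linear after a log transform: if x_j = log(1 - loss of link j), then a path's log transmission rate is the sum of x_j over its links, i.e. Gx for the path-link routing matrix G. Measuring a set of linearly independent paths determines x, and every other path's loss rate follows. A minimal sketch of the inference step, assuming the measured paths form a square, full-rank system (a simplification of the general least-squares setting):

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination for a square, full-rank linear system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def estimate_path_loss(G, measured_idx, measured_loss):
    """TOM-style sketch: with x_j = log(1 - loss of link j), each path's
    log transmission rate is (Gx) for routing matrix G. Measure a basis
    of linearly independent paths, solve for x, then infer all paths."""
    A = [G[i] for i in measured_idx]
    b = [math.log(1.0 - p) for p in measured_loss]
    x = solve(A, b)
    return [1.0 - math.exp(sum(g * xi for g, xi in zip(row, x)))
            for row in G]
```

For example, with three links and measured losses on three independent two-link paths, the loss rate of an unmeasured path traversing all three links is recovered exactly; TOM's contribution is showing that such a basis has only O(n log n) paths and can be selected and maintained scalably.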

Finally, to demonstrate the effectiveness of the monitoring services, we develop an adaptive overlay streaming media system that leverages our monitoring services for real-time path congestion/failure information, and an overlay network for adaptive packet relaying and buffering within the delivery infrastructure. Traditional streaming media systems treat the underlying network as a best-effort black box and perform adaptation only at the transmission end-points. In contrast, our Internet experiments show that overlay routing can often improve the loss rate and/or TCP throughput of lossy paths, and our system typically adapts to network congestion within five seconds, achieving skip-free streaming media playback.