Cybertelecom
Federal Internet Law & Policy
An Educational Project
Content Delivery Networks
Don't be a FOOL; The Law is Not DIY

In the 1990s, content was hosted on a server at one "end" of the Internet and requested by an individual at another "end" of the Internet. The content moved upstream through the content service's access provider and that access provider's backbone provider until it was exchanged with the viewer's backbone provider, and then passed to the viewer's access provider. Each time the content was requested, it made this full trip across the Internet. If there was "flash demand" for content, the demand could overwhelm capacity and result in congestion. In 1999, Victoria's Secret catastrophically demonstrated the problem of content delivery at that time when it advertised during the Super Bowl that it would webcast its fashion show. 1.5 million people attempted to view the Victoria's Secret webcast, overwhelming the server infrastructure and resulting in a poor experience. [Adler] [Borland (1.5 million hits during Victoria's Secret show; "many users were unable to reach the site during the live broadcast because of network bottlenecks and other Internet roadblocks.")]

In 1995, Tim Berners-Lee went before MIT and said, "This won't scale." MIT responded by inventing content delivery networks. [Akamai History] [Berners-Lee, The MIT/Brown Vannevar Bush Symposium (1995) (raising the problem of flash demand and the ability of systems to handle the response)] [Mitra (quoting Tom Leighton, "Tim was interested in issues with the Internet and the web, and he foresaw there would be problems with congestion. Hot spots, flash crowds, … and that the centralized model of distributing content would be facing challenges. He was right.... He presented an ideal problem for me and my group to work on.")] [Berners-Lee] [Held 149] (For discussion of flash demand, see [Jung] [Khan].) A 1998 entrepreneurship competition at MIT resulted in the '703 patent, which became Akamai. [Akamai History] [Khan]

At about the same time, Sandpiper Networks developed its own CDN solution and filed for the '598 patent; Prof. David Farber was one of the contributors to the patent. Sandpiper began offering its "Footprint" CDN service in 1998.

A Content Delivery Network (CDN) is an intelligent array of cache servers. [Charter TWC Merger Order para. 96] The service analyzes demand and attempts to pre-position content on servers as close to eyeballs as possible. Instead of the content being transmitted across the Internet each time it is requested, the content is now delivered once, stored on a cache server at the gateway of an access network, and then served upon request. [Akamai Techs. Inc. v. Cable & Wireless Internet Servs., Inc., 344 F.3d at 1190-91] [Limelight Networks, 134 S.Ct. at 2115] The first generation of CDNs focused on static or dynamic web documents; the second generation focused on video on demand and audio. [Khan]
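To make that mechanic concrete, below is a minimal sketch of the cache-on-first-request behavior just described. It is illustrative only, not any particular CDN vendor's implementation; the origin URL, class name, and cache structure are hypothetical.

```python
# Minimal sketch of a CDN edge cache: serve content from local
# storage when possible; otherwise fetch it once from the origin
# server, store it, and serve all later requests locally.
# (Illustrative only; the origin URL and names are hypothetical.)

import urllib.request

ORIGIN = "https://origin.example.com"  # hypothetical origin server

class EdgeCache:
    def __init__(self):
        self.store = {}  # path -> cached bytes

    def get(self, path):
        if path in self.store:
            # Cache hit: the content never re-crosses the backbone.
            return self.store[path]
        # Cache miss: one trip to the origin, then cached locally.
        with urllib.request.urlopen(ORIGIN + path) as resp:
            body = resp.read()
        self.store[path] = body
        return body

edge = EdgeCache()
# The first request travels to the origin; the next thousand do not.
# page = edge.get("/fashion-show.html")
```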

The CDN is a win-win. To the content provider, the CDN offers transit cost avoidance, as illustrated in the sketch below. Even though the content provider is now paying to deliver the content all the way across the backbones to the access network's gateway (instead of "half way," to the backbones' peering point), the content provider is paying to deliver the content once, or only a few times. Further, the content provider benefits from improved quality of delivery: the server is closer to the audience, the traffic bypasses interconnection congestion, and the traffic is mixed with less competing traffic. Likewise, the CDN offers the access provider transit cost avoidance and quality of service. The access provider had been paying transit in order to receive the content thousands of times; the CDN offers to provide the content directly to the access provider on a settlement-free basis. The access provider's subscribers will receive the content with a higher quality of service and be happy. [Applications of Comcast Corp., Time Warner Cable Inc., Charter Communications, Inc., and SpinCo For Consent to Assign or Transfer Control of Licenses and Authorizations, Netflix Petition to Deny, MB Docket No. 14-57, Declaration of Ken Florance para. 3 (Aug. 25, 2014) ("A CDN provides value to a terminating access networks because the CDN places content as close as possible to that terminating access network's customers (consumers), decreasing the distance that packets need to travel. Placing content closer to consumers results in a higher-quality consumer experience than if the consumer had to call up content that is stored further away from the terminating access network.")] [Buyya at 3]
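The cost logic can be made concrete with a back-of-the-envelope comparison. All prices and request counts below are hypothetical, chosen only to illustrate the structure of the trade-off, not actual market rates.

```python
# Back-of-the-envelope transit cost comparison (all figures
# hypothetical): delivering a video across transit on every
# request vs. delivering it once to an edge cache.

TRANSIT_PRICE_PER_GB = 0.05   # hypothetical $/GB transit rate
VIDEO_SIZE_GB = 2.0           # hypothetical object size
REQUESTS = 100_000            # hypothetical audience size

# Without a CDN: every request crosses the transit link.
cost_without_cdn = TRANSIT_PRICE_PER_GB * VIDEO_SIZE_GB * REQUESTS

# With a CDN: one delivery to the edge cache; subsequent
# requests are served locally, off the transit link.
cost_with_cdn = TRANSIT_PRICE_PER_GB * VIDEO_SIZE_GB * 1

print(f"Transit cost without CDN: ${cost_without_cdn:,.2f}")  # $10,000.00
print(f"Transit cost with CDN:    ${cost_with_cdn:,.2f}")     # $0.10
```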


CDNs altered the interconnection ecosystem.

The old model was the movement of traffic from one end to the other, with traffic exchanged from source network to destination network at an interconnection point between backbones. The two ends of the ecosystem never negotiated directly with each other. Each end could acquire a simple transit arrangement and have full Internet access.

Now, CDNs move the traffic across the network and pre-position it at the gateway of the access network, generally at an IXP. Traffic has evolved from hot potato routing to cold potato routing, with the CDN carrying the traffic from source as far toward the destination as possible. Large CDNs further improve the quality of their service by bypassing interconnection at the IXP and embedding their servers within the access networks. [Netflix Open Connect ISP Partnership Options (listing embedded CDN servers as first option)] [AT&T Info Request Response at 13 ("AT&T MIS services allows customers to choose the capacity of their connections and to deliver as much traffic to AT&T's network as those connections will permit. AT&T's MIS service is used by large content providers REDACTED, content delivery networks REDACTED, enterprises, and large and small businesses. AT&T's MIS service can be 'on-net' services or transit services. An on-net service provides access only to AT&T customers. Transit services are Internet Access Services in which AT&T will deliver traffic to virtually any point on the Internet (directly or through its peering arrangements with other ISPs). AT&T's recently developed CIP service allows customers to collocate servers in AT&T's network at locations closer to the AT&T end users who will be accessing the content on those servers. CIP customers purchase the space, power, cooling, transport, and other capabilities needed to operate their servers in AT&T's network.")] CDN interconnection with BIAS providers is evolving from interconnection at the 10 major peering cities to closer, more regional interconnection, moving the content closer to eyeballs. [Nitin Rao, Bandwidth Costs Around the World, Cloudflare (Aug. 17, 2016) ("our peering has particularly grown in smaller regional locations, closer to the end visitor, leading to an improvement in performance. This could be through private peering, or via an interconnection point such as the Midwest Internet Cooperative Exchange (MICE) in Minneapolis.")] Multiple content sources negotiate directly with access networks for the right to interconnect, exchange traffic, and access customers. Many interconnection transactions must occur. [Kang, Netflix CEO Q&A: Picking a Fight with the Internet Service Providers (2014) ("Then the danger is that it becomes like retransmission fees, which 20 years ago started as something little and today is huge, with blackouts and shutdowns during negotiations. Conceptually, if they can charge a little, then they can charge a lot, because they are the only ones serving the Comcast customers.")] The big question of interconnection has migrated from the core, between backbones, to the edge, between a CDN and an access network.

The success of CDNs has off-loaded a tremendous amount of traffic from backbones. CDNs directly deliver, over peering connections, the majority of traffic to large BIAS providers' subscribers, and this statistic is trending up. [An Assessment of IP-interconnection in the context of Network Neutrality, Draft Report for public consultation, Body of European Regulators for Electronic Communications, p. 23 (May 29, 2012) (60% of Google's traffic is transmitted directly to tier 2 or 3 networks on a peering basis, bypassing backbones and transit)] [Craig Labovitz, The New Internet, Global Peering Forum, Slide 13 (April 12, 2016) (CDN as a percentage of peak ingress traffic: "In 2009, CDN helped to offload less than ¼ traffic. Most content delivered via peering / transit. By 2015, the majority of traffic is CDN delivered from regional facility or provider based appliance." 2009: ~20%; 2011: ~35%; 2013: ~51%; 2015: ~61%)] [Nitin Rao, Bandwidth Costs Around the World, Cloudflare (Aug. 17, 2016) ("We peer around 40% of our traffic ... a significant improvement over two years ago. The share of peered traffic is expected to grow.")] Both TeleGeography and Cisco have projected robust growth for North American metro and CDN network capacity, with no projected growth for long-haul traffic. [Cisco Visual Networking Index: Forecast and Methodology, 2014-2019, Cisco 7-8 (May 27, 2015) (forecasting 25% annual growth for North American metro capacity, 35% growth for CDN capacity, and 0% growth for long-haul capacity)] [IP Transit Revenues, Volumes Dependent on Peering Trends, TeleGeography (July 8, 2014) (projecting that on a global basis transit revenue will decline from $4.6B in 2013 to $4.1B in 2020)] Backbone networks have evolved from being the heavy lifters, carrying all traffic (at some point) from one end to the other, to becoming feeder networks that deliver content to CDN servers once, with the CDNs doing the heavy lifting of delivering the requested content to eyeballs thousands of times. [Tony Tauber, Seeking Default / Control Plane, WIE 2016: 7th Workshop on Internet Economics, Slide 3 (Dec. 2016) (quoting Geoff Huston, "We have a Tier1 CDN Feeder System")]

Generally, CDNs were able to negotiate settlement-free peering arrangements with access providers. As the network evolved and large access providers grew in market power, CDNs began to pay large access providers for paid peering in order to deliver their content.

Derived From: Akamai Techs. Inc. v. Cable & Wireless Internet Servs., Inc., 344 F.3d 1186, 1190-91 (Fed. Cir. 2003)

"Generally, people share information, i.e., "content," over the Internet through web pages. To look at web pages, a computer user accesses the Internet through a browser, e.g., Microsoft Internet Explorer® or Netscape Navigator®. These browsers display web pages stored on a network of servers commonly referred to as the Internet. To access the web pages, a computer user enters into the browser a web page address, or uniform resource locator ("URL"). The URL is typically a string of characters, e.g., www.fedcir.gov. This URL has a corresponding unique numerical address, e.g., 156.119.80.10, called an Internet Protocol ("IP") address. When a user enters a URL into the browser, a domain name service ("DNS") searches for the corresponding IP address to properly locate the web page to be displayed. The DNS is administered by a separate network of computers distributed throughout, and connected to, the Internet. These computers are commonly referred to as DNS servers. In short, a DNS server translates the URL into the proper IP address, thereby informing the user's computer where the host server for the web page www.fedcir.gov is located, a process commonly referred to as "resolving." The user's computer then sends the web page request to the host server, or origin server. An origin server is a computer associated with the IP address that receives all web page requests and is responsible for responding to such requests. In the early stages of the Internet, the origin server was also the server that stored the actual web page in its entirety. Thus, in response to a request from a user, the origin server would provide the web page to the user's browser. Internet congestion problems quickly surfaced in this system when numerous requests for the same web page were received by the origin server at the same time.

This problem is exacerbated by the nature of web pages. A typical web page has a Hypertext Markup Language ("HTML") base document, or "container" document, with "embedded objects," such as graphics files, sound files, and text files. Embedded objects are separate digital computer files stored on servers that appear as part of the web page. These embedded objects must be requested from the origin server individually. Thus, each embedded object often has its own URL. To receive the entire web page, including the container document and the embedded objects, the user's web browser must request the web page and each embedded object. Thus, for example, if a particular web page has nine embedded objects, a web browser must make ten requests to receive the entire web page: one for the container document and nine for the embedded objects.

There have been numerous attempts to alleviate Internet congestion, including methods commonly referred to as "caching," "mirroring," and "redirection." "Caching" is a solution that stores web pages at various computers other than the origin server. When a request is made from a web browser, the cache computers intercept the request, facilitate retrieval of the web page from the origin server, and simultaneously save a copy of the web page on the cache computer. The next time a similar request is made, the cache computer, as opposed to the origin computer, can provide the web page to the user. "Mirroring" is another solution, similar to caching, except that the origin owner, or a third party, provides additional servers throughout the Internet that contain an exact copy of the entire web page located on the origin server. This allows a company, for example, to place servers in Europe to handle European Internet traffic.

"Redirection" is yet another solution in which the origin server, upon a request from a user, redirects the request to another server to handle the request. Redirection also often utilizes a process called "load balancing," or "server selection." Load balancing is often effected through a software package designed to locate the optimum origin servers and alternate servers for the quickest and most efficient delivery and display of the various container documents and embedded objects. Load balancing software locates the optimum server location based on criteria such as distance from the requesting location and congestion or traffic through the various servers.

Load balancing software was also known prior to the '703 patent. For example, Cisco Systems, Inc. marketed and sold a product by the name of "Distributed Director," which included server selection software that located the optimum server to provide requested information. The server selection software could be placed at either the DNS servers or the content provider servers. The Distributed Director product was disclosed in a White Paper dated February 21, 1997 and in U.S. Patent No. 6,178,160 ("the '160 patent"). Both the White Paper and the '160 patent are prior art to the '703 patent. The Distributed Director product, however, utilized this software in conjunction with a mirroring system in which a particular provider's complete web page was simultaneously stored on a number of servers located in different locations throughout the Internet. Mirroring had many drawbacks, including the need to synchronize continuously the web page on the various servers throughout the network. This added extra expenses and contributed to congestion on the Internet.

Massachusetts Institute of Technology is the assignee of the '703 patent directed to a "global hosting system" and methods for decreasing congestion and delay in accessing web pages on the Internet. Akamai Technologies, Inc. is the exclusive licensee of the '703 patent. The '703 patent was filed on May 19, 1999, and issued on August 22, 2000. The '703 patent discloses and claims web page content delivery systems and methods utilizing separate sets of servers to provide various aspects of a single web page: a set of content provider servers (origin servers), and a set of alternate servers. The origin servers provide the container document, i.e., the standard aspects of a given web page that do not change frequently. The alternate servers provide the often changing embedded objects. The '703 patent also discloses use of a load balancing software package to locate the optimum origin servers and alternate servers for the quickest and most efficient delivery and display of the various container documents and embedded objects.

. . .

C & W is the owner, by assignment, of the '598 patent. The '598 patent is directed to similar systems and methods for increasing the accessibility of web pages on the Internet. The '598 patent was filed on February 10, 1998, and issued on February 6, 2001. Thus the '598 patent is prior art to the '703 patent pursuant to 35 U.S.C. § 102(e). C & W marketed and sold products embodying the '598 patent under the name "Footprint." The relevant difference between the disclosure of the '598 patent and Akamai's preferred embodiment disclosed in the '703 patent is the location of the load balancing software. Akamai's preferred embodiment has the load balancing software installed at the DNS servers, while the '598 patent discloses installation of the load balancing software at the content provider, or origin, servers. The '598 patent does not disclose or fairly suggest that the load balancing software can be placed at the DNS servers. It is now understood that placement of the software at the DNS servers allows for load balancing during the resolving process, resulting in a more efficient system for accessing the proper information from the two server networks. Indeed, C & W later created a new product, "Footprint 2.0," the systems subject to the permanent injunction, in which the load balancing software was installed at the DNS servers as opposed to the content provider servers. Footprint 2.0 replaced C & W's Footprint product."
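The key distinction the court draws, making the load-balancing decision during DNS resolution rather than at the origin server, can be sketched as follows. This is an illustrative toy, not the patented system; the server pool, load figures, scoring formula, and function name are all hypothetical.

```python
# Toy sketch of DNS-based server selection, the approach the court
# attributes to the '703 patent's preferred embodiment: the "DNS
# server" picks an alternate (edge) server at resolve time, so the
# browser is steered to the best copy before any content request
# is sent. (Illustrative only; all servers and numbers are
# hypothetical, and the scoring formula is invented for clarity.)

# Hypothetical pool of alternate servers holding embedded objects.
EDGE_SERVERS = [
    {"ip": "192.0.2.10", "distance_ms": 12, "load": 0.80},
    {"ip": "192.0.2.20", "distance_ms": 45, "load": 0.10},
    {"ip": "192.0.2.30", "distance_ms": 30, "load": 0.35},
]

def resolve(hostname):
    """Return the IP of the 'best' edge server for this request,
    scoring each candidate by distance weighted by current load."""
    best = min(
        EDGE_SERVERS,
        key=lambda s: s["distance_ms"] * (1 + s["load"]),
    )
    return best["ip"]

# Resolution itself performs the load balancing.
print(resolve("objects.example-cdn.net"))  # -> "192.0.2.10" here
```

By contrast, placing the same selection logic at the origin server (the '598 patent's approach, per the court) would require the browser to reach the origin first and be redirected, adding a round trip before the optimal server is chosen.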

CDN Advantages

Process

Types of CDN Services

Tim Siglin, What is a Content Delivery Network (CDN)?, StreamingMedia.com (March 20, 2011).

Business Model

Generally, a CDN sells content distribution service to content creators or owners, and establishes interconnection relationships (either settlement-free or paid peering) with Internet access services. CDNs generally charge (1) a monthly recurring fee and (2) a usage charge, as sketched below.
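As a concrete illustration of that two-part charge, here is a minimal sketch; the rates are hypothetical and do not reflect any provider's actual pricing.

```python
# Minimal sketch of the two-part CDN charge described above:
# (1) a monthly recurring fee plus (2) a per-GB usage charge.
# (All rates are hypothetical.)

MONTHLY_FEE = 500.00        # hypothetical recurring platform fee
USAGE_RATE_PER_GB = 0.04    # hypothetical per-GB delivery rate

def monthly_bill(gb_delivered):
    """Total monthly charge: recurring fee plus usage."""
    return MONTHLY_FEE + USAGE_RATE_PER_GB * gb_delivered

print(f"${monthly_bill(50_000):,.2f}")  # 50 TB month -> $2,500.00
```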

Historically, CDNs offered Internet access providers (1) transit cost avoidance and (2) improved quality of service in exchange for settlement-free peering. [Kaufman Slide 19] [Higginbotham 2013 (discussing Sonic.net's motivation to agree to settlement-free peering with Netflix in order to avoid transit costs)] Since roughly 2010, large BIAS providers have been able to demand paid peering from CDNs in order to access their eyeballs.

Timeline

2014

2012

2011

2010

2008

2007

2005

2004

2001

2000

1999

1998

Patents

Commercially Available CDN Services

BIAS providers offering CDN service

Proprietary CDNs


Government Activity

Caselaw

Papers & Presentations

Statistics

News

© Cybertelecom