Cybertelecom
Federal Internet Law & Policy
An Educational Project
TCP / IP




The ARPANet had migrated to the NCP network protocol in 1970. However, NCP had its limitations. It assumed end-to-end connectivity between the communicating hosts and did not handle interconnecting different networks well. It also did not tolerate packet loss, coming to a halt if a packet was dropped on the floor.

Bob Kahn and Vint Cerf set to work on a new protocol that would overcome these limitations, interconnecting otherwise incompatible networks and devices. Cerf explained, "In defense settings, circumstances often prevented detailed planning for communication system deployment, and a dynamic, packet-oriented, multiple-network design provided the basis for a highly robust and flexible network to support command-and-control applications." [Cerf 1995] [See also Vanity Fair (quoting Vint Cerf, "What Bob Kahn and I did was to demonstrate that with a different set of protocols you could get an infinite number of—well, infinite is not true, but an arbitrarily large number of—different heterogeneous packet-switched nets to interconnect with each other as if it was all one big giant network. TCP is the thing that makes the Internet the Internet.")] [Abbate p 113]

In 1973, Vint Cerf (Stanford; DARPA funding; Cerf had worked on the original NCP) and Bob Kahn (ARPA) wrote their paper on TCP, A Protocol for Packet Network Intercommunication (distributed in 1973, published in IEEE Transactions on Communications in 1974).

The phrase "Internet" was first used in Vint Cerf, Yogen Dalal, Carl Sunshine, Specification Of Internet Transmission Control Program, RFC 675 (1974) ("This document describes the functions to be performed by the internetwork Transmission Control Program [TCP] and its interface to programs or users that require its services. Several basic assumptions are made about process to process communication and these are listed here without further justification.").[Roberts History s 6]

Further development of TCP/IP was funded by DARPA, with three contracts to Stanford, BBN, and UCL. [ISOC] Vint Cerf and others went through several versions: TCP v1, TCP v2, TCP/IP v3 (splitting TCP into TCP and IP), and, in 1978, they settled on version 4 (IPv4). [Living Internet TCP/IP]

TCP/IP was successfully used in 1977 to link together three networks (the packet radio network, the ARPANet, and the satellite network SATNET) in a live demonstration.

IP as originally designed had an eight-bit network field, which would be sufficient for at most 256 networks - it was believed at the time that this would be more than enough. [Nerds2.0 p 112] [Netvalley]

[Roberts, Computer Science Museum p. 27 1988 ("Roberts: Well I started that whole project of the radio network at ARPA and so on, and as he came to ARPA, he and Bob started working on this whole internet thing. And the internetting thing has always seemed to me as somewhat crazy, because if you build unrelated networks without standards, you have to do something, but if you build networks the way that they commercial world would clearly build them, there is no problem. Just interconnect them cleanly, so I've never understood where it fits into the world.")]


NCP to TCP/IP (IPv4) Transition

1980 - A Fateful Decision

With DCA now managing the ARPANet, it was decided that ARPANet would be integrated into the Defense Data Network (DDN). ARPANet would have to interconnect with different networks, and in order to do so, ARPANet needed to migrate to the new TCP/IP. DCA announced in 1980 that ARPANet would migrate from NCP to TCP/IP (the second of three network protocol migrations), and that it would do so on a tight and non-negotiable timetable. On Jan. 1, 1983, ARPANet would turn off NCP; if a host wanted its packets to get through, it had to be running TCP/IP. [Cerf 1160] (Just for fun, contrast this to the Network Neutrality discussion and whether it is reasonable network management for routers to filter out specific traffic.) [Great Achievements] [Netvalley]

The Internet would adopt Internet Protocol version 4, believing that this would provide an inexhaustible address space. As the inexhaustible became exhausted, the Internet has once again migrated logical protocols, this time to IPv6.

RFC 760, DOD Standard: Internet Protocol (Jan. 1980) ("This document specifies the DoD Standard Internet Protocol.")

Protocol, standards, and technology migrations are not always met with enthusiasm, especially when what exists is working. In addition, as seen in the sidebar, the Internet culture of consensus-driven process had already developed. But this was a top-down DOD directive with a flag-day deadline - resulting in a clash of cultures.

Vint Cerf, Final Report of the Stanford University TCP Project , IEN 151 (April 1, 1980)

Vint Cerf, Comments on NCP/TCP Service Transition Strategy, NWG RFC 773 (Oct. 1980)

"At no time was the controversy [with regard to the editoral function of the RFCs] worse than it was when DoD adopted TCP/IP as its official host-to-host protocols for communications networks. In March 1982, a military directive was issued by the Under Secretary of Defense, Richard DeLauer. It simply stated that the use of TCP and IP was mandatory for DoD communications networks. Bear in mind that a military directive is not something you discuss - the time for discussion is long over when one is issued. Rather a military directive is something you DO. The ARPANET and its successor, the Defense Data Network, were military networks, so the gauntlet was down and the race was on to prove whether the new technology could do the job on a real operational network. You have no idea what chaos and controversy that little 2-page directive caused on the network. (But that's a story for another time.) However, that directive, along with RFCs 791 and 793 (IP and TCP) gave the RFCs as a group of technical documents stature and recognition throughout the world. (And yes, TCP/IP certainly did do the job!) " [Jake Feinler, RFC 2555]

1981 - The Transition Plan

March 1981: Major Joseph Haughney announced that ARPANet would migrate to TCP/IP on Jan. 1, 1983. [Abbate p 140]

New ARPANET Protocols

The Office of the Secretary of Defense has directed that a set of DOD Standard Protocols be used on all Department of Defense communications networks. This directive applies to the ARPANET. The ARPANET Host protocols will be replaced over the next three years with the new DOD Standard Protocol set. This has a direct impact on host operating systems and some applications programs that use the ARPANET. The ARPANET Network Control Program (NCP) will be replaced by two DOD protocols, the DOD Standard Transmission Control Protocol (TCP) and the Internet Protocol (IP). ARPANET FTP and TELNET protocols will also be updated and standardized. Planning for this transition is still under development. The NIC plans to publish for DCA a new Protocol Handbook by the end of this year, which will provide details on the new protocol specifications.

[ARPANET News-1]

In 1981, Jon Postel released RFC 801, the NCP/TCP Transition Plan, which detailed a one-year phase-over of network hosts from NCP to TCP/IP during 1982. During this time, ARPANet hosts would operate in dual-stack mode, running both NCP and TCP/IP.

RFC 791, Internet Protocol: DARPA Internet Program Protocol Specification , (Sept 1981) ("This document specifies the DoD Standard Internet Protocol. This document is based on six earlier editions of the ARPA Internet Protocol Specification, and the present text draws heavily from them.")

J Postel, RFC 801, NCP/TCP Transition Plan (Nov. 1981) ("The Department of Defense has recently adopted the internet concept and the IP and TCP protocols in particular as DoD wide standards for all DoD packet networks, and will be transitioning to this architecture over the next several years. All new DoD packet networks will be using these protocols exclusively. The goal is to make a complete switch over from the NCP to IP/TCP by 1 January 1983.")

October 1, 1981: ARPANET did not forward NCP traffic.

1982 - One Year Transition

Jon Postel, Vint Cerf, and others pressed forward. An awareness-raising campaign followed, which included newsletters, emails, discussions, and sometimes more drastic measures. Vint Cerf and Jon Postel turned off the NCP "network channel numbers on the ARPANET IMP's for a full day in mid 1982, so that only sites using TCP/IP could still operate" as a way of encouraging folks to prepare for the TCP/IP transition. [Living Internet TCP/IP] This was repeated in the fall.

Those who were persuaded that the transition would in fact occur dedicated all of their resources to it. Everything would have to be rewritten for the new protocol. Staff would have to be trained. Equipment would have to be updated. Problems would have to be debugged. This would normally take more time than the technical staffs were given - resulting in a commitment to all-transition, all-the-time.

Others were not as persuaded. Bob Kahn recounts that up to the very last day he would receive emails asking whether the transition was actually going to occur - or asking what the real date of the transition was. When the transition transpired, ......

1983 - ARPANet Becomes the Internet

On January 1, 1983, ARPANET migrated to TCP/IP. Other large networks followed suit and likewise migrated. NCP was turned off. [Netvalley] [ISOC] [Salus p 183] [Waldrop 85]

Ronda Hauben, From the ARPANET to the Internet: A Study of the ARPANET TCP/IP Digest and of the Role of Online Communication in the Transition from the ARPANET to the Internet

"With the great switchover to TCP/IP, the ARPANET became the Internet." - Peter Salus [Salus p 188]

The transition was an enormous effort and was met with significant resistance and reticence. Adjectives used to describe the transition include "traumatic" and "disruptive." [Slaton and Abbate] One participant recalled, "the transition from NCP to TCP was done in a great rush, occupying virtually everyone's time 100% in the year 1982 . . . It was a major painful ordeal." [Slaton and Abbate]

Not everyone believed the transition was actually going to occur - or was prepared. "When the cutoff date arrived, only about half the sites had actually implemented a working version of TCP/IP." [Abbate p 141]

Many people hold the opinion that without the directives from DCA, the migration to IPv4 would never have been achieved voluntarily.

Additional References


See also Layered Model

TCP / IP Basics

"TCP/IP is widely used throughout the world to provide network communications. TCP/IP communications are composed of four layers that work together. When a user wants to transfer data across networks, the data is passed from the highest layer through intermediate layers to the lowest layer, with each layer adding additional information. The lowest layer sends the accumulated data through the physical network; the data is then passed up through the layers to its destination. Essentially, the data produced by a layer is encapsulated in a larger container by the layer below it." [NIST SP 800-86 Sec. 6.1]

"Application Layer. This layer sends and receives data for particular applications, such as Domain Name System (DNS), Hypertext Transfer Protocol (HTTP), and Simple Mail Transfer Protocol (SMTP).
"Transport Layer. This layer provides connection-oriented or connectionless services for transporting application layer services between networks. The transport layer can optionally ensure the reliability of communications. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are commonly used transport layer protocols.
"Internet Protocol Layer (also known as Network Layer). This layer routes packets across networks. IP is the fundamental network layer protocol for TCP/IP. Other commonly used protocols at the network layer are Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP). "
"Hardware Layer (also known as Data Link Layer). This layer handles communications on the physical network components. The best known data link layer protocol is Ethernet."

"The four TCP/IP layers work together to transfer data between hosts. Each layer encapsulates the previous layers." [NIST SP 800-86 Sec. 6.1]

Application Layer

"The application layer enables applications to transfer data between an application server and client. An example of an application layer protocol is Hypertext Transfer Protocol (HTTP), which transfers data between a Web server and a Web browser. Other common application layer protocols include Domain Name System (DNS), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Simple Network Management Protocol (SNMP). There are hundreds of unique application layer protocols in common use, and many more that are not so common. 83 Regardless of the protocol in use, application data is generated and then passed to the transport layer for further processing. Section 7 focuses on application-related data collection, examination, and analysis" [NIST SP 800-86 Sec. 6.1.1]

Transport Layer

"The transport layer is responsible for packaging data so that it can be transmitted between hosts. After the transport layer has encapsulated application data, the resulting logical units are referred to as packets. (A packet can also be created without application dataófor example, when a connection is first negotiated.) Each packet contains a header, which is composed of various fields that specify characteristics of the transport protocol in use; optionally, packets may also contain a payload, which holds the application data."

"Most applications that communicate over networks rely on the transport layer to ensure reliable delivery of data. Generally, this is accomplished by using the TCP transport layer protocol, which establishes a connection between two hosts and then makes a best effort to ensure the reliable transfer of data over that connection. Each TCP packet includes a source port and a destination port. One of the ports is associated with a server application on one system; the other port is associated with a corresponding client application on the other system. Client systems typically select any available port number for application use, whereas server systems normally have a static port number dedicated to each application. Although many server ports are usually used by particular applications (e.g., FTP servers at port 21, HTTP servers at port 80), many server applications can be run from any port number, so it is unwise to assume that network traffic contains data from a certain application solely on the basis of server port number"

"When loss of some application data is not a concern (e.g., streaming audio, video), the User Datagram Protocol (UDP) is typically used. UDP involves less overhead and latency than TCP because UDP is connectionless; one host simply sends data to another host without any preliminary negotiations. UDP is also used for applications that are willing to take responsibility for ensuring reliable delivery of data, such as DNS, and applications that are intended for use only on local area networks, such as Dynamic Host Configuration Protocol (DHCP) and SNMP. As is the case with TCP, each UDP packet contains a source port and a destination port. Although UDP and TCP ports are very similar, they are distinct from each other and are not interchangeable. Some applications (such as DNS) can use both TCP and UDP ports; although such applications typically use the same number for the TCP port and the UDP port, this is not required. " [NIST SP 800-86 Sec. 6.1.2]

TCP

TCP is the Internet's error correction and congestion control mechanism.

  • The assumption is that the Internet is a shared resource.
  • But when demand exceeds supply, some resolution is needed. The technology community relies on technological solutions, not economic solutions.
  • In the Internet, this is TCP.
  • TCP has slow start, ramping up packet transmission until a packet is lost, then backing off, then ramping up again until the next packet loss, and so on (a toy sketch of this ramp-and-back-off behavior appears after this list).
  • The result is known as "TCP fairness."
  • An individual subscriber can have multiple TCP flows. The amount of TCP flow is not based on how much was used in the past few moments.
  • RFC 5290 Comments on the Usefulness of Simple Best Effort Traffic
  • Not efficient or optimal, but a solution that has worked well for many years.
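
The sawtooth behavior described above can be sketched with a toy simulation. This is a simplified model of TCP-style additive-increase/multiplicative-decrease with a crude slow-start phase, not an implementation of any real TCP stack; the capacity figure, threshold, and loss rule are invented for illustration.

    # Toy model of TCP-style congestion control: ramp the congestion window up
    # until a (simulated) loss, cut it back, and repeat. The numbers are invented.

    LINK_CAPACITY = 64        # packets per round trip the "network" can carry (assumed)
    SSTHRESH = 32             # slow-start threshold (assumed)

    cwnd = 1.0                # congestion window, in packets
    for rtt in range(40):
        lost = cwnd > LINK_CAPACITY           # pretend loss occurs when we exceed capacity
        if lost:
            cwnd = max(cwnd / 2, 1.0)         # multiplicative decrease on loss
        elif cwnd < SSTHRESH:
            cwnd *= 2                         # slow start: exponential growth
        else:
            cwnd += 1                         # congestion avoidance: additive increase
        print(f"rtt {rtt:2d}: cwnd = {cwnd:5.1f} {'(loss)' if lost else ''}")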

"When an Internet user opens a webpage, sends an email, or shares a document with a colleague, the user’s computer usually establishes a connection with another computer (such as a server or another end user’s computer) using, for example, the Transmission Control Protocol (TCP).1 For certain applications to work properly, that connection must be continuous and reliable. Computers linked via a TCP connection monitor that connection to ensure that packets of data sent from one user to the other over the connection “arrive in sequence and without error,” at least from the perspective of the receiving computer.2 If either computer detects that “something seriously wrong has happened within the network,” it sends a “reset packet” or “RST packet” to the other, signaling that the current connection should be terminated and a new connection established “if reliable communication is to continue.”3" [Comcast Order 2008 para 3]

Transmission Control Protocol, RFC 793, Sept. 1981 ("The Transmission Control Protocol (TCP) is intended for use as a highly reliable host-to-host protocol between hosts in packet-switched computer communication networks, and in interconnected systems of such networks. ")

“TCP is the dominant transport-layer protocol on the Internet today, carrying more than 90% of all data traversing backbone links.” Letter from Jack Zinman, General Attorney, AT&T Services, Inc., to Marlene H. Dortch, Secretary, FCC, Attach. at 1 (Apr. 25, 2008) 

TCP would be split into TCP and IP. That facilitated real-time voice applications. TCP's error control, which caused lost packets to be resent, was both unnecessary for real-time voice and in fact got in the way. Separating TCP and IP allowed different transport protocols, such as UDP, which simply drops a packet that is not delivered on time rather than retransmitting it. [Vint Cerf, How the Internet Came to Be, NetValley Nov 20, 2006] "IP would be responsible for routing packets across multiple networks and TCP for converting messages into streams of packets and reassembling them into messages with few errors despite loss of packets in the underlying network." [Denning 4] [Vint Cerf, TCP/IP Co Designer, Living Internet] [ISOC] [Roberts, Net Chronology]

Congestion Collapse

Van Jacobson added congestion control to TCP in Berkeley UNIX. Van Jacobson and Michael J. Karels, Congestion Avoidance and Control (November 1988):

Computer networks have experienced an explosive growth over the past few years and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): The 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of 'wrong' behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a 'packet conservation' principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.

In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was misbehaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes"...

See also Bartek Peter Wydrowski, Techniques in Internet Congestion Control (February 2003) ("The initial response to ARPANET's congestion collapse problem was to increase the capacity of the network. This helped temporarily, but the ARPANET continued to suffer congestion collapses until a strategy to control the load of packets entering the network was developed. In 1988 Van Jacobson enhanced the famous Transmission Control Protocol (TCP) [57] so that the transmission rate was responsive to the level of network congestion. TCP was made to reduce the rate of transmission of hosts when it sensed the network load was nearing congestion collapse. Since the introduction of this enhanced TCP, congestion collapse did not reoccur.")

IP Layer

"The IP layercan also be called the network layer, because it is responsible for handling the addressing and routing of data that it receives from the transport layer. The IP header contains a field called IP Version, which indicates which version of IP is in use. Typically this is set to 4 for IPv4; but the use of IPv6 is increasing, so this field may be set to 6 instead. 84 Other significant IP header fields are as follows:"

"The IP layeris also responsible for providing error and status information involving the addressing and routing of data; it does this with ICMP. ICMP is a connectionless protocol that makes no attempt to guarantee that its error and status messages are delivered. Because it is designed to transfer limited information, not application data, ICMP does not have ports; instead, it has message types, which indicate the purpose of each ICMP message. 87 Some message types also have message codes, which can be thought of as subtypes. For example, the ICMP message type Destination Unreachable has several possible message codes that indicate what is unreachable (e.g., network, host, protocol). Most ICMP messages are not intended to elicit a response."

"IP addresses are often used through a layer of indirection. When people need to access a resource on a network, such as a Web server or e-mail server, they typically enter the serverís name, such as www.nist.gov, rather than the serverís IP address. The name, also known as a domain name, is mapped to the IP address through the DNS application layer protocol. The primary reason for entering a domain name instead of an IP address is that the former is generally easier for people to remember. In addition, where a domain name is likely to remain the same, a hostís IP address can change over time; by referencing a host by domain name, which is then mapped to the hostís IP address, users can reach the host no matter what IP address the host is currently using." [NIST SP 800-86 Sec. 6.1.3]

Hardware Layer

"As the name implies, the hardware layer involves the physical components of the network, including cables, routers, switches, and NIC. The hardware layer also includes various hardware layer protocols; Ethernet is the most widely used of these protocols. Ethernet relies on the concept of a MAC address, which is a unique 6-byte value (such as 00-02-B4-DA-92-2C) that is permanently assigned to a particular NIC. 89 Each frame contains two MAC addresses, which indicate the MAC address of the NIC that just routed the frame and the MAC address of the next NIC to which the frame is being sent. As a frame passes through networking equipment (such as routers and firewalls) on its way between the original source host and the final destination host, the MAC addresses are updated to refer to the local source and destination. Several separate hardware layer transmissions may be linked together within a single IP layer transmission."

"In addition to the MAC addresses, each frame also contains an EtherType value, which indicates the protocol that the frameís payload contains (typically IP or Address Resolution Protocol [ARP]). 90 When IP is used, each IP address maps to a particular MAC address. (Because multiple IP addresses can map to a single MAC address, a MAC address does not necessarily uniquely identify an IP address.)" [NIST SP 800-86 Sec. 6.1.4]


Internet Design Principles

"These and other documents embody some value judgments and reflect the fundamental political and ethical beliefs of the scientists and engineers who designed the Internet: the Internet architecture reflects their desire for as much openness, sharing of computing and communications resources, and broad access and use as possible. For example, the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity - but the technology can be used permissively or conservatively, and recent trends show both. Another value underlying the design is a preference for simplicity over complexity." - The Internet's Coming of Age, Computer Science and Telecommunications Board, National Research Council, p. 35 (National Academy Press 2001)

CSTB, Realizing the Info Future p. 30-31 1994 (" the Internet has given rise to a phenomenon in which services of all kinds spring up suddenly on the network without anyone directing or managing their development....Such spontaneous generation of unforeseen yet enormously popular services—which is encouraged by the Internet as a distributed information and communications system—is a constant source of pleasant surprise today and heralds future potential as we move into an era of truly interactive information via the NII.")

CSTB, Realizing the Info Future p. 34 1994 ("the Internet's openness, a characteristic that has been key to its unprecedented success. It therefore characterizes its vision in terms of an Open Data Network (ODN). A national information infrastructure should be capable of carrying information services of all kinds, from suppliers of all kinds, to customers of all kinds, across network service providers of all kinds, in a seamless accessible fashion. The long-range goal is to provide the capability of universal access to universal service, ")

CSTB, Realizing the Info Future p. 45 1994 ("Decentralized operation. If the network is composed of many different regions operated by different providers, the control, management, operation, monitoring, measurement, maintenance, and so on must necessarily be very decentralized. This decentralization implies a need for a framework for interaction among the parts, a framework that is robust and that supports cooperation among mutually suspicious providers. Decentralization can be seen as an aspect of large scale, and indeed a large system must be decentralized to some extent. But the implications of highly decentralized operations are important enough to be noted separately, as decentralization affects a number of points in this chapter.")

"Four ground rules were critical to Kahn's early thinking:

Bob Kahn: "The idea of the Internet was that you would have multiple networks all under autonomous control. By putting this box in the middle, which we eventually called a gateway, it would allow for the federation of arbitrary numbers of networks without the need for any change made to any particular network. So if BBN had one network and AT&T had another, it would be possible to just plug the two together with a [gateway] box in the middle, and they wouldn't have to do anything to make that work other than to agree to let their networks be plugged in." [Nerds p 111]

"The Internet Protocol is designed to interconnect packet-switched communication subnetworks to form an internetwork. The IP transmits blocks of data, called internet datagrams, from sources to destinations throughout the internet. Sources and destinations are hosts located on either the same subnetwork or connected subnetworks. The IP is purposely limited in scope to provide the basic functions necessary to deliver a block of data. Each internet datagram is an independent entity unrelated to any other internet datagram. The IP does not create connections or logical circuits and has no mechanism to promote data reliability, flow control, dequensing, or other services commonly found in virtual circuit protocols." Military Standard Internet Protocol MIL-STD-1777 Sec. 4.1 (DOD DISA Aug 12, 1983)

"The Internet Protocol is designed for use in interconnected systems of packet-switched computer communication networks. Such a system has been called a "catenet" [1]. The internet protocol provides for transmitting blocks of data called datagrams from sources to destinations, where sources and destinations are hosts identified by fixed length addresses. The internet protocol also provides for fragmentation and reassembly of long datagrams, if necessary, for transmission through "small packet" networks." RFC 791, Internet Protocol: DARPA Internet Program Protocol Specification, Sec. 1.1 (Sept 1981) . See also RFC 760, DOD Standard: Internet Protocol Sec. 1.1 (Jan. 1980) ("This document specifies the DoD Standard Internet Protocol.") (same)

Brian Carpenter, RFC 1958, Architectural Principles of the Internet (June 1996) " 2.1 Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network. The current exponential growth of the network seems to show that connectivity is its own reward, and is more valuable than any individual application such as mail or the World-Wide Web. This connectivity requires technical cooperation between service providers, and flourishes in the increasingly liberal and competitive commercial telecommunications environment. The key to global connectivity is the inter-networking layer. The key to exploiting this layer over diverse hardware providing global connectivity is the "end to end argument"."
"3.1 Heterogeneity is inevitable and must be supported by design. Multiple types of hardware must be allowed for, e.g. transmission speeds differing by at least 7 orders of magnitude, various computer word lengths, and hosts ranging from memory-starved microprocessors up to massively parallel supercomputers. Multiple types of application protocol must be allowed for, ranging from the simplest such as remote login up to the most complex such as distributed databases.
