TCP / IP
© Cybertelecom
The ARPANet had migrated to the NCP network protocol in 1970. However, NCP had its limitations. It assumed end-to-end connectivity between the communicating hosts and did not handle the interconnection of different networks well. It also did not tolerate packet loss, coming to a halt if a packet was dropped on the floor.
Bob Kahn and Vint Cerf set to work on a new protocol that would overcome these limitations, interconnecting otherwise incompatible networks and devices. Cerf explained, "In defense settings, circumstances often prevented detailed planning for communication system deployment, and a dynamic, packet-oriented, multiple-network design provided the basis for a highly robust and flexible network to support command-and-control applications." [Cerf 1995] [See also Vanity Fair (quoting Vint Cerf, "What Bob Kahn and I did was to demonstrate that with a different set of protocols you could get an infinite number of—well, infinite is not true, but an arbitrarily large number of—different heterogeneous packet-switched nets to interconnect with each other as if it was all one big giant network. TCP is the thing that makes the Internet the Internet.")] [Abbate p 113]
In 1972, Vint Cerf (Stanford; DARPA funding; Cerf had worked on the original NCP) and Bob Kahn (ARPA) released their paper on TCP, A Protocol for Packet Network Intercommunication (distributed in 1973, published in IEEE Transactions on Communications, May 1974)
The phrase "Internet" was first used in Vint Cerf, Yogen Dalal, Carl Sunshine, Specification Of Internet Transmission Control Program, RFC 675 (1974) ("This document describes the functions to be performed by the internetwork Transmission Control Program [TCP] and its interface to programs or users that require its services. Several basic assumptions are made about process to process communication and these are listed here without further justification.").[Roberts History s 6]
Further development of TCP/IP was funded by DARPA, with three contracts to Stanford, BBN, and UCL. [ISOC] Vint Cerf and others went through several versions: TCPv1; TCPv2; TCP/IPv3 (splitting TCP into TCP and IP); and, in 1978, they settled on IP version 4. [Living Internet TCP/IP]
TCP/IP was successfully used in 1977 to link together 4 networks.
IP as originally designed had an eight-bit network field, sufficient for at most 256 networks - it was believed at the time that this would be more than enough. [Nerds2.0 p 112] [Netvalley]
- R. Kahn, Communications Principles for Operating Systems. Internal BBN memorandum, Jan. 1972.
- Vint Cerf & Robert Kahn, A Protocol for Packet Network Intercommunication, IEEE Transactions on Communications (May 1974)
- Vint Cerf, Yogen Dalal, Carl Sunshine, Specification of Internet Transmission Control Program, NWG RFC 675 (Dec. 1974)
[Roberts, Computer Science Museum p. 27 1988 ("Roberts: Well I started that whole project of the radio network at ARPA and so on, and as he came to ARPA, he and Bob started working on this whole internet thing. And the internetting thing has always seemed to me as somewhat crazy, because if you build unrelated networks without standards, you have to do something, but if you build networks the way that the commercial world would clearly build them, there is no problem. Just interconnect them cleanly, so I've never understood where it fits into the world.")]
NCP to TCP/IP (IPv4) Transition
1980 - A Fateful Decision
With DCA now managing the ARPANet, it was decided that ARPANet would be integrated into the Defense Data Network (DDN). ARPANet would have to interconnect with different networks, and to do so it needed to migrate to the new TCP/IP. DCA announced in 1980 that ARPANet would migrate from NCP to TCP/IP (the second of three network protocol migrations), and that it would do so on a tight, non-negotiable timetable. On Jan. 1, 1983, ARPANet would turn off NCP; if you wanted your packets to make it, your hosts had to be using TCP/IP. [Cerf 1160] (Just for fun, contrast this with the Network Neutrality discussion and whether it is reasonable network management for routers to filter out specific traffic.) [Great Achievements] [Netvalley]
The Internet would adopt Internet Protocol version 4, believing that this would provide an inexhaustible address space. As the inexhaustible became exhausted, the Internet has once again migrated logical protocols, this time to IPv6.
RFC 760, DOD Standard: Internet Protocol (Jan. 1980) ("This document specifies the DoD Standard Internet Protocol.")
Protocol / standards / technology migrations are not always met with enthusiasm, especially when what exists is working. In addition, as seen in the sidebar, the Internet culture of consensus driven process had already developed. But this was a top-down DOD directive with a flag day deadline - resulting in a clash of cultures.
Vint Cerf, Final Report of the Stanford University TCP Project , IEN 151 (April 1, 1980)
Vint Cerf, Comments on NCP/TCP Service Transition Strategy, NWG RFC 773 (Oct. 1980)
"At no time was the controversy [with regard to the editorial function of the RFCs] worse than it was when DoD adopted TCP/IP as its official host-to-host protocols for communications networks. In March 1982, a military directive was issued by the Under Secretary of Defense, Richard DeLauer. It simply stated that the use of TCP and IP was mandatory for DoD communications networks. Bear in mind that a military directive is not something you discuss - the time for discussion is long over when one is issued. Rather a military directive is something you DO. The ARPANET and its successor, the Defense Data Network, were military networks, so the gauntlet was down and the race was on to prove whether the new technology could do the job on a real operational network. You have no idea what chaos and controversy that little 2-page directive caused on the network. (But that's a story for another time.) However, that directive, along with RFCs 791 and 793 (IP and TCP) gave the RFCs as a group of technical documents stature and recognition throughout the world. (And yes, TCP/IP certainly did do the job!)" [Jake Feinler, RFC 2555]
1981 - The Transition Plan
March 1981: Major Joseph Haughney announces ARPANet will migrate to TCP/IP on Jan. 1, 1983
In 1981, Jon Postel released RFC 801, the NCP/TCP Transition Plan, which detailed a one-year phase-over of network assets from NCP to TCP starting in 1981. During this time, ARPANet hosts would operate in dual-stack mode, running both NCP and TCP/IP.
RFC 791, Internet Protocol: DARPA Internet Program Protocol Specification , (Sept 1981) ("This document specifies the DoD Standard Internet Protocol. This document is based on six earlier editions of the ARPA Internet Protocol Specification, and the present text draws heavily from them.")
J Postel, RFC 801, NCP/TCP Transition Plan (Nov. 1981) ("The Department of Defense has recently adopted the internet concept and the IP and TCP protocols in particular as DoD wide standards for all DoD packet networks, and will be transitioning to this architecture over the next several years. All new DoD packet networks will be using these protocols exclusively. The goal is to make a complete switch over from the NCP to IP/TCP by 1 January 1983.")
October 1, 1981: ARPANET did not forward NCP traffic
1982 - One Year Transition
Jon Postel, Vint Cerf, and others pressed forward. An awareness-raising campaign included newsletters, emails, discussions, and sometimes more drastic measures. Vint Cerf and Jon Postel turned off the NCP "network channel numbers on the ARPANET IMP's for a full day in mid 1982, so that only sites using TCP/IP could still operate" as a way of encouraging folk to prepare for the TCP/IP transition. [Living Internet TCP/IP] This was repeated in the fall.
Those who were persuaded that the transition would in fact occur dedicated all of their resources to it. Everything would have to be written for the new protocol. Staff would have to be trained. Equipment would have to be updated. Problems would have to be debugged. This would normally take more time than the technical staffs were given - resulting in the commitment to all-transition all-the-time.
Others were not as persuaded. Bob Kahn recounts that up to the very last day he would receive emails asking whether the transition was actually going to occur - or asking what the real date of the transition was. When the transition transpired, ...
1983 - ARPANet Becomes the Internet
On January 1, 1983, ARPANET migrated to IP. Other large networks followed suit and likewise migrated to IP. NCP was turned off. [Netvalley] [ISOC] [Salus p 183] [Waldrop 85]
"With the great switchover to TCP/IP, the ARPANET became the Internet." - Peter Salus [Salus p 188]
The transition was an enormous effort and was met with significant resistance and reticence. Adjectives used to describe the transition include "traumatic" and "disruptive." [Slaton and Abbate] One participant recalled: "the transition from NCP to TCP was done in a great rush, occupying virtually everyone's time 100% in the year 1982 . . . It was a major painful ordeal." [Slaton and Abbate]
Not everyone believed the transition was actually going to occur - or was prepared. "When the cutoff date arrived, only about half the sites had actually implemented a working version of TCP/IP." [Abbate p 141]
Many people hold the opinion that, absent the DCA directives, a voluntary migration to TCP/IP would never have been achieved.
- Bboard Thread about Changing the Arpanet Protocol from NCP to TCP/IP
- Ronda Hauben, From the ARPANET to the Internet: A Study of the ARPANET TCP/IP Digest and of the Role of Online Communication in the Transition from the ARPANET to the Internet
- Email from Jack Haverty to IH mailing list , NCP to TCP/IP Transition (April 27, 2009) (recounting transition)
TCP is the error correction mechanism
- Assumption is that Internet is a shared resource.
- But when demand exceeds supply, a resolution is needed. The technology community relies on technological solutions, not economic solutions.
- In the Internet, this is TCP.
- TCP has slow start, ramping up packet transmission until a packet is lost, then backing off, then ramping up again until the next packet loss, and so on
- The result is known as TCP fairness
- An individual subscriber can have multiple TCP flows. The amount of TCP flow is not based on how much you used in the past few moments
- RFC 5290 Comments on the Usefulness of Simple Best Effort Traffic
- Not efficient or optimal, but a solution that has worked well for many years
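The sawtooth behavior described above - ramp up until a packet is lost, back off, ramp up again - can be sketched in a toy simulation. All parameters here (capacity, gains, the fixed loss point) are illustrative, not drawn from any real TCP implementation:

```python
# Toy simulation of TCP's slow start plus additive-increase /
# multiplicative-decrease (AIMD) sawtooth. Purely illustrative.

def aimd_sawtooth(capacity=64, rounds=40):
    """Grow the congestion window each round; when it exceeds the
    (hypothetical) path capacity, treat that as a loss and halve it."""
    cwnd = 1.0
    ssthresh = float(capacity)   # slow-start threshold
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:          # a packet is "dropped"
            ssthresh = cwnd / 2      # remember half the window at loss
            cwnd = ssthresh          # multiplicative decrease (back off)
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start: exponential growth
        else:
            cwnd += 1                # congestion avoidance: additive increase
    return history

if __name__ == "__main__":
    # Exponential ramp, overshoot, halve, then the linear sawtooth.
    print(aimd_sawtooth()[:12])
```

The window oscillates between half the capacity and just past it; when several such flows share a link, this oscillation is what lets them converge toward roughly equal shares ("TCP fair").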
"When an Internet user opens a webpage, sends an email, or shares a document with a colleague, the user’s computer usually establishes a connection with another computer (such as a server or another end user’s computer) using, for example, the Transmission Control Protocol (TCP). For certain applications to work properly, that connection must be continuous and reliable. Computers linked via a TCP connection monitor that connection to ensure that packets of data sent from one user to the other over the connection “arrive in sequence and without error,” at least from the perspective of the receiving computer. If either computer detects that “something seriously wrong has happened within the network,” it sends a “reset packet” or “RST packet” to the other, signaling that the current connection should be terminated and a new connection established “if reliable communication is to continue.”" [Comcast Order 2008 para 3]
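The reset behavior the Order describes can be observed with ordinary sockets. In the sketch below (Python, loopback only), a server aborts its side of a TCP connection: closing a socket with SO_LINGER set to zero makes common TCP stacks send a RST rather than a normal FIN teardown, which surfaces at the peer as a connection-reset error. The SO_LINGER trick is an implementation detail of typical stacks, not anything taken from the quoted Order.

```python
import socket
import struct

def rst_demo():
    """Abort a loopback TCP connection and report how the peer sees it."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    conn, _ = server.accept()

    # linger on, timeout 0 -> close() aborts with a RST instead of a FIN
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()

    try:
        client.recv(1024)
        return "closed normally (FIN)"
    except ConnectionResetError:
        return "connection reset (RST)"
    finally:
        client.close()
        server.close()

if __name__ == "__main__":
    print(rst_demo())
```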
Transmission Control Protocol, RFC 793, Sept. 1981 ("The Transmission Control Protocol (TCP) is intended for use as a highly reliable host-to-host protocol between hosts in packet-switched computer communication networks, and in interconnected systems of such networks. ")
“TCP is the dominant transport-layer protocol on the Internet today, carrying more than 90% of all data traversing backbone links.” Letter from Jack Zinman, General Attorney, AT&T Services, Inc., to Marlene H. Dortch, Secretary, FCC, Attach. at 1 (Apr. 25, 2008)
TCP would be split into TCP and IP. That facilitated real-time voice applications. TCP's error control, which caused lost packets to be resent, was both unnecessary for real-time voice and in fact got in the way. Separating TCP and IP allowed for different error control protocols, such as UDP, which, if a packet is not delivered on time, simply drops it rather than retransmitting. [Vint Cerf, How the Internet Came to Be, NetValley Nov 20, 2006] "IP would be responsible for routing packets across multiple networks and TCP for converting messages into streams of packets and reassembling them into messages with few errors despite loss of packets by the underlying network." [Denning 4] [Vint Cerf, TCP/IP Co Designer, Living Internet] [ISOC] [Roberts, Net Chronology]
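The difference the split makes can be seen with ordinary sockets. The sketch below sends a single UDP datagram over loopback: no connection, no acknowledgment, no retransmission. (Python; the payload and port are arbitrary, nothing here is specific to any real service.)

```python
import socket

def fire_and_forget():
    """Send one UDP datagram over loopback. UDP offers no connection,
    no ACK, and no retransmission - were the datagram lost, it would
    simply be gone, which is exactly what real-time voice wants."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))   # OS picks a free port
    receiver.settimeout(2.0)

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"voice sample", receiver.getsockname())

    data, _ = receiver.recvfrom(2048)  # on loopback this normally arrives
    sender.close()
    receiver.close()
    return data

if __name__ == "__main__":
    print(fire_and_forget())
```

A TCP socket carrying the same bytes would instead hold them in a retransmission queue until acknowledged - the behavior that gets in the way of real-time delivery.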
Van Jacobson included congestion control in the Berkeley UNIX TCP. Van Jacobson and Michael J. Karels, Congestion Avoidance and Control (November 1988):
Computer networks have experienced an explosive growth over the past few years and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): The 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of 'wrong' behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a 'packet conservation' principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.
In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was misbehaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes"...
See also Bartek Peter Wydrowski, Techniques in Internet Congestion Control February 2003 ("'The initial response to ARPANET's congestion collapse problem was to increase the capacity of the network. This helped temporarily, but the ARPANET continued to suffer congestion collapses until a strategy to control the load of packets entering the network was developed. In 1988 Van Jacobson enhanced the famous Transport control protocol (TCP) so that the transmission rate was responsive to the level of network congestion. TCP was made to reduce the rate of transmission of hosts when it sensed the network load was nearing congestion collapse. Since the introduction of this enhanced TCP, congestion collapse did not reoccur.'")
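One concrete piece of Jacobson's 1988 enhancements is the retransmission-timeout estimator, which tracks both a smoothed round-trip time and its variance (the form later standardized in RFC 6298). A minimal sketch, using the classic 1/8 and 1/4 gains; the RTT samples are made up for illustration:

```python
# Sketch of the Jacobson/Karels retransmission-timeout estimator
# (later standardized in RFC 6298). The spike at sample 4 widens the
# variance term, so the timer backs off instead of spuriously firing.

def rto_estimator(samples, alpha=1/8, beta=1/4):
    """Yield an RTO after each RTT measurement: SRTT + 4 * RTTVAR."""
    srtt = rttvar = None
    for rtt in samples:
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2            # first measurement
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
        yield srtt + 4 * rttvar

if __name__ == "__main__":
    rtts = [100, 120, 95, 300, 110]                # ms, hypothetical
    for rto in rto_estimator(rtts):
        print(round(rto, 1))
```

Tracking variance as well as the mean was the key fix: the pre-1988 estimator used a fixed multiple of the smoothed RTT and retransmitted too eagerly under load, feeding the very congestion it should have been avoiding.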
Internet Design Principles
"These and other documents embody some value judgments and reflect the fundamental political and ethical beliefs of the scientists and engineers who designed the Internet: the Internet architecture reflects their desire for as much openness, sharing of computing and communications resources, and broad access and use as possible. For example, the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity - but the technology can be used permissively or conservatively, and recent trends show both. Another value underlying the design is a preference for simplicity over complexity." - The Internet's Coming of Age, Computer Science and Telecommunications Board, National Research Council, p. 35 (National Academy Press 2001)
CSTB, Realizing the Info Future p. 30-31 1994 (" the Internet has given rise to a phenomenon in which services of all kinds spring up suddenly on the network without anyone directing or managing their development....Such spontaneous generation of unforeseen yet enormously popular services—which is encouraged by the Internet as a distributed information and communications system—is a constant source of pleasant surprise today and heralds future potential as we move into an era of truly interactive information via the NII.")
CSTB, Realizing the Info Future p. 34 1994 ("the Internet's openness, a characteristic that has been key to its unprecedented success. It therefore characterizes its vision in terms of an Open Data Network (ODN). A national information infrastructure should be capable of carrying information services of all kinds, from suppliers of all kinds, to customers of all kinds, across network service providers of all kinds, in a seamless accessible fashion. The long-range goal is to provide the capability of universal access to universal service, ")
CSTB, Realizing the Info Future p. 45 1994 ("Decentralized operation. If the network is composed of many different regions operated by different providers, the control, management, operation, monitoring, measurement, maintenance, and so on must necessarily be very decentralized. This decentralization implies a need for a framework for interaction among the parts, a framework that is robust and that supports cooperation among mutually suspicious providers. Decentralization can be seen as an aspect of large scale, and indeed a large system must be decentralized to some extent. But the implications of highly decentralized operations are important enough to be noted separately, as decentralization affects a number of points in this chapter.")
"Four ground rules were critical to Kahn's early thinking:
- "Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
- "Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
- "Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
- "There would be no global control at the operations level." [ISOC]
Bob Kahn: "The idea of the Internet was that you would have multiple networks all under autonomous control. By putting this box in the middle, which we eventually called a gateway, it would allow for the federation of arbitrary numbers of networks without the need for any change made to any particular network. So if BBN had one network and AT&T had another, it would be possible to just plug the two together with a [gateway] box in the middle, and they wouldn't have to do anything to make that work other than to agree to let their networks be plugged in." [Nerds p 111]
"The Internet Protocol is designed to interconnect packet-switched communication subnetworks to form an internetwork. The IP transmits blocks of data, called internet datagrams, from sources to destinations throughout the internet. Sources and destinations are hosts located on either the same subnetwork or connected subnetworks. The IP is purposely limited in scope to provide the basic functions necessary to deliver a block of data. Each internet datagram is an independent entity unrelated to any other internet datagram. The IP does not create connections or logical circuits and has no mechanism to promote data reliability, flow control, sequencing, or other services commonly found in virtual circuit protocols." Military Standard Internet Protocol MIL-STD-1777 Sec. 4.1 (DOD DISA Aug 12, 1983)
"The Internet Protocol is designed for use in interconnected systems of packet-switched computer communication networks. Such a system has been called a "catenet" . The internet protocol provides for transmitting blocks of data called datagrams from sources to destinations, where sources and destinations are hosts identified by fixed length addresses. The internet protocol also provides for fragmentation and reassembly of long datagrams, if necessary, for transmission through "small packet" networks." RFC 791, Internet Protocol: DARPA Internet Program Protocol Specification, Sec. 1.1 (Sept 1981) . See also RFC 760, DOD Standard: Internet Protocol Sec. 1.1 (Jan. 1980) ("This document specifies the DoD Standard Internet Protocol.") (same)
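The fixed-length header that RFC 791 defines can be built and checked in a few lines. The sketch below packs the 20-byte header (no options) and computes the standard ones'-complement header checksum (RFC 1071); the addresses, TTL, and protocol number are arbitrary examples:

```python
import socket
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src, dst, payload_len, ttl=64, proto=17):
    """Pack the fixed 20-byte IPv4 header of RFC 791 (no options)."""
    fields = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,        # version 4, IHL = 5 32-bit words
        0,                   # type of service
        20 + payload_len,    # total length
        0, 0,                # identification, flags/fragment offset
        ttl, proto,          # time to live, protocol (17 = UDP)
        0,                   # checksum placeholder
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )
    # Splice the real checksum into bytes 10-11.
    return fields[:10] + struct.pack("!H", checksum(fields)) + fields[12:]

if __name__ == "__main__":
    hdr = build_ipv4_header("10.0.0.1", "192.0.2.7", payload_len=8)
    # A valid header re-checksums to zero - the receiver's validity test.
    print(len(hdr), hex(hdr[0]), checksum(hdr))
```

Note what is absent: no connection state, no sequence numbers, no acknowledgments - each datagram really is "an independent entity," exactly as both specifications above describe.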
Brian Carpenter, RFC 1958, Architectural Principles of the Internet (June 1996) " 2.1 Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network. The current exponential growth of the network seems to show that connectivity is its own reward, and is more valuable than any individual application such as mail or the World-Wide Web. This connectivity requires technical cooperation between service providers, and flourishes in the increasingly liberal and competitive commercial telecommunications environment. The key to global connectivity is the inter-networking layer. The key to exploiting this layer over diverse hardware providing global connectivity is the "end to end argument"."
"3.1 Heterogeneity is inevitable and must be supported by design. Multiple types of hardware must be allowed for, e.g. transmission speeds differing by at least 7 orders of magnitude, various computer word lengths, and hosts ranging from memory-starved microprocessors up to massively parallel supercomputers. Multiple types of application protocol must be allowed for, ranging from the simplest such as remote login up to the most complex such as distributed databases.