Birth of the ARPAnet :: 1969
In January 1969 ARPA awarded BBN the contract to build the first Interface Message Processors (IMPs), the packet routers of the deliberately simple ("stupid") sub-network. [Roberts, Net Chronology] [Hauben] [History of Telenet p 29] People at BBN who had prepared the proposal included Frank Heart, Robert Kahn, Will Crowther, Dave Walden, Hawley Rising, Ben Barker, and Severo Ornstein. [Heart 1990] [Vanity Fair (quoting Larry Roberts, "I chose [BBN] based on the team structure and the people. I just felt that the BBN team was less structured. There wouldn't be as many middle managers and so on.")]
"BBN designed the IMP to accommodate no more than 64 computers and only one network." [Kleinrock] The BBN team was headed by Robert Kahn. [Kahn] [Babbage 22 (installation of first IMP)] The IMPs were to be delivered to UCLA, SRI, UCSB, and the University of Utah. BBN was located near Honeywell in Boston and would use the Honeywell H-516 for the first IMPs. [Abbate p 57] [RFC 1000] [Kleinrock 1996] [Hauben] [Heart 1990]
"These sites were running a Sigma 7 with the SEX operating system, an SDS 940 with the Genie operating system, an IBM 360/75 with OS/MVT (or perhaps OS/MFT), and a DEC PDP-10 with the Tenex operating system. Options existed for additional nodes if the first experiments were successful." [RFC 1000]
Sen. Ted Kennedy sent BBN a telegram informing it that it had won the contract, saying that BBN was "to be congratulated on winning the contract for the interfaith message processor" - congratulating BBN on its ecumenical efforts. [Nerds2.0 p 80] [No Credit] [How the Web was Born p 27] [Roots of the Internet] [Vanity Fair]
Severo Ornstein of BBN recalled:
I talked to Frank about it one night and he said, "Well, here's this RFQ, from ARPA. They want to build a network and so why don't you take it home and look at it?" And I did and I thought about it a little bit overnight and it seemed as though this was a fairly straightforward thing to do. It was fairly well described in the RFQ. And so it seemed we could build it. And I went in and told Frank in words that I guess have become somewhat immortalized that sure, we could build it, "But I had no idea why anybody would want such a thing." [Nerds2.0 p 76]
Leonard Kleinrock recalled:
The computer guys would say, "Communication guys, will you please give us good data communications." The communications guys would turn around and say, "What are you talking about, the United States is a copper mine. You've got wires all over the place; use them." The computer guys would say, "No, you don't understand. It takes half a minute to set up a call, and your charge is for a minimum of three minutes. All I want to do is send a hundred milliseconds of data." These guys would turn around back to the computing guys and say, "Go away little boy, there's no revenue there." So the little boys went away, and they created packet switching. [Babbage 27] [See also Gaudin]
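Kleinrock's point can be made concrete with a little arithmetic. The sketch below simply restates the figures in the quote (a half-minute call setup, a three-minute minimum charge, 100 ms of data); the numbers are illustrative, not taken from any actual tariff:

```python
# Figures taken from Kleinrock's quote; purely illustrative.
SETUP_S = 30.0        # half a minute to set up the call
MIN_BILLED_S = 180.0  # minimum charge covers three minutes
DATA_S = 0.1          # the actual payload: 100 ms of data

# Call setup alone dwarfs the data transmission time.
print(f"setup is {SETUP_S / DATA_S:.0f}x the data time")

# Fraction of the minimum billed time spent actually moving data:
utilization = DATA_S / MIN_BILLED_S
print(f"utilization: {utilization:.4%}")
```

On these assumptions the line carries data for roughly 0.06% of the billed time - the "no revenue there" economics that pushed the computer researchers toward packet switching.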
Steve Crocker volunteered to manage the RFCs initially. In the early 1970s, Jon Postel became the editor of the RFCs. The RFCs were maintained by SRI in its capacity as the Network Information Center (NIC). The RFCs came to be a function of the Network Working Group. [Roberts, Net Chronology][Crocker NYT ("The early R.F.C.'s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called "rough consensus and running code." Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.")] [Living Internet RFC History] [IETF RFC 2555, 30 Years of RFCs (7 April 1999)] [Hauben] [Abbate]
"A month later, after a particularly delightful meeting in Utah, it became clear to us that we had better start writing down our discussions. We had accumulated a few notes on the design of DEL and other matters, and we decided to put them together in a set of notes. I remember having great fear that we would offend whomever the official protocol designers were, and I spent a sleepless night composing humble words for our notes. The basic ground rules were that anyone could say anything and that nothing was official. And to emphasize the point, I labeled the notes "Request for Comments." I never dreamed these notes would be distributed through the very medium we were discussing in these notes. Talk about Sorcerer's Apprentice!" [RFC 1000]
S. Crocker, RFC001 Host software, Apr-07-1969.
Network Working Group Steve Crocker
Request for Comments: 1 UCLA
7 April 1969
The software for the ARPA Network exists partly in the IMPs and partly in the respective Hosts. BB&N has specified the software of the IMPs and it is the responsibility of the HOST groups to agree on HOST software.
During the summer of 1968, representatives from the initial four sites met several times to discuss the HOST software and initial experiments on the network. There emerged from these meetings a working group of three, Steve Carr from Utah, Jeff Rulifson from SRI, and Steve Crocker of UCLA, who met during the fall and winter. The most recent meeting was in the last week of March in Utah. Also present was Bill Duvall of SRI who has recently started working with Jeff Rulifson.
Somewhat independently, Gerard DeLoche of UCLA has been working on the HOST-IMP interface.
I present here some of the tentative agreements reached and some of the open questions encountered. Very little of what is here is firm and reactions are expected. . . . . .
BBN Report No 1822, Interface Message Processor: Specifications for the Interconnection of a Host and an IMP (May 1969).
July 3, 1969, UCLA to Be First Station in Nationwide Computer Network, UCLA Office of Public Affairs. (Kleinrock Slides) "As of now, computer networks are still in their infancy," says Dr. Kleinrock, "but as they grow up and become more sophisticated, we will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country."
Sept. Larry Roberts succeeded Taylor as head of IPTO. [Roberts, Net Chronology] Robert Taylor would end up at Xerox PARC. Pressure from Vietnam and Congress was redirecting ARPA's mission toward DoD's military needs. [Almanac] [Markoff Dec. 20, 1999]
Roberts 1967: "The common carriers currently provide 2 or 4 wire, 2 kc lines between two points either dialed or leased, as well as higher band width leased lines and lower band width teletype service. Considering the 2 kc offering, since it is the best dial up service, the use of 2 wire service appears to be very inefficient for the type of traffic predicted for the network. In the Lincoln - SDC experimental link the average message length appears to be 20 characters. Each message must be acknowledged so that the originator may retransmit or free the buffer. Thus the line must be reversed so often that the reversal time will effectively half the transmission rate. Therefore, full duplex, four-wire service is more economic and simpler to use.
"Current automatic dialing equipment requires about 20 seconds to obtain a connection and a similar time to disconnect. Thus the response time is much too long assuming a call is made only after a message arrives and that the line is disconnected if no other messages arrive soon. It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants." Larry Roberts, Multiple Computer Networks and Intercomputer Communications, June 1967
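Roberts' half-duplex argument lends itself to a small model. This is only a sketch under stated assumptions - a "2 kc" line treated as 2,000 bit/s, 8-bit characters, and a line-reversal delay taken to equal the message transmission time - not measurements from the Lincoln - SDC link:

```python
# Assumed figures (see lead-in); only the 20-character average message
# length comes from Roberts' text.
LINE_BPS = 2000.0            # a "2 kc" line modeled as 2,000 bit/s
MSG_BITS = 20 * 8            # 20-character average message, 8 bits/char
TX_S = MSG_BITS / LINE_BPS   # time to transmit one message
REVERSAL_S = TX_S            # assume a reversal takes as long as a transmit

# 2-wire half duplex: the line must reverse before each acknowledgment.
half_duplex_bps = MSG_BITS / (TX_S + REVERSAL_S)
# 4-wire full duplex: acks ride the return pair, no reversal needed.
full_duplex_bps = MSG_BITS / TX_S

print(half_duplex_bps / full_duplex_bps)
```

Under these assumptions the ratio comes out to 0.5, matching Roberts' observation that frequent reversals "effectively half the transmission rate."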
Telecommunications: "Since nobody was going to give the agency a few billion dollars to string its own wires across the country, ARPA would have to move the data through AT&T's telephone system. Unfortunately, that system's basic dial up process was far too cumbersome and slow for computer-speed communications. So instead, Roberts decided ARPA would make a series of long-distance calls, and just never hang up. More precisely, the agency would go to AT&T and lease a series of high-capacity phone lines linking one ARPA site to the next, so that the computers would always be connected." ARPANet connected the IMPs with leased 50 kbps AT&T long-distance lines. [Waldrop 80] [Abbate p 56] [Salus p 35] [Nerds p 82] [NIST 1992 p 4] [Roberts Wessler 1970 ("The IMPs are connected together via 50 Kbps data transmission facilities using common carrier (ATT) point to point leased lines... The communications circuits for this network will cost $49K per node per year and the network can support an average traffic of 16 KB per node.")] The AT&T lines, however, presented a limitation on the network: "Officially, only ARPA contractors could utilize the ARPANET, but this was extended whereby “authorized users” were permitted to use the network. Other parties were not permitted to use the network because the AT&T tariffs for the underlying leased communications lines did not permit shared usage." [History of Telenet p 28]
CompuServe founded by Jeffrey Wilkins.
The first node of the ARPANet, an Interface Message Processor (IMP) built by BBN, was delivered on August 30. There had been ruminations that BBN would be late with the delivery, giving those on the receiving end the false perception that they had additional breathing room in which to prepare. But BBN got its work done and airlifted the hulk of an IMP out to UCLA on time. [Roberts, Computer Science Museum 1988 (Pelkey: "The first one went in September '69 at UCLA, and was delivered on-time Labor Day weekend, which caused consternation at UCLA to hear their side of the story")]
The first IMP was installed in September 1969 at UCLA. It was a Honeywell DDP-516. The operating system took 6K of memory. [Picture of Leonard Kleinrock with IMP1 at UCLA] [Roberts, Net Chronology] [Salus p 35] [RFC 1000 (UCLA was counting on a delay by BBN, which was having timing troubles. BBN fixed the problem and air shipped the IMP)] [Hauben] "They were each the size of a refrigerator and cost about $100,000 in 1969 dollars." [RFC 2555] [Kleinrock 1996] [Cerf, Oral History 1990] It connected to its host computer, a Sigma 7. [Cerf, Oral History 1990]
(This minicomputer had just been released in 1968, and Honeywell displayed it at the 1968 Fall Joint Computer Conference, where Kleinrock saw the machine suspended by its hooks; while it was running, a brute was whacking it with a sledge hammer just to show it was robust. Kleinrock suspects that that particular machine is the one that was delivered by BBN to UCLA.) As it turns out, BBN was running two weeks late (much to Kleinrock's delight, since he and his team badly needed the extra development time); BBN, however, shipped the IMP on an airplane instead of on a truck, and it arrived on time. [Kleinrock 1996]
Present that day were Kleinrock and his team, BBN, Honeywell, Scientific Data Systems, AT&T Long Lines, GTE (the local telephone company), and ARPA. [Kleinrock 1996] Graduate students involved in the project included Vint Cerf, Steve Crocker, Jon Postel... [Cerf, Oral History 1990]
The hosts interconnected via host-to-host software. "The host-to-host interface was awful to begin with." [Babbage 23] This would be replaced by NCP, which would in turn be replaced by TCP/IP. [Roberts, Computer Science Museum 1988 ("Steve said, and it was a difficult period too, in terms of getting the host-to-host issues worked out. He said at some level, it was hard to get other people to take it seriously, because some people really didn't want to interface to the network, because they wanted to protect their own private interests to get more computing funding at their own site, so it took a lot of politics and arm twisting to get people's cooperation.")]
In the fall, Bob Kahn came out to UCLA to examine the IMP's performance, and met Vint Cerf. [Cerf, Oral History 1990]
Leonard Kleinrock and the first Interface Message Processor
First Host-to-Host Message: Oct. 29, 10:30 pm:
"A month later the second node was added (at Stanford Research Institute) and the first Host-to-Host message ever to be sent on the Internet was launched from UCLA. This occurred in early October when Kleinrock and one of his programmers proceeded to "logon" to the SRI Host from the UCLA Host. The procedure was to type in "log" and the system at SRI was set up to be clever enough to fill out the rest of the command, namely to add "in" thus creating the word "login". A telephone headset was mounted on the programmers at both ends so they could communicate by voice as the message was transmitted. At the UCLA end, they typed in the "l" and asked SRI if they received it; "got the l" came the voice reply. UCLA typed in the "o", asked if they got it, and received "got the o". UCLA then typed in the "g" and the darned system CRASHED! Quite a beginning. On the second attempt, it worked fine!" [Kleinrock, Net History] [Kleinrock 1996][Kleinrock Internet's First Words]
The second IMP was delivered to SRI at the beginning of October. [RFC 1000] [Kleinrock 1996]
By the end of the year, there were four nodes: [Hauben]
- UCLA (Vint Cerf - PhD student, Steve Crocker - PhD student, and Jon Postel, with Leonard Kleinrock). Installed Sept 1, 1969
- Designated the Network Measurement Center. The network itself was part of the experiment, and measuring network performance would remain a significant endeavor for decades. The NMC would test, measure, and refine the network. [Abbate p 58] [Kleinrock] [Kleinrock 1996] [Cerf, Oral History 1990]
- SRI (Doug Engelbart) (ARPANet's Network Information Center [ISOC]) Installed Oct. 1, 1969 [Roberts, Net Chronology]
- UCSB (Glen Culler), installed Nov. 1, 1969, and
- University of Utah, Salt Lake City (Dave Evans, Ivan Sutherland), installed Dec. 1, 1969
Apollo 11 goes to the Moon, with Neil Armstrong stepping onto the lunar surface July 20. [Apollo] Of the two original ARPA projects, one made headlines while the public knew nothing about the other - both radically changed the world.
Nov. 21 Larry Roberts visits UCLA. A Telnet connection to SRI is demonstrated. [RFC 1000]
40th Anniversary of the Net - October 29, 1969
Computer History Museum
ARPAnet - The Team Behind the Internet, created by Arlington County, 2011
See also FCC :: Customer Premises Equipment (which affirmed individual's right to attach devices to the end of the telephone network, a necessary precondition to being able to attach IMPs and then modems to the network)
Alan Kay on ARPA: "90 percent of all good things that I can think of that have been done in computer science have been funded by that agency. Chances that they would have been funded elsewhere are very low. The basic ARPA idea is that you find good people and you give them a lot of money and then you step back. If they don't do good things in three years they get dropped - where 'good' is very much related to new or interesting." [Spacewar]
ARPA Project Multiple Access Computer story by Alan Kay: "They had a thing on the PDP-1 called 'The Unknown Glitch'. They used to program the thing either in direct machine code, direct octal, or in DDT. In the early days it was a paper-tape machine. It was painful to assemble stuff, so they never listed out the programs. The programs and stuff just lived in there, just raw seething octal code. And one of the guys wrote a program called 'The Unknown Glitch,' which at random intervals would wake up, print out I AM THE UNKNOWN GLITCH. CATCH ME IF YOU CAN, and then it would relocate itself somewhere else in core memory, set a clock interrupt, and go back to sleep. There was no way to find it." [Spacewar]
Old Boys Network: The informal culture of ARPANet has been described as an Old Boys Network. Those who were on the inside were said to have the advantage in receiving ARPA funding; those on the outside were not so advantaged. [Abbate p 55] This informal culture of the community would continue into the 1990s, when the Internet was privatized, and create problems. As the Internet moved from private network to public network, questions about arrangements, authority, and structures created consternation that would echo for years in such forums as the COM-PRIV discussion group.
Telephone Network Reliability: Frank Heart: "the phone company had never been able to tell when a phone line was about to fail. Their technology for dealing with phone lines was when someone called up and said, "I can't talk over the phone," they would send someone out to figure out what was wrong with the phone line. The IMPS watched the phone lines all the time, all the time, and they could tell when a line was degrading, not just when it was failing. So there were amusing instances when somebody here would call up the phone company office in California, and tell them that the phone line between Los Angeles and San Francisco was about to break. And the phone company guy, after first thinking we were calling as a joke, would then say, "How could you possibly know that in Boston?" A lot of that went on." [Heart p 27 1990]
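The line monitoring Heart describes can be sketched as a sliding-window error-rate check. This is a hypothetical illustration of the idea, not BBN's actual IMP code; the class name, window size, and threshold are invented:

```python
from collections import deque

class LineMonitor:
    """Tracks recent checksum errors on one leased line (hypothetical)."""
    def __init__(self, window=100, degrade_threshold=0.05):
        self.results = deque(maxlen=window)  # True = packet arrived corrupted
        self.threshold = degrade_threshold

    def record(self, had_error):
        self.results.append(had_error)

    def degrading(self):
        # Flag the line once the recent error rate exceeds the threshold,
        # i.e. before the line fails outright.
        if not self.results:
            return False
        rate = sum(self.results) / len(self.results)
        return rate > self.threshold

mon = LineMonitor()
for _ in range(95):
    mon.record(False)   # healthy traffic
for _ in range(8):
    mon.record(True)    # a burst of errors: ~8% of the recent window
print(mon.degrading())  # True -- flagged while still passing most traffic
```

Because the IMPs exchanged traffic continuously, this kind of passive observation was available to them but not to the telephone company, which only learned of trouble when a customer called.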
Myth: The Internet Was Designed to Survive Nuclear War.
Fact: That was the separate work of luminary Paul Baran, who worked for RAND under contract with the USAF. The ARPANet was built by Larry Roberts at ARPA.
"[I]n the early 1960s, when computers were scarce, expensive, and cumbersome, using a computer for communications was almost unthinkable. Even the sharing of software or data among users of different computers could be a formidable challenge. Before the advent of computer networks, a person who wanted to transfer information between computers usually had to carry some physical storage medium, such as a reel of magnetic tape or a stack of punch cards, from one machine to the other." [Abbate p. 1]
One thing that Baran, Davies, and Roberts had in common was the insight that the capabilities of a new generation of small but fast computers could be harnessed to transcend the limitations of previous communications systems. [Abbate p. 40]
Like distant islands sundered by the sea,
- Vint Cerf, Requiem for the ARPANET
ARPANet Design Objectives
1967: ARPA initiates planning of the ARPANet. Design objectives of the ARPANet included:
- interconnecting different researchers and research computers,
- Data Sharing,
- Load Sharing of processing power (where one mainframe was busy, processing could be shifted to a different mainframe with available capacity),
- Program Sharing,
- Remote Service,
- a general purpose open platform reducing the need for duplication, and
- Message Service: communications between different research centers (a minor objective that became a major benefit and use). See Email History.
[See NIST 1992 p 4 ("Sharing of computing resources among researchers was the primary objective. . . Despite heavy military involvement, the resulting ARPANET turned out to be a fairly open network. It provided a test bed for the development of communication protocols to support functionality such as transmission of graphical data, remote login, file transfer, and electronic mail.")] [Roberts 1967 ("The advantages which can be obtained when computers are interconnected in a network such that remote running of programs is possible, include advantages due to specialized hardware and software at particular nodes as well as increased scientific communication.")] [Abbate p. 44, 96]
ARPAnet Plan 1967
"At the meeting it was agreed that work could begin on the conventions to be used for exchanging messages between any pair of computers in the proposed network, and also on consideration of the kinds of communications lines and data sets to be used. In particular, it was decided that the inter-host communication 'protocol' would include conventions for character and block transmission, error checking and retransmission, and computer and user identification. Frank Westervelt, then of the University of Michigan, was picked to write a position paper on these areas of communication, an ad hoc 'Communication Group' was selected from among the institutions represented, and a meeting of the group scheduled." (ARPA draft, III-26)
[Greenstein 25 ("DARPA's administrators wanted innovations in the form of ideas, new designs, and new software. The inventive goals were large and ambitious, as well as open ended, and that meant the opportunity could not be addressed by a single organization, or by the insight of one lone genius. The inventors and DARPA administrators also understood the goals broadly and did not presume to know what specific designs and applications would suit their needs. They broadly funded pie-in-sky research as well as inventions addressing pragmatic problems with anticipated military applications.")]
CSTB, Realizing the Info Future p. 21 1994 "The purpose of the Internet, the largest packet switching network in the world, is to provide a very general communication infrastructure targeted not to one application, such as telephony or delivery of TV, but rather to a wide range of computer-based services, such as electronic mail (e-mail), information retrieval, and teleconferencing."
Resource Sharing: Instead of paying for duplicated resources isolated at different universities, ARPA's objective was to network those computers in order to share resources and to share money.
About 1966, Mr. [Robert] Taylor recalls, his office in the Pentagon had a terminal connected to time-sharing community at MIT, a terminal connected to a different kind of computer at the University of California at Berkeley, and a third terminal to the Systems Development Corp. in Santa Monica. "To talk to MIT I had to sit at the MIT terminal. To bring in someone from Berkeley, I had to change chairs to another terminal," he says. "I wished I could connect someone at MIT directly with someone at Berkeley. Out of that came the idea: Why not have one terminal that connects with all of them? "That's why we built ARPAnet," he says. [Almanac] [See also Taylor quoted in Vanity Fair]
Kleinrock: "The interesting thing is, as I recall, that part of the motivation for this network is the fact that in 1967, in the mid 1960s DARPA was heavily supporting a lot of people doing work on time-sharing. And every time an investigator got a new contract, the first thing he wanted was a computer - the best and biggest. Pretty soon Larry said, "This is getting ridiculous," because each facility they created evolved into a specialized kind of facility, like the graphics capability at Utah, the database capability at SRI, and the simulation capability at UCLA. So Larry came up with the concept of a resource sharing network, where there would be specialized sites, and if you wanted that special capability, you connect to that site to get it, or you would pull back data or programs and use them locally. That was one of his motivating reasons, namely, to reduce the number of time-sharing systems he had to support." [Babbage 7]
"Currently, each computer center in the country is forced to recreate all of the software and data files it wishes to utilize. In many cases this involves complete reprogramming of software or reformatting the data files. This duplication is extremely costly and has led to considerable pressure for both very restrictive language standards and the use of identical hardware systems." [Roberts Wessler 1970]
Recalcitrants: As with most technology transitions, not all in the community were enthused. Why, after all, should a university with the latest, greatest mainframe have any incentive to share it with others? The quality of one's computer facilities was a boasting right for universities. In order to motivate reluctant participants, further ARPA funding was conditioned upon participation in ARPANET. [Abbate p 55] [Kleinrock 1996 ("most of the ARPA - supported researchers were opposed to joining the network for fear that it would enable outsiders to load down their "private" computers")]
"If you had to give the single most important reason why it was as successful as it was, it was that Larry Roberts had a great deal of authority and freedom and was able to control not only the contractors who were working on it, like BBN, but also the users, since he was supplying all their money. In other words, all the sites at which the IMPs were installed were research sites being supported by DARPA. So he could get their cooperation by the simplest of techniques: he was supplying the money." [Heart 1990]
The western ARPA - funded universities gave Larry Roberts less resistance to this idea of sharing over a network, and this is the reason why the first four nodes on the ARPANET are found out west. [Steve Crocker, Nov. 2011, Smithsonian American Art Museum talk]
[Roberts, Computer Science Museum 1988 ("the problem that most of them had was that if they agreed they would not do as well in their computer funding. They would rather have it themselves. So we just convinced them all they weren't going to get any computer funding anymore unless they cooperated.")]
By 1972, the University of Illinois Center for Advanced Computation was acquiring 90% of its computer services remotely over the ARPANET, at 40% of the cost of provisioning those services itself locally. [Abbate p 99]
General Purpose Network / Open / End to End
Larry Roberts sought to avoid this duplication by creating a single general purpose computer network. In 1970 he stated,
"There are many applications of computers for which current communications technology is not adequate. One such application is the specialized customer service computer systems in existence or envisioned for the future; these services provide the customer with information or computational capability. If no commercial computer network service is developed, the future may be as follows:
"One can envision a corporate officer in the future having many different consoles in his office: one to the stock exchange to monitor his own company's and competitor's activities, one to the commodities market to monitor the demand for his product or raw materials, one to his own company's data management system to monitor inventory, sales, payroll, cash flow, etc., and one to a scientific computer used for modeling and simulation to help plan for the future. There are probably many people within that same organization who need some of the same services and potentially many other services. Also, though the data exists in digital form on other computers, it will probably have to be keypunched into the company's modeling and simulation system in order to perform analyses. The picture presented seems rather bleak, but is just a projection of the service systems which have been developed to date.
"The organization providing the service has a hard time, too. In addition to collecting and maintaining the data, the service must have field offices to maintain the consoles and the communications multiplexors adding significantly to their cost. A large fraction of that cost is for communications and consoles, rather than the service itself. Thus, the services which can be justified are very limited.
"Let us now paint another picture given a nationwide network for computer-to-computer communication. The service organization need only connect its computer into the net. It probably would not have any consoles other than for data input, maintenance, and system development. In fact, some of the service's data input may come from another service over the Net. Users could choose the service they desired based on reliability, cleanliness of data, and ease of use, rather than proximity or sole source.
"Large companies would connect their computers into the net and contract with service organizations for the use of those services they desired. The executive would then have one console, connected to his company's machine. He would have one standard way of requesting the service he desires with a far greater number of services available to him.
"For the small company, a master service organization might develop, similar to today's time-sharing service, to offer console service to people who cannot afford their own computer. The master service organization would be wholesalers of the services and might even be used by the large companies in order to avoid contracting with all the individual service organizations.
"The kinds of services that will be available and the cost and ultimate capacity required for such service is difficult to predict. It is clear, however, that if the network philosophy is adopted and if it is made widely available through a common carrier, that the communications system will not be the limiting factor in the development of these services as it is now." [Roberts Wessler 1970]
See also Lyman Chapin, Chris Owens, Interconnection and Peering among Internet Service Providers: A Historical Perspective, An Interisle White Paper (2005) ("The concept of application independence—that the network should be adaptable to any purpose, whether foreseen or unforeseen, rather than tailored specifically for a single application (as the public switched telephone network had been purpose-built for the single application of analog voice communication).")
Sharing Data: In 1970, Larry Roberts described the ARPANET as follows:
"The data sharing between data management systems or data retrieval systems will begin an important phase in the use of the Network. The concept of distributed databases and distributed access to the data is one of the most powerful and useful applications of the network for the general data processing community. As described above, if the Network is responsive in the human time frame, databases can be stored and maintained at a remote location rather than duplicating them at each site the data is needed. Not only can the data be accessed as if the user were local, but also as a Network user he can write programs on his own machine to collect data from a number of locations for comparison, merging or further analysis." [Roberts Wessler 1970]
"The major lesson from the ARPANET experience is that information sharing is a key benefit of computer networking. Indeed it may be argued that many major advances in computer systems and artificial intelligence are the direct result of the enhanced collaboration made possible by the ARPANET." [Jennings p. 945] [Abbate p 100 quoting Jennings]
"When the network was originally built, Larry probably had - if you had to list his goals, you can look at the DARPA order, but if you had to list his goals - he certainly had high in his set of goals the idea that different host sites would cooperatively use software at the other sites. There's a guy at host one, instead of having to reproduce the software on his computer, he could use the software over on somebody else's computer with the software in his computer. And that goal, has, to this day, never been fully accomplished. That goal still to this very day has not been really accomplished to the degree that it was hoped for in its early days." [Heart p 25 1990]
Connectivity: "This network is envisioned as an interconnected communication facilities to utilize capabilities available at other ARPA sites. The network will provide a link between user(s) programs at one site, and programs and data at remote sites." [BBN Proposal]
1968: "The stated objectives of the program were to develop experience in interconnection computers and to improve and increase computer research productivity through resource sharing. Technical needs in scientific and military environments were cited as justification for the program objectives. Relevant prior work was described. It was noted that the computer research centers supported or partially supported by IPT provided a unique testbed for computer networking experiments, as well as providing immediate benefits to the centers and valuable research results to the military. The network planning that had gone on was described, the need for a network information center was noted, and the network design was sketched. A five year schedule for network procurement, construction, operation, and transfer out of ARPA was presented. (It was noteworthy that IPT had initially had in mind eventual transfer of the operational network to a common carrier.) Finally a several-million-dollar, several-year budget was stated." (ARPA draft, III-35)

"The Internet developed out of research efforts funded by the U.S. Department of Defense Advanced Research Projects Agency in the 1960s and 1970s to create and test interconnected computer networks. The fundamental aim of computer scientists working on this "ARPANET" was to develop an overall Internet architecture that could connect and make use of existing computer networks that might, themselves, be different both architecturally and technologically.
The secondary aims of the ARPANET project were, in order of priority: (1) Internet communication must continue despite the loss of networks or gateways between them; (2) the Internet architecture must support multiple types of communications services; (3) the architecture must accommodate a variety of networks; (4) it must permit distributed, decentralized management of its resources; (5) the architecture must be cost-effective; (6) the architecture must permit attachment by computer devices with a low level of effort; and (7) the resources used in the Internet architecture must be accountable. [FTC Report 2007 p 13-14]
See also Salus p 19; David D. Clark, The Design Philosophy of the DARPA Internet Protocols, COMPUTER COMM. REV., Aug. 1988; B. Carpenter, IAB, Architectural Principles of the Internet, Network Working Group RFC 1958 (June 1996) ("2.1 Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."); [CSTB, Realizing the Info Future p. 3 1994 (setting forth vision for Open Data Networks, stating in first principle that network should "permit universal connectivity.")]; [The Internet's Coming of Age, Computer Science and Telecommunications Board, National Research Council 35 (2001) ("the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity")]
The Design Philosophy of the DARPA Internet Protocols, D.D.Clark, Proc SIGCOMM 88, ACM CCR Vol 18, Number 4, August 1988, pages 106-114 (reprinted in ACM CCR Vol 25, Number 1, January 1995, pages 102-111).
The initial network plan called for four nodes at first, expanding to twelve nodes thereafter.
Public Utility: Many in the community saw what they were doing as building a public utility for computer communications.
- Paul Baran: "One of his recommendations was for a national public utility to transport computer data, much in the way the telephone system transports voice data. "Is it time now to start thinking about a new and possibly nonexistent public utility," Baran asked, "a common user digital data communication plant designed specifically for the transmission of digital data among a large set of subscribers?"" [Hauben]
- In 1971 Alex McKenzie took charge of the Network Control Center at BBN. He envisioned the ARPANET as a "computing utility." [Abbate p 65]
- William F Massy, Computer Networks: Making the Decision to Join One, Science 1 November 1974, Vol. 186 No. 4162, pp. 414-20 (discussing how computer utility would meet needs of university computer centers).
- Frank Heart: "A utility is something people depend upon. Like the electricity, or the phones, or the lights, or the railroads, or the airplanes. Yes, it was a utility. That's the thing that was the amazing surprise. It was started as an experiment to connect four sites, and it became a utility much, much faster than anybody would have guessed. People began to depend upon it." [Heart p 16 1990]
Larry Roberts believed that the existing communications networks at the time were inefficient and did not properly support communications for computers. By designing a computer network, Roberts believed that he was reinventing communications, designing it to benefit from the advantages and efficiencies of computers. In a 1970 paper, he set out to compare the cost of transmitting one million bits of information 1400 miles (the average distance between ARPANet nodes).
Media               Cost per Megabit   Notes
Telegram            $3,300.00          100 words at 30 bits/wd, daytime
Night Letter        $565.00            100 words at 30 bits/wd, overnight delivery
Computer Console    $374.00            18 baud avg. use, 300 baud DDD service line & data sets only
TELEX               $204.00            50 baud teletype service
DDD (103A)          $22.50             300 baud data sets, DDD daytime service
Autodin             $8.20              2400 baud message service, full use during working hours
DDD                 $3.45              2000 baud data sets
Letter              $3.30              Airmail, 4 pages, 250 wds/pg, 30 bits/wd
W.U. Broadband      $2.03              2400 baud service, full duplex
WATS                $1.54              2000 baud, used 8 hrs/working day
Leased Line (201)   $0.57              2000 baud, commercial, full duplex
Data 50             $0.47              50 KB dial service, utilized full duplex
Leased Line (303)   $0.23              50 KB, commercial, full duplex
Mail DEC Tape       $0.20              2.5 megabit tape, airmail
Mail IBM Tape       $0.034             100 megabit tape, airmail
"Cost per Megabit for Various Communication Media 1400-Mile Distance" [Roberts Wessler 1970]
Roberts' table above shows two things. First, it shows how compelling an efficient, cost-effective computer network could be, bypassing what would otherwise be significant charges on existing networks. Second, it shows what has subsequently been demonstrated many times: one of the most efficient means of transmitting data is to load it all onto a memory device and drive (or mail) it to its destination.
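The economics behind Roberts' table reduce to a single ratio: total dollars spent divided by megabits moved. The sketch below recomputes two of the table's entries; the per-message dollar figures ($9.90 for a telegram, $3.40 of airmail postage for a tape) are back-calculated assumptions chosen to match the published per-megabit values, not figures from the paper itself.

```python
def cost_per_megabit(total_cost_dollars: float, total_bits: int) -> float:
    """Dollar cost to move one megabit, the metric in Roberts' 1970 table."""
    return total_cost_dollars / (total_bits / 1_000_000)

# A 100-word telegram at 30 bits/word carries only 3,000 bits, so even a
# modest per-message price yields the table's $3,300 per megabit.
telegram = cost_per_megabit(9.90, 100 * 30)

# Airmailing a 100-megabit IBM tape moves eight orders of magnitude more
# data for a few dollars of postage: the table's $0.034 per megabit.
tape = cost_per_megabit(3.40, 100_000_000)

print(f"telegram: ${telegram:,.2f}/Mb   mailed tape: ${tape:.3f}/Mb")
```

The tape row is the quantitative core of the "station wagon" observation quoted below: physical shipment of dense media beats every transmission line in the table on cost per bit.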
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." [Tanenbaum, Andrew S., Computer Networks p. 83 (Prentice-Hall 1996), ISBN 0-13-349945-6]
Roberts noted that a significant portion of the cost of switched networks is the switches themselves. "Previous store and forward systems like DoD's AUTODIN system, have had such complex, expensive switches that over 95% of the total communications service cost was for the switches. Other switch services adding to the system's cost, deemed superfluous in a computer network, were: long term message storage, multi-address messages and individual message accounting." [Roberts Wessler 1970] Remove the complex switch from the computer network and you remove those costs.
"By the late 1960s, computer scientists were experimenting with non-linear "packet-switched" techniques to enable computers to communicate with each other. Using this method, computers disassemble information into variable-size pieces of data called "packets" and forward them through a connecting medium to a recipient computer that then reassembles them into their original form. Each packet is a stand-alone entity, like an individual piece of postal mail, and contains source, destination, and reassembly information. Unlike traditional circuit-switched telephone networks, packet-switched networks do not require a dedicated line of communication to be allocated exclusively for the duration of each communication. Instead, individual data packets comprising a larger piece of information, such as an e-mail message, may be dispersed and sent across multiple paths before reaching their destination and then being reassembled. This process is analogous to the way that the individual, numbered pages of a book might be separated from each other, addressed to the same location, forwarded through different post offices, and yet all still reach the same specified destination, where they could be reassembled into their original form. [FTC Report 2007 p 14]
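The FTC's postal analogy above can be made concrete with a toy sketch: each packet is a stand-alone unit carrying source, destination, and reassembly (sequence) information, so packets can travel different paths, arrive out of order, and still be restored. The field names and 4-byte payload size here are illustrative choices, not any actual protocol format.

```python
import math
import random

def packetize(message: bytes, max_size: int, src: str, dst: str) -> list[dict]:
    """Disassemble a message into self-describing packets: each one carries
    source, destination, and sequencing data for reassembly."""
    total = math.ceil(len(message) / max_size)
    return [
        {"src": src, "dst": dst, "seq": i, "total": total,
         "payload": message[i * max_size:(i + 1) * max_size]}
        for i in range(total)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Packets may arrive in any order over different paths; sequence
    numbers let the receiver restore the original message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = packetize(b"LO AND BEHOLD", max_size=4, src="UCLA", dst="SRI")
random.shuffle(packets)            # simulate dispersal across multiple paths
assert reassemble(packets) == b"LO AND BEHOLD"
```

No dedicated circuit is reserved: each packet is routed independently, which is the contrast with circuit-switched telephony that the passage draws.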
Larry Roberts considered several different designs for the ARPANet, including fully interconnected point-to-point leased lines, line-switched (dial-up) service, and packet switching. Roberts stated, "For the kind of service required, it was decided and later verified that the message switched service provided the greater flexibility, higher effective bandwidth, and lower cost than the other two systems." [Roberts Wessler 1970]
[Roberts, Computer Science Museum 1988 ("All of us thought, clearly, in those days, about computer switching rather than circuit switching; some sort of computerized switching. You got the traffic in and you put it out. It could have been that we put it out block for block as fast as it came in; it could have been that we stored the whole message and forwarded it. What we concluded was that you wanted to not store the whole message and forward it, and you couldn't have a perfect virtual cut-through where you sent every block immediately synchronously because it might interfere with the next message, so you had to do it in some smaller breakdown, which is like a packet, or whatever, which, of course, is the size lump you're in anyway, because you've got to put sum checks on it every interval. So, there wasn't any question about packets -- and clearly Donald gave it the name")]
End to End
Also, during the Internet's early years, network architectures generally were based on what has been called the "end-to-end argument." This argument states that computer application functions typically cannot, and should not, be built into the routers and links that make up a network's middle or "core." Instead, according to this argument, these functions generally should be placed at the "edges" of the network at a sending or receiving computer. This argument also recognizes, however, that there might be certain functions that can be placed only in the core of a network. Sometimes, this argument is described as placing "intelligence" at or near the edges of the network, while leaving the core's routers and links mainly "dumb" to minimize the potential for transmission and interoperability problems that might arise from placing additional complexity into the middle of the network. [FTC Report 2007 p 17]
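The edge/core split described above can be sketched as a toy pipeline: the endpoints carry the "intelligence" (here, an end-to-end integrity check), while the core merely forwards bytes it never interprets. The CRC framing is an illustrative stand-in for whatever function the edges implement, not a description of any real protocol.

```python
import zlib

def send(payload: bytes) -> bytes:
    """Sending edge: intelligence lives here. Append an end-to-end
    checksum before handing the data to the network."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def core_forward(frame: bytes) -> bytes:
    """Core router: deliberately 'dumb'. It forwards bits without
    inspecting or understanding the application data."""
    return frame

def receive(frame: bytes) -> bytes:
    """Receiving edge: verify integrity end to end. The core never
    needed to know what the payload meant."""
    payload, check = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != check:
        raise ValueError("end-to-end check failed")
    return payload

assert receive(core_forward(send(b"hello"))) == b"hello"
```

Because correctness is verified at the edges, the core can stay simple and general-purpose, which is the design property the end-to-end argument defends.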