
Chapter 08 - Networking: Diffusion 1972-1979

8.4 Transmission Control Protocol (TCP) 1973-1976

The confusion over how best to design a computer communications network also fueled the debates within the group now named IFIP Working Group 6.1. In 1973, when Pouzin approached Alex Curran, chairman of IFIP TC-6, about affiliating the recently formed INWG with IFIP TC-6, Curran readily agreed, and INWG was renamed IFIP Working Group 6.1 (WG 6.1) on Network Interconnection. Steve Crocker, chairman of the original Arpanet NWG, recommended Vint Cerf become Chairman, a suggestion readily approved. The WG 6.1 meetings quickly became a must for anyone wanting to influence computer communications. For what was recognized by but a handful of people in mid-1973 became, in the short span of twenty-four months, received knowledge among nearly all those involved in computer communications: the world was going to be populated by many computer networks, networks that inevitably would need to be interconnected.

Only how could the complexities of inter-network communications be resolved when the technological details of what constituted the best network remained an open question? And what were to be the roles of the firms in the two enabling markets: telecommunications and computers? So even though IFIP WG 6.1 was not a standards-making body, for a few critical years the who’s who of computer communications debated the future of networks, and their coming together, at its meetings.

All agreed Arpanet represented a good first proof that a better network could be built than one designed on the circuit-switching model of the telephone system. Arpanet had its deficiencies, however: it was neither a true datagram network, nor did it provide end-to-end error correction. So the big question remained: could a true packet network with end-to-end reliability be created? The WG 6.1 debates struggled with how to design a network that was based on datagrams yet provided virtual circuits. Indeed, this problem was a fundamental barrier to the next step in networking.

In September 1973, at a meeting in Sussex, England, Cerf and Kahn presented a communication protocol, the Transmission Control Protocol (TCP), that functioned over interconnected networks of many kinds. Their paper was published in IEEE Transactions on Communications in May 1974 as “A Protocol for Packet Network Intercommunication.”23 TCP combined end-to-end virtual circuits with datagram transmission and gateways between networks.

Generalizing from a solution for interconnecting Arpanet with a packet radio network, Cerf and Kahn proposed that different networks be interconnected by gateways, with TCP functioning across both networks and gateways. Gateways would receive incoming traffic from one network and perform whatever transformations were necessary to send the data over the outgoing network. One transformation was protocol conversion. Another, more contentious, was fragmentation: if the outgoing network required a smaller packet size than the incoming network, the gateway would fragment the packet into multiple packets before resending. Only this required receiving hosts to be informed of gateway-created fragmentation in order to reconstruct the fragments into the original transmission. One transmission could span many networks and potentially be fragmented multiple times. Cerf and Kahn wrote:

If a GATEWAY fragments an incoming packet into two or more packets, they must eventually be passed along to the destination HOST as fragments or reassembled for the HOST. It is conceivable that one might desire the GATEWAY to perform the reassembly to simplify the task of the destination HOST (or process) and/or to take advantage of a larger packet size. We take the position that GATEWAYS should not perform this function since GATEWAY reassembly can lead to serious buffering problems, potential deadlocks, the necessity for all fragments of a packet to pass through the same GATEWAY, and increased delay in transmission. Furthermore, it is not sufficient for the GATEWAYS to provide this function since the final GATEWAY may also have to fragment a packet for transmission. Thus the destination HOST must be prepared to do this task.24
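Their position, that gateways may fragment but only the destination host reassembles, lends itself to a short illustration. Below is a minimal sketch in Python, assuming each fragment carries a byte offset into the original packet and that packet’s total length; the names and fields are hypothetical, not the actual header layout of the 1974 paper:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    seq: int        # byte offset of this data within the original packet
    total_len: int  # length of the original, unfragmented packet
    data: bytes

def fragment(packet: bytes, mtu: int) -> list[Fragment]:
    """Gateway side: split a packet to fit the outgoing network's
    maximum packet size (mtu)."""
    return [Fragment(i, len(packet), packet[i:i + mtu])
            for i in range(0, len(packet), mtu)]

def refragment(frag: Fragment, mtu: int) -> list[Fragment]:
    """A later gateway may fragment again; offsets stay relative to the
    original packet, so no coordination between gateways is needed."""
    return [Fragment(frag.seq + i, frag.total_len, frag.data[i:i + mtu])
            for i in range(0, len(frag.data), mtu)]

class Reassembler:
    """Destination-host side: collect fragments, possibly arriving out of
    order, and rebuild the original packet once every byte is present."""
    def __init__(self) -> None:
        self.pieces: dict[int, bytes] = {}
        self.total_len = 0

    def add(self, frag: Fragment) -> bytes | None:
        self.total_len = frag.total_len
        self.pieces[frag.seq] = frag.data   # duplicates simply overwrite
        if sum(len(d) for d in self.pieces.values()) == self.total_len:
            return b"".join(self.pieces[off] for off in sorted(self.pieces))
        return None  # still incomplete
```

Because every offset is relative to the original packet, any gateway can split any fragment again without coordinating with the others, and the fragments of one packet need not traverse the same gateway; this is how host-side reassembly avoids the buffering and routing constraints Cerf and Kahn cite against reassembling in gateways.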

This solution immediately raised red flags for Pouzin and the Europeans, who were concerned with any network design that forced hosts to correct errors introduced by network operators, i.e. the PTTs. Coupling the correction of host-to-host errors with network-to-network errors meant that the protocols used by the computers and the PTTs would have to be coupled, and that seemed like a sure way both to cede control to the PTTs and to end up with a suboptimal computer communication system.

In March 1974, Pouzin responded to the Cerf and Kahn memo (IFIP WG 6.1 document CEKA74) with a proposal of his own: “A Proposal for Interconnecting Packet Switching Networks” (INWG60). It stimulated further revisions and proposals from both sides. Then in November 1974, the Europeans’ concerns soared when the CCITT decided to establish a standard interface to the packet switching networks its members would offer (X.25).25 In December, in hopes of forging a single internetwork protocol, Cerf, Yogen Dalal and Carl Sunshine submitted document INWG72: “Specifications of Internetwork Transmission Control Program (revised).” The revision set forth a windowing scheme intended to end the fragmentation arguments. Pouzin remembers:

So Vint and Bob Kahn and probably a few others, like Yogen Dalal, tried to use this window scheme: a single packet could be fragmented in a number of packets along the way, all of them would arrive out of order, and still they would use the very same window scheme to control the flow. It was technically tricky – I mean smart, but we didn’t buy the idea because we thought it was first, too complex in implementation, and much too hard to sell to industry. The second thing is it mingled – it actually handled in the very same protocol, matters that belong to the transport level, and matters that belong to the end-to-end protocol. That kind of coupling was politically unacceptable, because these two levels of system were handled by two different worlds: The PTTs and the other world, the computer people. So obviously it was not acceptable in terms of technical sociology. You could not sell something that involves the consensus of these two different worlds. So we thought that was not a good way of organizing things, even though it was technically sound.
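The scheme Pouzin describes amounts to a sliding window measured in bytes rather than packets, so that fragments created anywhere along the way fall inside the same flow-control window. A minimal, assumed sketch of such a receiver in Python (no retransmission, checksums, or connection setup):

```python
class WindowReceiver:
    """Byte-oriented sliding-window receiver: segments or fragments are
    buffered by byte offset; only the in-order prefix is delivered and
    covered by a single cumulative acknowledgment."""
    def __init__(self, window: int = 4096) -> None:
        self.next_byte = 0                   # next in-order byte expected
        self.window = window                 # flow-control limit
        self.buffer: dict[int, bytes] = {}   # out-of-order arrivals
        self.delivered = bytearray()         # bytes handed to the process

    def receive(self, offset: int, data: bytes) -> int:
        if offset + len(data) <= self.next_byte:
            return self.next_byte            # duplicate: already delivered
        if offset >= self.next_byte + self.window:
            return self.next_byte            # beyond the window: drop
        self.buffer[offset] = data
        # Deliver any contiguous run now available at the window's left edge.
        while self.next_byte in self.buffer:
            chunk = self.buffer.pop(self.next_byte)
            self.delivered.extend(chunk)
            self.next_byte += len(chunk)
        return self.next_byte                # cumulative acknowledgment
```

One cumulative acknowledgment thus covers both reassembly and flow control however the data was packetized en route, exactly the merging of transport-level and end-to-end matters that Pouzin’s group found politically unacceptable.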

The struggle to find a common solution had sharpened understandings of how to design both a network, be it datagram or virtual circuit, and an interconnected network of many networks. Cerf remembers those difficult years:

So for years, there was this religious battle between people who had datagram style networks and people who had virtual circuit style nets. The international PTT efforts went down the path of virtual circuits while the R & D community generally stuck with a datagram style of operations. What happened as a result is that the R & D community had to come to grips with a much more challenging underlying communications environment where datagrams weren’t guaranteed. If you sent one, it might get clobbered by something else, or it might get lost, or it might get thrown away because of congestion. The higher-level protocols that operated on top of the basic datagram mode had to be far more robust and have more mechanisms in them for flow control and retransmission and duplicate detection than the protocols that were evolving on the basis of virtual circuits. So these communities really went in different directions. They used to fight tooth and nail with each other, and I was out there fighting too. I was beating the table saying: ‘God damn it, it had to be datagrams because that required less of a network and you had to do things end-to-end anyway, because you wanted to have the mainframes assure the other end that they had really gotten the data, and not just that the network thinks that you got it,’ so there were a lot of arguments along those lines.
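The mechanisms Cerf lists (retransmission, duplicate detection, acknowledgment from the remote host itself rather than from the network) can be sketched briefly. The following is an illustrative Python fragment under assumed names, not the TCP state machine; send_datagram is a hypothetical callable standing in for an unreliable datagram network that may lose, reorder, or duplicate packets:

```python
import time

class ReliableSender:
    """End-to-end reliability over unreliable datagrams: byte sequence
    numbers, a retransmission timeout, and cumulative acknowledgments
    asserted by the remote host, not merely by the network."""
    def __init__(self, send_datagram, timeout: float = 1.0) -> None:
        self.send_datagram = send_datagram   # hypothetical network hand-off
        self.timeout = timeout
        self.unacked: dict[int, tuple[bytes, float]] = {}
        self.next_seq = 0

    def send(self, data: bytes) -> None:
        seq = self.next_seq
        self.next_seq += len(data)
        self.unacked[seq] = (data, time.monotonic())
        self.send_datagram(seq, data)

    def on_ack(self, ack: int) -> None:
        # The remote host asserts it has received everything below `ack`.
        for seq in [s for s, (d, _) in self.unacked.items() if s + len(d) <= ack]:
            del self.unacked[seq]

    def tick(self) -> None:
        # Resend anything outstanding past the timeout; the receiver's
        # duplicate detection makes spurious resends harmless.
        now = time.monotonic()
        for seq, (data, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.timeout:
                self.send_datagram(seq, data)
                self.unacked[seq] = (data, now)
```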

But for Kahn and DARPA the debate needed to end: they wanted to code TCP and see if it would work. In early 1975, DARPA awarded three contracts to test whether the TCP specifications were detailed and explicit enough for independent implementations to interoperate seamlessly. The three teams were headed by Cerf at Stanford, by Ray Tomlinson and Bill Plummer at BBN, and by Peter Kirstein at University College London. Pouzin understood the significance of implementing TCP:

After that, they sort of froze their design, but then we started to become disinterested because we didn’t think it could really work. So then we started to skirt the issue and considered that as something we couldn’t avoid because they had the whole ARPA backing behind them, so we thought we couldn’t stop that. On the other hand, we had quite a good feeling that they would not invade Europe very much, so we started to organize our thing in Europe differently.

Because IFIP was not a standards-making body, the European members concluded they needed the support of one to advance their cause, and so in late 1975 they approached the International Organization for Standardization (ISO).

In January 1976, in an attempt to bridge the differences between the TCP and European communities, Alex McKenzie of BBN crafted an international protocol meant to satisfy both those wanting a total end-to-end protocol and those wanting the end-to-end and network-to-network functions separated. A paper by McKenzie, Cerf, Scantlebury and Zimmermann describing the protocol was submitted to IFIP and subsequently published in the ACM SIGCOMM Computer Communication Review.26 Unfortunately, it proved too little, too late. As Cerf underscores, he could not “persuade the TCP community to adopt the compromise given the state of implementation experience of TCP at the time and the untested nature of the IFIP document.”

[23] Vinton G. Cerf and Robert E. Kahn, “A Protocol for Packet Network Intercommunication,” IEEE Transactions on Communications, Vol. COM-22, No. 5, May 1974, pp. 637-648.

[24] Ibid., p. 639.

[25] PTT members began attending WG 6.1 meetings, and IFIP received category D membership in CCITT, which permitted IFIP members to attend CCITT Rapporteur’s meetings; both developments added voices to those advocating virtual circuits.

[26] V. Cerf, A. McKenzie, R. Scantlebury, and H. Zimmermann, “Proposal for an International End to End Protocol,” ACM SIGCOMM Computer Communication Review, January 1976.
