Libertarians often cite the internet as a case in point that liberty is the mother of innovation. Opponents quickly counter that the internet was a government program, proving once again that markets must be guided by the steady hand of the state.
In one sense the critics are correct, though not in ways they understand.
The internet indeed began as a typical government program, the ARPANET, designed to share mainframe computing power and to establish a secure military communications network.
Of course the designers could not have foreseen what the (commercial) internet has become. Still, this origin has important implications for how the internet works — and explains why there are so many roadblocks to the continued development of online technologies. It is only thanks to market participants that the internet became something other than a typical government program: inefficient, overcapitalized, and not directed toward socially useful purposes.
In fact, the role of the government in the creation of the internet is often understated.
The internet owes its very existence to the state and to state funding. The story begins with ARPA, created in 1958 in response to the Soviets’ 1957 launch of Sputnik and established to research the efficient use of computers for civilian and military applications.
During the early 1960s, the RAND Corporation began to think about how to design a military communications network that would be invulnerable to a nuclear attack. Paul Baran, a RAND researcher whose work was financed by the Air Force, produced a classified report in 1964 proposing a radical solution to this communication problem. Baran envisioned a decentralized network of different types of “host” computers, without any central switchboard, designed to operate even if parts of it were destroyed. The network would consist of several “nodes,” each equal in authority, each capable of sending and receiving pieces of data.
Each data fragment could thus travel one of several routes to its destination, such that no one part of the network would be completely dependent on the existence of another part. An experimental network of this type, funded by ARPA and thus known as ARPANET, was established at four universities in 1969.
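To make the routing idea concrete, here is a minimal sketch in Python of such a network. The links between nodes are invented for illustration (the node names merely echo the four original university sites), so treat it as a toy model rather than the actual 1969 topology; the point is only that a data fragment can reach its destination by more than one route, so losing a single node does not sever the rest.

# A toy packet-switching topology. The links below are hypothetical, not the
# actual 1969 ARPANET map; there is no central switchboard, and a packet can
# take any surviving route to its destination.
from collections import deque

LINKS = {
    "ucla": {"sri", "ucsb"},
    "sri":  {"ucla", "utah"},
    "ucsb": {"ucla", "utah"},
    "utah": {"sri", "ucsb"},
}

def find_route(src, dst, failed=frozenset()):
    """Breadth-first search for any path from src to dst that avoids failed nodes."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in LINKS[node] - failed:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # destination unreachable

print(find_route("ucla", "utah"))                  # e.g. ['ucla', 'sri', 'utah']
print(find_route("ucla", "utah", failed={"sri"}))  # reroutes: ['ucla', 'ucsb', 'utah']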
Researchers at any one of the four nodes could share information, and could operate any one of the other machines remotely, over the new network. (Actually, former ARPA head Charles Herzfeld says that distributing computing power over a network, rather than creating a secure military command-and-control system, was the ARPANET’s original goal, though this is a minority view.)
By 1972, the number of host computers connected to the ARPANET had increased to 37. Because it was so easy to send and retrieve data, within a few years the ARPANET became less a network for shared computing than a high-speed, federally subsidized, electronic post office. The main traffic on the ARPANET was not long-distance computing, but news and personal messages.
As parts of the ARPANET were declassified, commercial networks began to be connected to it. Any type of computer using a particular communications standard, or “protocol,” was capable of sending and receiving information across the network. The design of these protocols was contracted out to universities such as Stanford and University College London and was financed by a variety of federal agencies. The major thoroughfares or “trunk lines” continued to be financed by the Department of Defense.
By the early 1980s, private use of the ARPA communications protocol — what is now called “TCP/IP” — far exceeded military use. In 1984 the National Science Foundation assumed the responsibility of building and maintaining the trunk lines or “backbones.” (The ARPANET formally expired in 1990; by that time hardly anybody noticed.) The NSF’s Office of Advanced Computing financed the internet’s infrastructure from 1984 until 1994, when the backbones were privatized.
In short, both the design and the implementation of the internet have relied almost exclusively on government dollars. The fact that its designers envisioned a packet-switching network has serious implications for how the internet actually works. For example, packet switching is a great technology for file transfers, email, and web browsing, but not so good for real-time applications like video and audio feeds and, to a lesser extent, for server-based applications like webmail, Google Earth, SAP, PeopleSoft, and Google Spreadsheets.
Furthermore, there is no mechanism for pricing individual packets. Every packet is assigned an equal priority: a packet containing a surgeon’s diagnosis of an emergency medical procedure has exactly the same chance of getting through as a packet containing part of Coldplay’s latest single or an online gamer’s instruction to smite his foe.
And because the sender’s marginal cost of each transmission is effectively zero, the network, like any public good, is overused and often congested. Like any essentially unowned resource, an open-ended packet-switching network suffers from what Garrett Hardin famously called the “Tragedy of the Commons.”
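To see what equal priority means in practice, consider the sketch below, which is my own illustration and not a description of any real router’s software. A congested link simply drains a first-come, first-served buffer: with no price attached to any packet, the surgeon’s diagnosis and the pop single wait in the same line, and the only rationing devices are delay and dropped packets once the buffer fills.

# Illustrative only: a "best effort" queue that treats every packet identically,
# whatever it carries and whatever it is worth to its sender.
from collections import deque

BUFFER_LIMIT = 3          # hypothetical capacity: packets beyond this are dropped

queue = deque()
dropped = []

def enqueue(packet):
    """First come, first served: no prices, no priorities, just arrival order."""
    if len(queue) < BUFFER_LIMIT:
        queue.append(packet)
    else:
        dropped.append(packet)

# The sender's marginal cost of each transmission is zero, so everything gets sent.
for packet in ["surgeon's emergency diagnosis",
               "Coldplay single, fragment 1",
               "Coldplay single, fragment 2",
               "gamer's instruction to smite his foe"]:
    enqueue(packet)

print("forwarded in arrival order:", list(queue))
print("dropped once the buffer filled:", dropped)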
In no sense can we say that packet switching is the “right” technology. One of my favorite quotes on this subject comes from the Netbook, a semi-official history of the internet:
“The current global computer network has been developed by scientists and researchers and users who were free of market forces. Because of the government oversight and subsidy of network development, these network pioneers were not under the time pressures or bottom-line restraints that dominate commercial ventures. Therefore, they could contribute the time and labor needed to make sure the problems were solved. And most were doing so to contribute to the networking community.”
In other words, the designers of the internet were “free” from the constraint that whatever they produced had to satisfy consumer wants.
We must be very careful not to describe the internet as a “private” technology, a spontaneous order, or a shining example of capitalistic ingenuity. It is none of these. Of course, almost all of the internet’s current applications — unforeseen by its original designers — have been developed in the private sector.
(Unfortunately, the original web and the web browser are not among them, having been designed by the state-funded European Laboratory for Particle Physics (CERN) and the University of Illinois’s NCSA.)
And today’s internet would be impossible without the heroic efforts at Xerox PARC and Apple to develop a usable graphical user interface (GUI), a lightweight and durable mouse, and the Ethernet protocol. Still, none of these would have been viable without the huge investment of public dollars that brought the network into existence in the first place.
Now, it is easy to admire the technology of the internet. I marvel at it every day. But technological value is not the same as economic value. That can only be determined by the free choice of consumers to buy or not to buy. The ARPANET may well have been technologically superior to any commercial networks that existed at the time, just as Betamax may have been technologically superior to VHS, the MacOS to MS-DOS, and Dvorak to QWERTY. (Actually Dvorak wasn’t.) But the products and features valued by engineers are not always the same as those valued by consumers. Markets select for economic superiority, not technological superiority (even in the presence of nefarious “network effects,” as shown convincingly by Liebowitz and Margolis).
Libertarian internet enthusiasts tend to forget the fallacy of the broken window. We see the internet. We see its uses. We see the benefits it brings. We surf the web and check our email and download our music. But we will never see the technologies that weren’t developed because the resources that would have been used to develop them were confiscated by the Defense Department and given to Stanford engineers. Likewise, I may admire the majesty and grandeur of an Egyptian pyramid, a TVA dam, or a Saturn V rocket, but it doesn’t follow that I think they should have been created, let alone at taxpayer expense.
What kind of global computer network would the market have selected? We can only guess. Maybe it would be more like the commercial online networks such as Comcast or MSN, or the private bulletin boards of the 1980s. Most likely, it would use some kind of pricing schedule, where different charges would be assessed for different types of transmissions.
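What might such a pricing schedule look like? Purely as a guess, something like the sketch below, in which the traffic classes and per-megabyte rates are invented for illustration: delay-sensitive transmissions pay more, delay-tolerant ones pay less, and every sender faces a positive marginal cost for each megabyte sent.

# A purely hypothetical pricing schedule: the traffic classes and rates are
# invented for illustration, not drawn from any actual network's tariff.
RATE_PER_MB = {
    "real_time_video": 0.05,   # latency-sensitive traffic priced highest
    "voice_call":      0.03,
    "web_browsing":    0.01,
    "bulk_file_sync":  0.002,  # delay-tolerant traffic priced lowest
}

def transmission_charge(traffic_class, megabytes):
    """Charge by traffic class and volume, so the marginal cost of sending is no longer zero."""
    return round(RATE_PER_MB[traffic_class] * megabytes, 2)

print(transmission_charge("real_time_video", 700))  # 35.0
print(transmission_charge("bulk_file_sync", 700))   # 1.4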
The whole idea of pricing the internet as a scarce resource — and bandwidth is, given current technology, scarce, though we usually don’t notice this — is ignored in most proposals to legislate network neutrality, a form of “network socialism” that can only stymie the internet’s continued growth and development. The net neutrality debate takes place in the shadow of government intervention. So too does the debate over the division of the spectrum for wireless transmission. Any resource the government controls will be allocated based on political priorities.
Let us conclude: yes, the government was the founder of the internet. As a result, we are left with a panoply of lingering inefficiencies, misallocations, abuses, and political favoritism. In other words, government involvement accounts for the internet’s continuing problems, while the market should get the credit for its glories.