Few pundits are willing to question the overall efficiency of the market economy, but claims of market failure are still commonplace. One area where this complaint has been heard with some frequency is that of software reliability. Wall Street Journal tech columnist Walter Mossberg wrote a column last year entitled “I’m Tired of the Way Windows Freezes!” in which he indicts the reliability of his PC software.
As Mossberg puts it, “[Computers] should just work, all the time.” In a similar vein, San Jose Mercury News tech columnist Dan Gillmor complains, “...the attitude [of the technical industry] toward reliability and customer service has been scandalous.”
Over the years, any number of other writers have sounded similar themes: computer systems are too buggy, if programmers built houses we’d be afraid to live in them, and so on. In the view of these writers, software defects are a moral issue, instead of simply a result of the well-known trade-off between schedule, cost, and quality. For them, there is no trade-off possible—defects are a moral failing, and a complete absence of defects must be assured, whatever achieving this goal does to the cost and the schedule.
But is achieving bug-free software always in the customer’s best interest? Consider an example. I once went to work for a partnership that trades stocks with the partners’ capital. There, the people who specified the software, the people who managed its development, the people who paid for that development, and the end users were all the same people! There is little risk that such a group, in, for instance, its role managing the development, is misrepresenting its own interests in its role as users of the end product.
Nevertheless, I was shocked at the haste with which my first development effort was put into production, practically untested. One of the partners explained to me that this was not recklessness or ignorance, but simple accounting sense. For a company creating an automated trading system, one measure of its quality would be the ratio of “good trades” (i.e., trades the designers intended the system to make) to “bad trades.” If the average cost of a bad trade is $600, and the average benefit of a good one $400, then the system breaks even when 60% of its trades are good; at 61%, it is already profitable. Any pre-release testing beyond that point is costing the firm money. Further testing may make the system more profitable, but by 61% it is worth releasing.
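The partner’s arithmetic can be sketched in a few lines. The dollar figures are the hypothetical averages from the example above, not data from any real trading system:

```python
# Hypothetical averages from the example: a bad trade costs $600,
# a good trade earns $400.
GOOD_BENEFIT = 400.0
BAD_COST = 600.0

def expected_value_per_trade(good_fraction: float) -> float:
    """Expected profit per trade when `good_fraction` of trades are good."""
    return good_fraction * GOOD_BENEFIT - (1.0 - good_fraction) * BAD_COST

# Break-even: p * 400 = (1 - p) * 600  =>  p = 600 / 1000 = 0.6
break_even = BAD_COST / (GOOD_BENEFIT + BAD_COST)

print(break_even)                      # 0.6
print(expected_value_per_trade(0.61))  # roughly $10 profit per trade
```

At exactly 60% good trades the system neither makes nor loses money; every percentage point above that is pure expected profit, which is why shipping at 61% beats further testing on purely accounting grounds.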
After the publication of Mossberg’s column, one woman wrote to him: “I never have to reboot my refrigerator, no matter what I put in it.” But a refrigerator does the same thing with all inputs—keeps them cold. It doesn’t have to connect to your head of cabbage, format your waffles, recalculate the spiciness of your horseradish, or spell check the labels on your pickles. A refrigerator is a fairly simple device, and a refrigeration engineer could explain the inner workings of one to Mossberg’s correspondent in about half an hour.
On the other hand, modern computer systems are among the most complex devices humans have ever constructed. To achieve a moderate understanding of the inner workings of Windows NT or Linux, starting from scratch, would take years. Sounding the same theme as Mossberg’s correspondent, Gillmor writes, “The appliances we use at home do not crash.” He seems blissfully unaware of the existence of washing machine repairmen, plumbers, electricians, telephone repairmen, and the dozens of other trades that help maintain our homes.
Consumers’ fantasies about life in an ideal world where there is a surplus of everything are irrelevant to economics. In such a world there would no longer be economic goods, and the science itself would cease to be of importance. Consumers would no doubt love to have software that contained every feature they might ever want to use, was completely without defects, and merely appeared on a small portion of their hard drive at no cost.
But here in the real world, where resources are scarce, consumption involves choosing A while forgoing B. Software can be made more reliable only by leaving out features, or by increasing the cost of developing it, or both. Consumers’ preferences in regard to this trade-off are embodied in their actual purchases.
Gillmor says: “[Consumers] just don’t want to consider the possibility that low prices can mean not-so-great service… People have to patronize the companies that provide quality, and have to be willing to pay more.” He doesn’t seem to consider that consumers might be aware of this possibility and could rationally choose to trade quality for price. Someone using an online brokerage might decide that saving $15 a trade is worth suffering the service being down for one hour a month. But for Gillmor, making this seemingly rational trade-off is a sign of obtuseness on the part of the consumer.
Even in the market most thoroughly dominated by Microsoft, desktop operating systems, consumers are able to make their own choices on software reliability. There are, after all, two major varieties of Windows on the market. Windows 98, the cheaper of the two, is much more prone to crashing than Windows NT, the more expensive. This fact is well publicized in reviews and advice columns. Yet consumers overwhelmingly opt for Windows 98. Why shouldn’t they have this option?
Microsoft may or may not have a monopoly in some or all of its markets. Whatever the case, Microsoft certainly spends a great deal of time and effort putting out new releases of its products. As is often lamented, these releases tend to focus on new features, and generally contain about the same number of problems as previous releases. But if customers truly cared about the bugs more than the new features, even as a monopolist, wouldn’t Microsoft want to focus on that area, so as to sell more upgrades?
There is no reason to conclude that consumers are not getting the level of software reliability they prefer, given the scarcity of resources available to produce software. But let us imagine for a moment that this were not the case. Either explicitly or implicitly, views like those described above contain calls for the government to “do something” about the problem. Once interventionists believe that they have found an area in which the market’s behavior is, in some sense, “less than optimal,” they feel that they have fully justified the case for government intervention. But, as Nobel-Prize-winning economist Ronald Coase points out, they have barely begun: Even if most software is, in some sense, “too buggy,” what evidence exists that some government intervention could improve the situation?
A regime of government-enforced strict liability for all software products would decimate the industry, making the risks of developing software prohibitive for all but behemoths like Microsoft—hardly the result the interventionists desire! Another common suggestion is to enforce a licensing system for software engineers. However, such schemes, by driving up the cost of entry, serve to protect the salaries of those who can acquire the licenses and to raise the cost of software. I know of no evidence that practitioners who have, for instance, a master’s in computer science produce software that is more reliable than that of those who entered the field with no formal training.
The attempt to replace the actual preferences of consumers, as expressed in their willingness to pay real prices for real products, with fanciful musings about an ideal world in which one’s ends are achieved without expending any means except jawboning is doomed to disappoint. Such frivolity is not an attempt to strive toward an unrealizable standard, as there is no striving involved on the part of the daydreamer! Instead, these ideas, when acted upon by the state, only serve to cripple our ability to create the less-than-perfect goods that we less-than-perfect beings actually can produce.