Harvard Journal of Law & Technology, Summer Issue 1996

SHOULD TECHNOLOGY CHOICE BE A CONCERN OF ANTITRUST POLICY?

Stan Liebowitz

University Of Texas At Dallas

Stephen E. Margolis

North Carolina State University

I. INTRODUCTION

The economic support for antitrust has always been that monopoly practices are socially harmful because they decrease total surplus. There is disagreement over whether economic efficiency is now or ever was the goal of antitrust, and there are scores of disagreements about exactly what practices result in monopoly inefficiencies. But where the economic rationale for antitrust is considered, that rationale invariably has to do with welfare losses that follow from behavior that is somehow related to restricted outputs and elevated prices.

But a new concern has recently arisen. It was raised in the White Paper that became a part of the antitrust action against Microsoft; it seems to be an active issue in the Justice Department; and it has become a significant theme in the economic literature of industrial organization. The issue can be stated as follows: Are there systematic tendencies for inefficient technologies to become established and resist replacement by superior alternatives? For example, do we drive cars with the wrong type of engines? Do we use the wrong type of nuclear reactors, improperly designed typewriter keyboards, an inferior VCR format, and a backwards computer operating system? If so, should these potential problems be the focus of antitrust? In particular, are there forms of business conduct that facilitate either premature commitment to inferior technologies or the maintenance of their incumbency?

Some analysts in the literature have argued that the answer to these questions is yes. The theoretical support comes from economic models of "path dependence" and "network externality." If this view is accepted as an appropriate concern for antitrust, it would have far-reaching implications. The concern shifts from monopoly versus competition to the choice of one monopoly over another; the task for antitrust shifts from preventing monopoly price elevation to choosing among alternative technologies.

The theories of path dependence and network externality are increasingly popular and have migrated from the realm of economic theory to policy. Microsoft's conduct in establishing standards has been the source of alarm in some circles, prompting hyperbole to the point that Microsoft's influence has been alleged to pose a threat to our very freedoms and way of life. These worries have made standards a new focus for antitrust policy. Yet the fundamental premises of these theories have received little in the way of critical examination, and empirical verification of these theories is sorely lacking. In this paper we put forward a model of how standards and products are established in the market. With this model we can illustrate the rather stringent conditions that are necessary in order for an inappropriate technology to become established as a standard. Our model shows that it is highly unlikely that antitrust policy could be used to improve upon even an imperfect result.

We begin, however, by discussing several aspects of this literature that have received considerable attention but which we believe are not well understood. Sections II through IV summarize arguments that we have presented elsewhere which address some of the fundamental claims of this literature.

II. NETWORK EXTERNALITIES

In making a choice between the Windows and Macintosh operating systems, most of us gave some thought as to what the people around us were choosing or were likely to choose. In deciding whether to switch to Windows 95 or stay with current operating systems, many of us consider what various software companies will do with their products that may provide a motivation to switch to the newer operating system. The software companies' decisions, in turn, depend on their expectations about the number of users who will switch to the newer operating system. Many choices are like this, with one consumer's choice depending on how other consumers are expected to behave. The term "network externality" has been used to denote these network elements. We prefer the term network effect, however, reserving network externality to apply only to those situations in which market failure causes inefficient exploitation of a network effect. This distinction is important because, while network effects may be found in abundance throughout the economy, network externalities--and the policy implications stemming from the attendant market failures--may be rare or nonexistent.

Michael Katz and Carl Shapiro's 1985 paper on network externality in the American Economic Review defines their subject matter as follows: "There are many products for which the utility that a user derives from consumption of the good increases with the number of other agents consuming the good." They add, "[T]he utility that a given user derives from the good depends upon the number of other users who are in the same 'network'...." This idea of a network embraces not only the physically connected examples of computer networks and telecommunications systems but also, according to Katz and Shapiro, goods such as computer software, automobile repair, and video games. It is easy to come up with many more examples of goods that exhibit these so-called "positive consumption externalities." When gourmet cooks more easily find preferred ingredients because more people are taking up their avocation, this would be a gourmet-network externality. When fans of live entertainment prefer big cities because the large market for entertainment assures a full variety of acts, this would be an audience-network externality. There is virtually no limit to these examples.

Although positive network effects have been the main focus in this literature, there is no reason that a network externality should necessarily be limited to positive effects. If, for example, a telephone or computer network becomes overloaded, the effect on an individual subscriber will be negative. When we admit the possibility of a negative network externality, the set of goods that exhibit network externalities expands strikingly. As members of a network of highway users, we suffer from a negative network externality because freeways are subject to crowding. And although a larger installed base of computer users might lower the price of computer software, there are many goods, such as housing and filet mignon, where larger networks of users appear to increase the price of the good.

The problem with all of this is that it leads to the conclusion that almost every good exhibits network externalities, which in turn suggests that the concept has not been well specified. In our paper on this subject, we demonstrate that many of the kinds of things that have been called network externalities actually fall into a category that economists have called "pecuniary externalities." The important thing about pecuniary externalities is that while they are an effect that one person has on another, they do not involve any inefficiency. It is important to distinguish, therefore, between network externalities that involve some direct interaction among the network participants, and those that are mediated through the market.

Among the remaining class of network externalities, those that are "real" or nonpecuniary, the interaction occurs through increasing returns in production of some network-related good, or some direct interaction among consumers. For either case, a standard result is that as any network gets larger, it becomes increasingly advantaged relative to any smaller competitor networks that might exist. This leads, ineluctably, to a conclusion that only one network can survive in any market. This is equivalent to the phenomenon that economists have long called "natural monopoly." The problem here becomes the competition among the potential natural monopolists, a special case in the economics of increasing returns, which we take up now.

III. INCREASING RETURNS AND PATH DEPENDENCE

Path dependence has been offered as an alternative analytical perspective for economics. This theory takes increasing returns--economic jargon for the condition that bigger is better--as its starting point, and argues that markets and economies often get stuck with inferior products and standards. Traditional economic analysis, it is claimed, largely ignores increasing returns, but the "new" "positive feedback economics" embraces the possibility. The claim that is made for path dependence is that a minor or fleeting advantage or a seemingly inconsequential lead for some technology, product, or standard can have important and irreversible influences on the ultimate market allocation of resources, even in a world characterized by voluntary decisions and individually maximizing behavior. In short, we get started, perhaps for no good reason, down some path and we are unable to change to an alternative.

In our research, we define three distinct forms of the path dependence claim. The normative implications of these three forms differ sharply, but unfortunately the literature has previously treated all forms of path dependence as interchangeable. Two of these forms -- defined as first- and second-degree path dependence -- are commonplace. They do not materially differ from the "old" economics that they are said to replace, and they have no normative implications. Only the strongest form of path dependence, which we call third-degree path dependence, significantly challenges the old economics, claiming not only that market solutions are flawed, but also that there are identifiable and feasible improvements. However, the theoretical arguments for the occurrence of this form of path dependence require important restrictions on prices, institutions, or foresight. And this third form of path dependence has yet to receive any empirical verification.

First-degree path dependence is simple durability without error. Initial actions, perhaps insignificant ones, do put us on a path that cannot be left without some cost, but that path happens to be optimal (although not necessarily uniquely optimal). For example, a capricious decision to part one's hair on the left may lead to a lifetime of left-side parting, but the initial urge to part on the left might capture all there is to be taken into account. More seriously, a decision to use a particular electric system for powering the machinery in a plant may be a controlling influence for decades, but the long-term effects of the decision may be fully appreciated by the initial decisionmaker and fully taken into account.

Second-degree path dependence is durability in the presence of imperfect information. Information is never perfect. It is likely therefore that decisions will not always appear to be efficient in retrospect. If we claim that we committed to a good choice in light of available information, but that some other path now looks to be preferable, we are making a second-degree claim of path dependence. In such a case, initial conditions lead to outcomes that are regrettable and costly to change. But, if the current costs of changing are less than the benefits, the change is not made. Such a situation is not inefficient in any meaningful sense, however, given the assumed limitations on knowledge when the decision was first made.

Third-degree path dependence involves error. It occurs where there exists, or existed, some feasible arrangement for recognizing and achieving an outcome that is preferred to the one chosen, but that preferred outcome is not obtained. In this case a bad outcome is remediable, but not remedied. The occurrence of an error that is remediable but not remedied has significant normative policy implications. Such an error would constitute economic inefficiency.

The three types of path dependence make progressively stronger claims. First-degree path dependence is a simple assertion of an intertemporal relationship, with no implied claim of inefficiency. Second-degree path dependence stipulates that intertemporal effects propagate error. Third-degree path dependence requires not only that the intertemporal effects propagate error, but that the error was, or now is, avoidable.

The failure to distinguish among these three discrete forms of path dependence has led to some unfortunate mistakes. The error here involves transferring the plausibility of the empirical and logical support for the two weaker forms of path dependence (first- and second-degree) to the strongest implications of third-degree path dependence. Although it is fairly easy to identify allocations, technologies, or institutions that are path-dependent in some form, it is very difficult to establish the theoretical case or empirical grounding for path-dependent inefficiency.

The importance of path dependence would appear to reside in the third-degree form. The overwhelming share of first- and second-degree dependencies will be garden variety durabilities that have long been well-incorporated into economics. But if third-degree path dependence offers a "new economics," the question arises: Does such a phenomenon exist, and if so, what conditions bring it about?

Brian Arthur and others have suggested that the phenomenon does exist. Their work is based on a rather simple story that can be summarized briefly. Where there is value in being compatible with others, consumers choosing a standard, such as a videorecorder format, will forecast compatibility on the basis of the number of people already committed to each standard; they will therefore tend to choose only the best-established standard, even if it is inferior to less well established alternatives. In our critical writing on this, we have shown that this model, or story, relies on extraordinary restrictions that are not likely to be satisfied for real-world choices. In the following, we present a richer story to consider the possibility of getting stuck with the wrong technology.

IV. STANDARDS CONTESTS AS A METAPHOR FOR TECHNOLOGY CHOICES

Rivalries between competing technologies can be thought of as rivalries between standards. Standards are the conventions or commonalities that allow us to interact. Recent examples of battles over standards are numerous: video recording formats, audio taping, audio compact discs, video disks, computer operating systems, spreadsheets, word processors, telecommunications protocols, and HDTV. Standards, networks, and technologies are similar in that the benefits to an adopter of any of these may depend upon the number of adopters. For example, the benefits of a technology may depend on widespread availability of expertise, a body of problem solving experience, and compatibility. Similarly, it is inherent in the nature of a standard that the benefits that accrue to an adopter will depend on the number of other adopters.

The application of path dependence and network externality theories has offered a pessimistic prognosis for firms that would attempt to displace an incumbent standard. It suggests great difficulty, for example, in replacing one generation of software with another. This would seem to promise great rewards for the firm that did manage to control a standard, suggesting that an entrenched standard might fall behind the capabilities of the best available technology without inviting a viable threat from a rival. This was the kind of concern that was raised in the Microsoft case.

There are, however, important shortcomings with this "entrenched incumbents" view. First, it leaves us without an explanation of the successful replacement of one technology with another. How did VHS displace Beta, or graphical user interfaces displace character-based commands, or compact discs replace records, or automobiles replace horses and carriages? Obviously, displacement is quite common. Second, the empirical support for such entrenchment is notably lacking. The continued use of the ever-popular QWERTY-versus-Dvorak keyboard story and the Beta-versus-VHS story is a sad commentary on the lack of respect for historical accuracy that has afflicted this literature, as we discuss infra.

The following model has implications that contradict the entrenched incumbents view. It does so by incorporating different characterizations of the production and purchase of goods that embody standards. It allows separate consideration of the coordination advantage of standards (called synchronization effects) and the production technology of these goods. Further, it allows consideration of differences in tastes among consumers. With these departures in modeling come some important results, including these:

The expected effect of a "standards externality" is on the amount of the standard-using activity, not on choice of standard or the mix of standards.

Where there are differences in preferences regarding alternative standards, coexistence of standards is a likely outcome. Further, a single-standard equilibrium, if it is achieved, is more readily displaced by an alternative if preferences differ. This suggests that product strategies leading to strong allegiances of some group of customers are likely to be effective in the face of an incumbent standard.

Entrenched incumbents are less entrenched when consumers react to new sales, and not just the accumulated stocks of goods that embody standards. In particular, a challenging standard that achieves a significant flow of adoptions is shown to be viable. This contrasts with previous models in which a significant installed base gives the incumbent standard an insurmountable advantage.

A. A Model Of Standards Rivalry

The model is based on a fundamental purpose of standards: Standards facilitate interaction among individuals. The term "synchronization" is used to refer to this effect. Synchronization is the benefit received by users of a standard when they interact with other individuals using the same standard. In general, synchronization effects will increase with the number of people using the same standard, although it will often be the case that users' benefits will be less closely tied to the total number of other users of a standard and more closely tied to the number of users with whom they actually interact.

These synchronization benefits are distinguished from the ordinary scale effects on production costs. Synchronization effects in our model may coexist with increasing, decreasing or constant returns to scale. We will demonstrate that neither scale economies in production nor synchronization effects are by themselves necessary or sufficient conditions for an outcome where only one standard survives.

Although it is almost taken for granted among many commentators that average production costs fall with increases in output for most high technology, standardized goods, we are not so sure that this is correct. There are, we would agree, many examples where standardization is associated, rightly or wrongly, with lower prices. The past two decades have witnessed decreases in the costs of computing power, telecommunications, and video-recording, accompanied by increases in the use of computer software, new methods of communications, and video recorders. Consequently, theories that invoke economies of scale have had an easy time capturing our attention.

But there is no reason to believe that the goods referred to as "high-tech" are necessarily subject to increasing returns to scale. The technical advances associated with new technologies may easily disguise actual diseconomies of scale in production. This is the difference between a movement of an entire cost schedule or curve, and a movement along a single schedule, a point made in almost all elementary economic texts, and one that is well understood by economists.

Being able to distinguish between these possibilities on an empirical level, however, is another matter. Some of the most eminent economists, such as Alfred Marshall, have confused a shift in average cost curves over time with movements down a single average cost curve. Advances in technology are likely to lead to increases in output and lower prices, but this should not be confused with economies of scale in production.

These new high-technology goods are also likely to be associated with unsettled format choices. The eventual adoption of a standard, which may take several years or even decades, often occurs simultaneously with improvements in technology, making an examination of correlations between time series of standardization efforts and production costs misleading. Certainly, an empirical association exists between the adoption of standards and decreases in costs: IBM's personal computer became the dominant format, and computer and software prices fell while the number of computers and programs rose; prices of fax machines and modems fell dramatically after settlement on a standard compression routine. However, the drop in costs associated with the standardization of many new technologies cannot be taken as evidence in favor of increasing returns in the production of standardized goods, since the new technologies often lead to rapid decreases in (quality-adjusted) costs over time, with or without standardization. For example, although VCR prices fell after VHS won its standardization battle with Beta, VCR prices had also fallen while both formats possessed significant market shares.

The model that follows provides independent consideration of the impacts of synchronization effects and production cost economies and diseconomies. While the synchronization effect, like the effect of any ordinary fixed cost of production, favors the domination of an industry by a single format, it does not guarantee such a result.

B. A Model Of Standard Selection

Consider a setting in which two formats compete. Current consumer choices are affected by the market share of each format during a recent time period. A consumer commits to a format, for at least a while, by purchasing a product with that particular format. For concreteness and familiarity, the discussion will be presented as a choice between Beta and VHS, in which commitment to a format occurs with the purchase of a VCR.

For several reasons, we assume consumers make purchase decisions on the basis of shares (percentage of market controlled by a standard) rather than scales (total output of a standard). First, there is the issue of synchronization costs: if most of the world uses VHS, the fact that the number of Beta users is increasing may be largely irrelevant. Second, for any given overall scale of the standard-using activity, relative share determines relative scale. Finally, consumer choices will often be for one format versus another, so that it is the relative, not absolute, benefit of the standard that will affect consumer decisions.

1. The Consumer

Assumptions about consumer values that are the basic building blocks of our model are shown in figure 1. The horizontal axis shows, for the most recent time period, the market share of one format. In our example, the horizontal axis is the share of VHS VCRs as a percentage of all VCRs sold during this period.

We define the autarky value of an individual's investment in a VHS video recorder to be its value assuming no interaction among VHS users (i.e. no other VHS users). A VCR presumably has value even if tapes are never rented or exchanged. But a positive autarky value is not required for the model. In some activities, such as communication with fax machines or modems, it is reasonable to assume an autarky value of zero.

The synchronization value is the additional value that results from the adoption of the format by other consumers. By assumption, the synchronization value assigned by a potential consumer is directly correlated with increases in the consumer's estimate of that format's future market share. Further, we hold that consumers use the format's current market share to estimate the future share of the stock. Thus, the synchronization value of VHS increases with its share of the market.

Total value, defined as the autarky value plus the synchronization value, will increase as the format's market share increases.


Figure 1 shows the value of a format to an average consumer based on its share of the current period's sales (flow).

2. Production

For many standards, an individual's adoption of the standard occurs with the purchase of a single standard-embodying good, such as a computer, a camera, a typewriter, or a videocassette recorder. For these standards, the conditions of production will influence outcomes in social choices regarding standards.


Production of VCRs could be subject to increasing, decreasing or constant cost. For now, we will assume price-taking behavior by producers. For a given total quantity of VCRs sold, the flow of a particular format will, of course, increase directly with the share. Figure 2 shows the supply price function under the assumption that VCR production involves increasing cost. (Other specifications of cost are allowed and discussed below. Here, the figure illustrates a single possible configuration.)

From these relationships, a net value function for videorecorder formats can be derived. The net value function is equal to the total value (the autarky value plus the synchronization value) less supply price. Since the total value increases more rapidly than supply price in figure 2, the net value increases as VHS's share of the market grows.

The net value functions for machines with the Beta format can be constructed in the same fashion. Net value functions will be upward-sloping if the supply price function is less steeply upward-sloping than the synchronization value function. Put differently: if decreasing returns in production overwhelm synchronization benefits, the net value line falls with market share; if synchronization benefits outweigh decreasing returns in production, or if production exhibits increasing returns, then the net value curve is upward-sloping, as in figure 3.
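These relationships, stated in the text only in prose, can be collected compactly. The notation below is ours, not the article's: A is the autarky value, S(x) the synchronization value at VHS share x, and P(x) the supply price at the output implied by that share.

```latex
% Net value of a format at VHS share x (notation ours):
%   A    autarky value, independent of share
%   S(x) synchronization value, increasing in x
%   P(x) supply price at the output implied by share x
N(x) = A + S(x) - P(x)
% The net value curve slopes upward exactly when synchronization
% benefits grow faster with share than the supply price does:
N'(x) > 0 \iff S'(x) > P'(x)
```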

As we shall see, it is only when the net value function is upward-sloping that choices between standards are fundamentally different in character from choices of other goods (i.e. exhibit increasing returns instead of decreasing returns). We assume throughout the analysis that the slope of the net value function for a given format has the same sign for all consumers.

The net value functions for Beta and VHS are put in a single diagram in figure 3. As VHS share varies from 0% to 100%, Beta share varies from 100% to 0%. If the two formats have identical costs and benefits, the Beta net value curve will be the mirror image of the VHS net value curve.

The intersection of the two curves (if they intersect), labeled Di, represents the market share at which the consumer is indifferent between the two formats. This value plays a crucial role in our analysis. On either side of Di, the consumer will have a preference that depends on the slopes of these curves. For example, if each net value curve is upward-sloping with respect to its own market share, as in figure 3, the consumer will prefer VHS when its market share increases beyond Di (VHS has higher value, relative to Beta, as the VHS share increases beyond Di). If the two net value curves are downward sloping with respect to their own market shares, however, the consumer will prefer Beta as VHS share increases beyond Di.

Note that this analysis assumes that the consumer does not take into account the impact of his decisions on other consumers (i.e. he does not consider how his purchase of a video recorder will alter the value of video recorders to other potential purchasers). Therefore, the door is still left open for some sort of (network) externality.

3. The Market

Each customer has an individual Di, an equilibrium point at which the two formats are equally valuable. Accordingly, a population of customers will have a distribution of Di's. Let G(x) be the fraction of VCR purchasers with Di < x; that is, G(x) is the cumulative distribution function for Di. This distribution is a key to the selection of a standard.

Perhaps the most basic distribution would be one in which all consumers had the same tastes, so that Di is the same for all consumers. Call this common value Di*. The resulting cumulative distribution is shown in figure 4. The cumulative function gives the share of the population that will buy VHS next period for each possible current market share of VHS.

Looking at figure 4, we can now see that the candidates for equilibrium are A, B, and C. Points A and C are single-format equilibria which are stable: for flows near 0% VHS, all consumers will choose Beta; for flows near 100% VHS, all consumers will choose VHS. In contrast, B is an unstable equilibrium. At flows near but to the left of Di*, all consumers would choose Beta; at flows near but to the right of Di*, all consumers choose VHS. So, for the case of upward-sloping net value curves, we obtain the either/or choice that is often argued to be the expected outcome for standards. An upward-sloping net value curve, however, is nothing more than the traditional "natural monopoly."

Consider the outcome for downward-sloping net value curves. In this case, all consumers with Di less than the prevailing flow choose Beta. The function G(x) thus reveals the fraction choosing Beta. The function 1-G(x), which is the fraction choosing VHS, is shown in figure 5. The only possible equilibrium is B, a stable equilibrium. At points near, but to the left of Di*, VHS machines are more advantageous than Beta machines (through effects on supply price) and more consumers would choose VHS. Similarly, displacements of equilibrium to the right of Di* would increase the relative advantage of Beta machines, moving the outcome back to the left.


Consumers split their purchases so that a VHS purchase and a Beta purchase have identical net value. This describes a circumstance in which the formats will coexist. This result is significant because it demonstrates that even without differences in taste (which favor coexistence), it is still possible for a mixed-format equilibrium to exist.
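A compact restatement of the two cases just described, again in notation of our own rather than the article's: an equilibrium is a fixed point of the map from this period's VHS share of sales to next period's share.

```latex
% Next-period VHS share of sales as a function of the current share
% x_t (percent), when every consumer has the same indifference point
% D* (notation ours):
x_{t+1} =
  \begin{cases}
    100 \cdot \mathbf{1}[\,x_t > D^*\,] & \text{(upward-sloping net value curves)} \\
    100 \cdot \mathbf{1}[\,x_t < D^*\,] & \text{(downward-sloping net value curves)}
  \end{cases}
% Upward case: rest points at 0 (point A) and 100 (point C), with D*
% (point B) the unstable boundary between their basins of attraction.
% Downward case: the flow is pushed back toward D* from either side,
% so B is the unique stable equilibrium.
```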

The mere existence of synchronization effects can now be seen as insufficient to establish the either/or choice with respect to standards. That is because synchronization effects cannot, by themselves, ensure upward-sloping net value curves.

This model of standardization provides some interesting insights. The nature of the equilibrium, either as a mixed format or as an either/or equilibrium, depends on the slopes of the net value curves, and synchronization effects are only part of the story. For example, upward-sloping net value curves can occur when supply price falls, even when there is no synchronization effect. The existence of synchronization effects, the raison d'être of standardization, also does not rule out the possibility of downward-sloping net value curves, and the resulting efficient coexistence of formats. Synchronization effects, therefore, are neither necessary nor sufficient conditions for an either/or equilibrium.

In fact, it is possible that the either/or equilibrium is mostly driven by production costs and not network effects. For example, if software categories were to be dominated by single entries, it would likely be due to the large fixed cost element in the production of software titles as opposed to synchronization effects. But arguments that network effects might lead to software monopolies (as claimed of Microsoft) miss the point. Software creation may be just a newer version of a natural monopoly in terms of old fashioned, prosaic production costs, which are quite independent of any network effects. Large fixed costs leading to (natural) monopoly can just as well be used to characterize the publishing or movie business.

Yet what would be the implications for antitrust? If the market is a natural monopoly, whether due to synchronization or production costs, there would be no benefit in trying to force the market into a competitive structure with many overly small firms having excessively high production cost structures and low synchronization values for consumers. The government might wish to award natural monopoly franchises, as it does for most public utilities, but the history of publicly regulated utilities does not inspire confidence that technological advancement would be promoted, or that costs would be kept down. Since high technology changes so frequently, a firm that achieves monopoly with one technology will not be able to hold on to its lead unless it is extremely resourceful. This further argues against the value of government intervention in technology markets.

4. Internalizing Synchronization Costs

Thus far, the model addresses only private valuations and their effects on outcomes. Since the literature has been preoccupied with how one consumer's format choice affects the values enjoyed by others, we should also examine how internalizing this externality would affect standard choice. We must note, however, that a single owner of a technology or standard is capable of internalizing the impact of consumers' behavior through prices. The following discussion therefore applies to the case in which a technology is not owned by a single entity.

To this point the net value curves have represented private net benefits. Since the synchronization effect is always assumed to have a positive effect on other users of the same format, the social net value function, which includes the synchronization value to others, will always lie above the private net value function, regardless of the slope of the private net value function. The difference in height depends on the relative strength of the synchronization effects and the format's market shares. For example, at zero share of VHS, the VHS private net value curve will be the same as the VHS social net value curve. That is because, where there is no user of VHS to benefit from this individual's purchase, the private and social values must coincide. Where VHS has a positive market share, the social net value curve is everywhere above the private net value curve. This case is shown in figure 6. As the share of VHS increases, and the number of potential beneficiaries of this individual's VHS purchase increases, the difference between the social and private net value curves increases. The same would be true for Beta net value curves.
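The geometry just described can be put in one line. In our notation (not the article's), let E(x) denote the external synchronization benefit that a marginal VHS purchase confers on the x percent of buyers already using that format.

```latex
% Social versus private net value at VHS share x (notation ours):
N_{social}(x) = N_{private}(x) + E(x),
  \qquad E(0) = 0, \qquad E'(x) > 0
% At x = 0 there is no other VHS user to benefit, so the curves
% coincide; the gap E(x) widens as the share of VHS users, and hence
% the number of beneficiaries of a marginal purchase, grows.
```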

Depending on the relative sizes of the synchronization effects on users of the two formats, the intersection of the social net value curves can be to the right or left of the intersection of the private net value curves. In the particular case where the two formats attract users with the same levels of potential interaction and where the private net value curves are the same, internalizing the synchronization externality will have no effect on any individual's Di, and thus no effect on the potential equilibria.

In the more likely case where the Di's move to the left or right, the cumulative distribution function will also move in the same direction. In that case, internalizing the synchronization externality may lead to a different equilibrium.

But even if the Di* in figure 4 moves left or right somewhat, when the market starts near point A, that will remain the equilibrium, and if it starts near point C, that will remain the equilibrium. Thus even if internalization of the externality changes the Di's, the final market equilibrium need not change. Internalizing the synchronization effect thus might have no impact on the choice of format.

There is one dimension where the internalization of the synchronization effect always has an impact, however. The private net value functions consistently undervalue videorecorders. Therefore, it is not the relative market shares, but rather the size of the overall market, that will be affected by this difference between private and social net value functions. Too few videorecorders of either type will be produced if the synchronization effect is not internalized by the market participants. Internalizing the externality enhances both VHS and Beta, causing consumption of VCRs to increase even if market shares remain constant. This is completely compatible with the conventional literature on ordinary externalities. All this is really saying is that too little of a product will be produced if there is a positive externality (e.g. too few golf courses, or too few copies of Microsoft Excel) and too much will be produced if there is a negative externality (e.g. pollution). This is a far more likely consequence of "network externalities" than the more exotic case of winding up with the wrong standard.

C. Extending The Model

There are several natural extensions of this model. The assumption that all consumers have the same Di can easily be relaxed. Allowing consumers to differ in their Di's acknowledges differences in tastes. These differences may reflect different assessments of the formats, different synchronization values, or both.

Assume that the Di's for consumers range between 20% and 80% (VHS), and that within this range the distribution of Di's is uniform, as illustrated in figure 7. The height of the distribution of Di's indicates the slope of the cumulative distribution function. The cumulative distribution function, therefore, has a straight-line segment between (20,0) and (80,100), as shown in figure 8, and intersects the 45-degree diagonal at points A, B, and C. If the net value functions are upward-sloping with respect to own market share, A and C would be stable equilibria and B would not. This type of uniform distribution of Di's thus gives the same general result as the assumption that all consumers have identical Di's: we tend to get an either/or equilibrium.

If the net value function were falling with respect to own market share, the corresponding figure would be the vertical mirror image of figure 8. Point B would be the only stable equilibrium in flows. Consumers would buy the format that they most valued, unless it suffered a cost disadvantage due to its popularity. With decreasing returns, we expect many formats (brands, producers) in the market. Because this result is so standard, we focus our attention on the less standard case where net value rises with market share, i.e. where natural monopoly in production is a possible outcome.
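Both of these cases can be checked with a small numerical sketch. The code below is ours, not the article's, and it adds two assumptions of convenience: the Di's are uniform on [20%, 80%] as in figure 8, and only a fraction of buyers (here one quarter per period) is in the market at any one time, reflecting the assumption that a purchase commits a consumer to a format for a while.

```python
# A minimal sketch (ours, not the article's) of the share dynamics of
# figure 8 and its mirror image. Assumptions: indifference points Di
# uniform on [20, 80], and partial adjustment -- only a fraction LAMBDA
# of buyers is in the market in any one period.

LAMBDA = 0.25  # assumed fraction of buyers purchasing in a given period


def g(x: float) -> float:
    """G(x): percent of buyers whose Di is below the current VHS share x."""
    return max(0.0, min(100.0, (x - 20.0) * 100.0 / 60.0))


def next_share(x: float, upward: bool) -> float:
    """Next-period VHS share of sales, given the current share x (percent).

    With upward-sloping net value curves, buyers with Di < x choose VHS,
    so the target share is G(x); with downward-sloping curves, 100 - G(x).
    """
    target = g(x) if upward else 100.0 - g(x)
    return x + LAMBDA * (target - x)


def settle(x0: float, upward: bool, periods: int = 300) -> float:
    x = x0
    for _ in range(periods):
        x = next_share(x, upward)
    return round(x, 1)


# Upward-sloping case: B (50%) is unstable, so small leads compound.
print(settle(49.0, upward=True), settle(51.0, upward=True))    # 0.0 100.0
# Downward-sloping case: the market returns to B from either extreme.
print(settle(10.0, upward=False), settle(90.0, upward=False))  # 50.0 50.0
```

Changing the assumed distribution of Di's or the adjustment speed alters how quickly the market settles, but not which points are stable; stability is governed by the slope of the net value curves.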

1. Strong Differences In Tastes

Up to this point, the results of the model indicate that when net value curves are upward-sloping, the equilibrium will be of the either/or type. This need not be the case. Figure 9 shows a distribution of Di's representing the very reasonable case where each format has a fairly large number of adherents, with the rest of the population of Di's thinly (and uniformly) distributed between 20% and 80% VHS.

The distribution of Di's in figure 9 results in the cumulative distribution function shown in figure 10. The only stable equilibrium in this case is point B. The differences in tastes allow two standards to coexist in a stable equilibrium, even where net value curves are upward-sloping. This is an important result. In those instances in which each format offers some advantages to different groups of customers, we should expect to find that different formats appeal to different people. When this is so, formats can coexist in a market equilibrium, and individual consumers are not deprived of one of the choices.

It is important to point out that this is the path that markets are likely to follow when there are strong natural monopoly elements. Although a Hotelling model might predict that two firms will produce nearly identical products, we would expect (entrant) firms to try to specialize their products to appeal to particular groups of users. This is, after all, one simple way for firms to overcome any natural monopoly advantage that an incumbent might enjoy in production costs. The incumbent firm, on the other hand, might do well to create products that appeal to the widest possible audience in an attempt to foreclose this possibility.

There are some straightforward implications here. First, even when there are economies of scale and/or network effects, the market can allow more than one format to survive. The key to success is to find a market niche and to produce a product that is as close to the preferences of that market segment as possible. Unless the established firms are much larger and have much lower costs, the superior characteristics of the entrant's product, as viewed by the consumer niche, will provide sufficient advantage for the entrant to survive. If each producer can produce a product that appeals to a segment of the population, then the situation represented by figure 10 will occur. That this result is so grounded in common sense does not, to us, diminish its value.

2. Results When One Product Is Superior To Another

Defining a superior standard is more complicated than might be thought. In the rather lopsided case of one format having higher net values than another for all consumers at all market shares, that format clearly would be superior. It is also not difficult to see that in this case, no Di would occur in the interior of 0-100%, and that the only equilibrium is at a share of 100% for the superior format. But it is not common to find such lopsided circumstances. Strongly held, but divergent, preferences lead to different results. If some individuals prefer format A, regardless of share, and others prefer format B, regardless of share, then it is not clear that either can be said to be superior.

For our purposes, however, we shall define standard A to be superior if, for all consumers and any market share X, the net value of A is higher than the net value of B with the same market share (e.g. if all consumers prefer A with 100% share to B with 100% share; similarly, all prefer A when both A and B share 50% of the market).

Assume that VHS is the superior standard. The Di's will then all be less than 50%, since individuals would choose Beta only when it had the dominant market share. Assume that the Di's are uniformly distributed between 0% and 20%. Then the cumulative distribution function lies on or above the 45-degree line everywhere, touching it only at the endpoints, as shown in figure 11. Figure 11 is the same as figure 8 except that the upward-sloping segment is displaced to the left. A and C are the only two equilibrium points, but only C is a stable equilibrium. This analysis implies that if society starts at 100% Beta, it could get stuck at A, but only if no one ever purchases a single VHS machine. The trap at A, being an unstable equilibrium, is incredibly fragile.
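A quick check of figure 11's geometry, in our own notation: the uniform Di's on [0%, 20%] give the cumulative distribution below.

```latex
% Di uniform on [0%, 20%] (notation ours):
G(x) = \min(5x,\; 100), \qquad 0 \le x \le 100
% Equilibria solve x = G(x): on the sloped segment, 5x = x only at
% x = 0 (point A); for x > 20 the map sends every share toward 100
% (point C). Any positive VHS share, however small, escapes A.
```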

In this case it is almost certain that the superior format dominates the market. If VHS is superior and both formats originate at the same time, VHS will win unless Beta, although inferior, can somehow capture and keep a market share of 100%. This would seem an almost impossible task for the Beta producers. It is unlikely, however, that both formats would come to market at the same time. If VHS arrives first, Beta need not bother showing up. If Beta arrives first, as it in reality did, then it has a market share of 100% prior to the arrival of VHS. If the entrenched stock is large and if it also has an influence on expected future market shares, then the distribution of Di's would be shifted to the right. This implies the possibility of an equilibrium that is different from C. This is the instance of being 'stuck' in an inferior format.

D. An Example Of Getting Stuck

It is not difficult to alter the previous example so that A becomes a stable equilibrium, even though VHS is preferred by all consumers. One simple alteration is to assume minor changes in the conditions represented in figure 11. For example, as noted above, Beta might have an advantage in the existing stock, and consumers might take into account the established base of previous sales in addition to sales this period. Under that assumption we let the Di's range between 10% and 30%, instead of the former 0% and 20%. The market now can be represented by figure 12. Because all consumers prefer Beta when the share of Beta is greater than 90%, the cumulative distribution function is no longer always above the diagonal, and point A becomes a stable equilibrium in addition to point C. Point B, at 12.5% VHS, now is an unstable equilibrium.
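A one-line check, again in our own notation, of where point B comes from:

```latex
% Di now uniform on [10%, 30%], so for shares in that range
% (notation ours):
G(x) = 5(x - 10)
% The interior equilibrium B solves x = G(x):
x = 5(x - 10) \implies 4x = 50 \implies x = 12.5
% Below a 12.5% VHS share of sales the flow collapses to A (all
% Beta); above it, the flow carries the market to C (all VHS).
```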

Notice that the possibility of getting stuck does not require the existence of any synchronization (network) effect. Upward-sloping net value curves are all that are necessary, and this can be achieved merely with old-fashioned scale economies in production.

E. Getting Unstuck

Under the conditions discussed above, where the market settles at A, owners of the VHS format have an incentive to alter conditions to attempt to dislodge the market from A. One method might be to dump a large number of VHS machines on the market, perhaps by lowering the price, in order to generate an immediate 12.6% market share, driving the equilibrium to C.

Producers of VHS can also try to prime the pump on sales by providing deals to the largest users, or distributors, or retailers (perhaps offering side payments) to convince them to switch to VHS. If this action can push VHS's market share above 12.5%, VHS can dislodge Beta [as of course it did]. Of course, if the VHS format were not owned, there would have been a potential free-rider problem for the VHS producers to solve before these strategies could have been adopted.

There are other alternatives as well, including advertising, publicity, and services to allow partial or total compatibility. [VHS, with RCA's expertise, did put on a large publicity blitz in the US]. Interestingly, VHS, through a combination of lower prices, clever advertising, and most of all, a product considered superior by most consumers, overtook Beta within six months of introduction in the US.

It is important to note that the larger the difference between the two formats, the easier it is for the superior format to overcome any initial lead of an inferior standard. For truly large differentials, we should expect diagrams like figure 11, not figure 12. Thus, the greater the potential error in the choice of a standard, the less likely it is that an error would be made.

Additionally, the greater the difference between the formats, the greater the difference in their potential profits, and the more likely it is that the superior format can get financing to engage in the pump-priming activities that we alluded to above. In a circumstance like the ones presented above, all other things equal, the technology that creates more wealth will have an advantage over a technology that creates less. While the owner of a technology may not be able to appropriate its value perfectly, owners of a superior format can be less perfect at overcoming their appropriation problems and still win the competition.

The role of antitrust should be, basically, to get out of the way here. The various pump-priming measures discussed above may well look predatory, but the superior format must be allowed to engage in actions that can help ensure it survives and prospers, particularly if it is not the first format offered to users. If the superior technology is offered first, we are unlikely to see a sustained attempt to dislodge the leader by the owners of inferior technologies, unless they expect that they can achieve their ends through political means, since their expenditures in the market are likely to be futile. If government is to do anything useful, it should help to ensure that the capital market is functioning properly so that new technologies have access to sufficient financing. The recent episode with Netscape and its enormous market capitalization seems to indicate that such financing is more than abundant.

It may not always be apparent how or if a technology is owned. Ownership of a technology can take various forms, including ownership of critical inputs, patent, copyright, and industrial design. Literal networks such as telephones, pipelines, and computer systems are most often owned by private parties. Sony licensed the Beta system, JVC-Matsushita the VHS system. Standards are often protected by patent or copyright. Resolution of these startup problems may be an important and as yet not fully recognized function of the patent system and other legal institutions.

F. Other Methods For Getting Unstuck

Transactions are one method for avoiding an inefficient standard or moving from one standard to another. In some circumstances, the number of people who interact through a standard is small enough that transactions are a feasible method of resolving any externalities regarding the standard. A small group of engineers working together can certainly agree to use a different CAD package. Or an extended family can coordinate the choice of camcorder format so that tapes of grandchildren can be exchanged.

Another tactic for dislodging an inferior standard is convertibility. Suppliers of new-generation computers occasionally offer a service to convert files to new formats. Cable-television companies have offered hardware and services to adapt old televisions to new antenna systems for an interim period. For a time before and after the Second World War, typewriter manufacturers offered to convert QWERTY typewriters to Dvorak for a very small fee.

All of these tactics tend to unravel the apparent trap of an inefficient standard. But there are additional conditions that can contribute to the ascendancy of the efficient standard. An important one is the growth of the activity that uses the standard. If a market is growing rapidly the number of users who have made commitments to any standard is small relative to the number of future users. Sales of audiocassette players were barely hindered by their incompatibility with the reel-to-reel or eight-track players that preceded them. Sales of sixteen-bit computers were scarcely hampered by their incompatibility with the disks or operating systems of eight-bit computers.

We thus conclude that instances of getting stuck with the wrong standards, when the standards are chosen in the market, should be few and far between. In the next section, we present a summary of our prior work that critically examines two popular case studies used to support the notion of an inferior standard "trap." We conclude by applying the lessons of this work to the recent computer operating system debate.

V. EMPIRICAL EXAMPLES OF STANDARD CHOICE

A. The Fable Of The Keys

Paul David introduced economists to the conventional story of the development and persistence of the current standard keyboard, known as the Universal, or QWERTY, keyboard. Paul Krugman, in his recent book "Peddling Prosperity," speaks approvingly of this entire literature in a chapter entitled "The Economics of QWERTY." The significance of the keyboard example to this literature cannot be overstated.

QWERTY refers to the letters in the upper left-hand portion of the typewriter (and computer) keyboard. One commonly hears the claim that, to keep the old-fashioned mechanisms on early typewriters from jamming, the mechanics who created the keyboard actually designed it to slow down typing speed. The claim is made that QWERTY's ascendance was due to a serendipitous association with the world's first touch-typist, who won a famous typing contest using the QWERTY design. The QWERTY design is reputed to be far inferior to the "scientifically" designed Dvorak keyboard, which was claimed to offer a 40% increase in typing speed. Supposedly, the Navy conducted experiments during the Second World War demonstrating that the costs of retraining typists on the new keyboard could be fully recovered within ten days of their retraining! According to path dependence theory, no producers found it profitable to create Dvorak keyboards since everyone already knew QWERTY, and no one learned Dvorak because there were no Dvorak keyboards.

This is an ideal example, which accounts for its continued use by virtually every author looking for an example of path dependence. The dimensions of performance are few, and in these dimensions the Dvorak keyboard appears overwhelmingly superior. The example is so good that one might think it would have to have been invented had it not already existed. Although the story was not invented by economists, there is a great deal of invention in the story as told. Certainly, this story has not been held to rigorous standards of scientific skepticism, since the story is false in almost every detail.

The QWERTY keyboard, it turns out, is about as good a design as the Dvorak keyboard, and was better than most competing designs that existed in the late 1800s when there were many keyboard designs maneuvering for a place in the market.

Ignored in these stories of Dvorak's superiority is a carefully controlled experiment conducted under the auspices of the General Services Administration in the 1950s comparing QWERTY with Dvorak. In the experiment, a group of typists was retrained on the Dvorak keyboard. When these retrained Dvorak typists regained their prior QWERTY speed, a group of QWERTY typists began additional training on the QWERTY keyboard, while the new Dvorak typists continued their training. This parallel training is important because it is always possible to improve a typist's performance on any keyboard with additional training. The QWERTY typists were carefully selected to constitute a proper control group for the Dvorak typists, and other scientific controls were applied. The conclusion of the study was that the QWERTY typists always performed better than the Dvorak typists. Thus the experiment contradicted the claims made by advocates of Dvorak and concluded that it made no sense to retrain typists on the Dvorak keyboard. This study, which was influential in its time, brought to an end any serious efforts to shift from QWERTY to Dvorak.

Modern research in ergonomics also reaches similar conclusions. This research consists of simulations and experiments that compare various keyboard designs. It finds little advantage in the Dvorak keyboard layout, confirming the results of the GSA study.

So on what basis were the claims of Dvorak's superiority made? We discovered that most, if not all, of the claims of Dvorak's superiority can be traced to the patent owner, Professor August Dvorak. His book on the relative merits of QWERTY versus his own keyboard has about as much objectivity as a modern infomercial found on late-night television.

The wartime Navy study turns out to have been conducted under the auspices of the Navy's chief expert in time-motion studies, Lt. Commander August Dvorak, and the results of that study were clearly fudged. The study compared the performance of two groups of typists, one that trained in Dvorak and another that trained in QWERTY. The two groups were not comparable, and the data on the two groups were not treated in the same way. For example, the typing speed for the Dvorak group was measured on the first and last days of training, while the data for the QWERTY group were measured as the averages of the first four days and the last four days. This clearly truncated the effective training time for the Dvorak group. The study also appears to be lacking in anything remotely related to objectivity. The difficulties that we had getting a copy of the Navy study, and the fact that it is mentioned but never actually cited, convinced us that those economists enamored of the Dvorak fable never actually perused a copy of that study.

Many other aspects of the received story were also erroneous. It turns out that there was intense competition between producers of various keyboard designs early in the history of the typewriter keyboard. And contrary to prior claims, there were many typing competitions between touch typists on various keyboard designs, and QWERTY won its share of such competitions. QWERTY was put through a fairly severe set of tests by the market, and the reason QWERTY survives seems to be that it is a reasonably good design. Thus it is not by incredible luck that we ended up with a reasonable standard. Rather, our good fortune in inheriting a reasonably efficient standard may be attributed to QWERTY's success in these severe tests.

We published a very detailed account of this in the Journal of Law and Economics in the spring of 1990. Yet in spite of this six-year-old paper, which has not been factually disputed, economists working on path dependence topics continue to use the QWERTY keyboard as the main example to support their theory that markets cannot be trusted to choose products. One could hardly find better evidence of this theory's lack of empirical support than the continued use of a result that is known to be incorrect.

B. A Tale Of The Tape: Beta Vs. VHS

After the typewriter story, the second most popular illustration of harmful lock-in is the contest between the Beta and VHS videotaping formats. It is often claimed that Beta was the better format and that VHS won the competition only because it fortuitously gained a large market share early in the competition with Beta. But this story turns out to be just as inaccurate as the keyboard story.

In 1969, Sony developed a cartridge-based videorecorder, the U-matic, which it hoped to sell to households. Since other companies had such products in the works, Sony persuaded Matsushita and JVC to produce the machine jointly with Sony, and to share technology and patents. The U-matic was not a success as a home machine, though it did find a niche in educational markets. The U-matic was followed by many other unsuccessful attempts to break into the home market.

In the mid 1970's, Sony developed the Betamax. Believing that with the Betamax it finally had a machine that would succeed in the home, Sony again offered the machine to Matsushita and JVC. Once again, Sony hoped to establish a standard that would cut through the clutter of competing formats. But months after Sony had revealed much of its technology to its erstwhile partners, JVC demonstrated a new machine (VHS) that led Sony engineers to conclude that JVC had expropriated their ideas. This apparent usurping by JVC of Sony's technological advances created bitterness between the one-time allies, leaving Sony and Matsushita-JVC to go their own separate ways.

The only real technical difference between Beta and VHS was the manner in which the tape was threaded and, more importantly, the size of the cassette. The choice of cassette size was based on differing perceptions of consumer desires. Sony believed that a paperback-sized cassette, allowing easy transportability (although limiting recording time to one hour at the time), was paramount to the consumer, whereas Matsushita believed that a two-hour recording time, allowing the taping of complete movies, was essential.

The larger VHS cassette accommodated more tape. For any given tape speed, this implied a greater recording time. Slowing the tape increases recording time but also degrades picture quality. Because of its larger cassette, VHS could always offer a more advantageous combination of picture quality and playing time. This difference was to prove crucial.
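
The arithmetic behind this dominance is simple and worth making explicit (the notation and framing here are ours, offered only as an illustration). If a cassette holds tape of length $L$ and the tape moves past the heads at speed $v$, recording time is

\[ T = \frac{L}{v}, \qquad L_{\mathrm{VHS}} > L_{\mathrm{Beta}}. \]

At any common tape speed $v$ - and picture quality depends on $v$ - the larger cassette yields the longer recording time; equivalently, to match any given playing time, VHS could run its tape faster than Beta and so deliver the better picture. Either way, the format with more tape in the cassette dominates on the quality-versus-playing-time frontier.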

In an attempt to increase market share, Sony allowed its Beta machines to be sold under Zenith's brand name, a highly unusual move for Sony. To counter this move, Matsushita allowed RCA to put its name on VHS machines. Although Sony was able to recruit Toshiba and Sanyo to the Beta format, Matsushita was able to bring Hitachi, Sharp, and Mitsubishi into its camp. Beta slowed down the tape and increased its playing time to two hours; VHS did the same and increased playing time to four hours. RCA radically lowered its machines' prices and came up with a simple but effective ad campaign touting VHS's advantage: "Four hours. $1000. SelectaVision." Zenith responded by lowering the price of its Beta machine to $996.

The market's referendum on playing time versus tape compactness was decisive and rapid. Beta had an initial monopoly that held up for almost two years. But within six months of VHS's introduction in the US, VHS was outselling Beta. These results were repeated in Europe and Japan. By mid 1979 VHS was outselling Beta by more than 2-to-1 in the US. By 1983, Beta's world share was down to 12 percent. By 1984, every VCR manufacturer except Sony had adopted VHS.

Not only did the market not get stuck on the Beta path, it was able to make the switch to the slightly better VHS path. Notice that this is anything but path dependence. Even though Beta got there first, VHS was able to overtake Beta very quickly. This, of course, is the exact opposite of the predictions of path dependence, which imply that the first product to reach the market is likely to win the race even if it is inferior to later rivals. For most consumers, VHS offered a better set of performance features, and the market outcome, in which VHS prevailed, is exactly what those consumers wanted.

The lesson offered to us in the path dependence literature is that markets cannot be trusted to choose the right products. We argue that a better lesson is that public policies and legal theories should not be based on a literature that is itself based on only the most casual sort of empirical analysis.

C. Computer Operating Systems - Mac Versus IBM

It is often claimed that the Macintosh operating system is better than either the DOS system or the DOS-based Windows system that followed. However, these standards are not fixed, but instead can and do evolve. The IBM operating system evolved into one that is very similar to the Macintosh. It is possible, in fact, that the Macintosh was introduced too early, for its operating system was more than the hardware of the time could handle with reasonable performance and cost.

DOS had advantages when processors were slow and memory was scarce, since text-based displays could be rendered much more rapidly and required far less memory. Printers also were not generally up to the task of printing graphical images of pages, except for PostScript printers, which required large amounts of memory, a very expensive license to use the PostScript page-description language, and a fast processor to interpret the language and convert the textual commands into graphical images. For ordinary businesses and ordinary users, the advantages of the Macintosh were largely extravagances that could easily be forgone. Even Windows did not really take off until the power of computers was able to overcome its sluggish performance relative to DOS.

As processors, hard drives, and memory increased in speed and power, graphical interfaces increased in attractiveness. Printers also increased commensurately in power. If the MS-DOS world were still using DOS, there is little doubt that Macintosh would have dramatically increased its market share and might now be the dominant brand. But Microsoft apparently understood this. Windows, and now Windows 95, have migrated toward the Macintosh path (which in fact was the path originated by the Xerox Palo Alto Research Center), so the original Macintosh backers were correct in their view that many of the features that confronted the user in the Macintosh system were theoretically and aesthetically better than DOS. The fact that a particular brand did not dominate should not be confused with the inability of a technology to dominate. Again, individual choices led to a solution that appears to be efficient.

VI. CONCLUSIONS

High technology goods, and computer software in particular, pose interesting problems for economic analysis. It may be that some types of software products should be produced by only a single supplier. But this is not the usual venue for antitrust. There might be reason to intervene in the market if there were evidence that rivalry in the marketplace was moribund, but the evidence seems to be overwhelmingly to the contrary. There might be reason to intervene if there were evidence that these industries were seriously deficient in technological progress, but there is no such evidence. There might be reason to overturn the market's selection of a standard if it could be shown that markets are systematically deficient at such choices. But, as we have shown, this is an unlikely event, and there is as yet no evidence to support such a view.

We have presented a different view of how markets generally function. In our model, individuals have foresight, entrepreneurs have ambition, and knowledge is a prized asset. In the alternative worldview, consumers are myopic and entrepreneurs are either timid or impotent. In that world it is not surprising that accidents have considerable permanence and that mistakes are not corrected, for there are no agents who might profit by devising some means of capturing a part of the aggregate benefits of correction.

If we follow the advice given by many proponents of concepts such as path dependence and network externalities, we will likely be handicapping a sector of the economy that has been one of the most powerful sources of growth, innovation, and vitality in domestic and international markets, if not the most powerful. This government interference with high-technology markets would not be based on well-supported theories of monopoly behavior, but rather on theories that are highly speculative and generally without empirical support. Further, attempts to convert these theories into an antitrust agenda, as proposed in the Reback White Paper, have carried these economic theories to outlandish extremes.

The misuse of economic theory for public policy purposes cannot be in the country's long-run interest. Even if one does not like Microsoft, its CEO, or its products, it is still a mistake to use antitrust as an instrument with which to bludgeon Microsoft, since there is no telling where the misuse of antitrust will next appear. The high technology marketplace appears to be quite capable of disciplining any firm that does not address the needs of its consumers, as is demonstrated by the extraordinary rate of turnover of product leaders in these markets. Above all else, the theory that is alleged to underpin such antitrust action is a theory that, at best, is of limited applicability and, at worst, is simply wrong. Consumers, manufacturers, regulators, and economists will all be better off when our discourse is based on theories that have empirical confirmation in the real world.

    FOOTNOTES

  1. See Gary L. Reback et al., Why Microsoft Must Be Stopped, UPSIDE, Feb. 1995, at 52, 52-67. These authors argue that Microsoft's ownership of operating system standards will be leveraged into eventual domination of the entire information mechanism of society:

     It is difficult to imagine that in an open society such as this one with multiple information sources, a single company could seize sufficient control of information transmission so as to constitute a threat to the underpinnings of a free society. But such a scenario is a realistic (and perhaps probable) outcome.

  Id. at 65.

  2. Arguments of this type were apparently important in the federal district court's decision to reject a settlement between Microsoft and the United States Department of Justice. United States v. Microsoft Corp., 159 F.R.D. 318, 333-38 (D.D.C. 1995) (Sporkin, J.), rev'd, 56 F.3d 1448 (D.C. Cir. 1995). "Microsoft is a company that has a monopolistic position in a field that is central to this country's well being, not only for the balance of this century, but also for the 21st Century....In this technological age, this nation's cutting edge companies must guard against being captured by their own technology and becoming robotized." 159 F.R.D. at 337-38. The Justice Department's recent examination of the Microsoft Network and Windows 95 seems to be based on similar reasoning, particularly since it is hard to imagine any reasonable context, using standard antitrust criteria, for such an investigation at the embryonic stages of a product. Even the roadblocks thrown up by the Justice Department during Microsoft's proposed acquisition of Intuit (the leader in personal finance software) seem likely to have been influenced by such thinking.

  3. The material in this section draws from the authors' previous publications on the subject of network externalities. See generally S.J. Liebowitz & Stephen E. Margolis, Network Externality: An Uncommon Tragedy, J. ECON. PERSP., Spring 1994, at 133; S.J. Liebowitz & Stephen E. Margolis, Are Network Externalities a New Source of Market Failure?, 17 RES. L. & ECON. 1 (1995).

  4. Liebowitz & Margolis, Network Externality: An Uncommon Tragedy, supra note 3, at 135. Other researchers seem to have adopted this distinction between network effect and network externality. See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, J. ECON. PERSP., Spring 1994, at 93, 95.

  5. Michael L. Katz & Carl Shapiro, Network Externalities, Competition, and Compatibility, 75 AM. ECON. REV. 424, 424 (1985).

  6. Id. at 424.

  7. Liebowitz & Margolis, Are Network Externalities a New Source of Market Failure?, supra note 3.

  8. The material in this section draws on a previous publication by the authors. See generally S.J. Liebowitz & Stephen E. Margolis, Path Dependence, Lock-In, and History, 11 J.L. ECON. & ORG. 205 (1995).

  9. W. Brian Arthur, Positive Feedbacks in the Economy, SCIENTIFIC AMERICAN, Feb. 1990, at 92 passim.

  10. For an illustration of the role that this idea of path dependence has played in challenging the neoclassical economic paradigm, see the recent exchange between Samuel Bowles & Herbert Gintis, The Revenge of Homo Economicus: Contested Exchange and the Revival of Political Economy, J. ECON. PERSP., Winter 1993, at 83; and Oliver E. Williamson, Contested Exchange Versus the Governance of Contractual Relations, J. ECON. PERSP., Winter 1993, at 103. See also Oliver E. Williamson, Transaction Cost Economics and Organization Theory, 2 INDUS. & CORP. CHANGE 107, 131-32, 141 (1993) (discussing influence of institutional characteristics and state of knowledge on scope for improving on market outcomes).

  11. For one instance in which efficiency claims are evident, see Paul A. David, Heroes, Herds and Hysteresis in Technology History: Thomas Edison and 'The Battle of the Systems' Reconsidered, 1 INDUS. & CORP. CHANGE 129, 137 (1992) ("The accretion of technological innovations inherited from the past therefore cannot legitimately be presumed to constitute socially optimal solutions provided for us--either by heroic entrepreneurs, or by herds of rational managers operating in efficient markets").

  12. See supra note 9.

  13. See Liebowitz & Margolis, Path Dependence, Lock-In, and History, supra note 8, at 214.

  14. Liebowitz & Margolis, Are Network Externalities a New Source of Market Failure?, supra note 3; Liebowitz & Margolis, Path Dependence, Lock-In, and History, supra note 8.

  15. See R.J. Levinson & M.T. Coleman.

  16. We have defined synchronization to have a meaning similar to that which the literature has given to the term compatibility. E.g., Katz & Shapiro, supra note 5, at 424-25.

  17. Marshall thought that increasing returns were the norm for production of all goods except agricultural and extraction goods. However, as Stigler pointed out, Marshall's discussion of increasing returns indicates that he confused movements along the cost curves with movement of the cost curves. GEORGE J. STIGLER, PRODUCTION AND DISTRIBUTION THEORIES 68-76 (1941). See also H.S. Ellis & W. Fellner, External Economies and Diseconomies, 33 AM. ECON. REV. 493 (1943). Some modern authors have made the same claim, almost precisely echoing Marshall. E.g., W. Brian Arthur, supra note 9.

  18. It is possible, and perhaps likely, that the competition between VHS and Beta enhanced the speed of innovation as the formats fought for market leadership. Increased recording time, hi-fi sound, wireless remote controls, increased picture resolution, etc. all came about very quickly, with each format striving to keep ahead of the other. We are somewhat surprised that there are few, if any, suggestions that competition between formats might be beneficial in the same way as competition between producers. See Dennis W. Carlton & J. Mark Klamer, The Need for Coordination Among Firms, With Special Reference to Network Industries, 50 U. CHI. L. REV. 446 (1983) (illustrating the traditional view of a tradeoff between competition and efficiency).

  19. If there were production economies at the firm level, we should see many natural and entrenched monopolies. Yet many early leaders of new technology industries are not those who now dominate their industries - e.g., Sony's Betamax videorecorder, Digital Research's CP/M operating system, VisiCalc's spreadsheet standard, and Lotus 1-2-3 (which appears to be losing to Excel).

  20. Although it may appear that we are modeling consumer behavior only with respect to the purchase flow, the impact of stocks will be added to the model later. A somewhat more general mathematical model based on both stocks and flows gives the same basic results. S.J. Liebowitz & Stephen E. Margolis, Don't Handcuff Technology, __ UPSIDE __ (September 1995).

  21. We assume that all members of the network are equally likely to interact with any other user. If some members of the network were more important than others (i.e., presented a greater likelihood of interaction), the overall share would be less relevant than shares weighted by the importance of members in the network.
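
    In symbols (our notation, a sketch rather than part of the formal model in the text): if user i interacts with others with relative weight $w_i$, the effective network share of format A would be

    \[ \tilde{s}_A = \frac{\sum_{i \in A} w_i}{\sum_{i} w_i}, \]

    which reduces to the simple market share when all of the $w_i$ are equal.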

  22. Of course, for any consumer, the two net value curves need not have the same sign. Moreover, different consumers need not have the same signs on their net value curves. In the latter case, there would be a group of customers with density functions like figure 4 and another group with density functions like figure 5; the overall density function would be a mixture of the two. In the former case, if one format had a positively sloped (with respect to market share) net value curve and the other a downward-sloping net value curve, the relative size of the slopes in absolute terms would decide whether the result was a mixed-share or an either-or equilibrium. If the upward-sloping curve were steeper than the downward-sloping curve, the result would be identical to the case in which both curves are upward-sloping, and the either-or result would prevail. If the upward-sloping curve were less steep, the results would be the same as when both are downward-sloping, and a mixed-share equilibrium would prevail.
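
    The slope comparison can be stated compactly (again in our notation, as an illustrative sketch). Let $v_1(s)$ be format 1's net value at its own share $s$ and $v_2(1-s)$ be format 2's net value at its share $1-s$, and define $g(s) = v_1(s) - v_2(1-s)$, so that

    \[ g'(s) = v_1'(s) + v_2'(1-s). \]

    An interior crossing $g(s^*) = 0$ is a stable mixed-share equilibrium when $g'(s^*) < 0$ and a tipping point leading to an either-or outcome when $g'(s^*) > 0$. With $v_1' > 0$ and $v_2' < 0$, the sign of $g'$ turns on precisely the comparison of the two slopes in absolute terms described above.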

  23. Even here we shouldn't let ourselves be seduced by the natural monopoly story. Yes, the (large) fixed costs imply an element of natural monopoly, but after millions of copies have been sold, how steep is the slope of the average fixed cost curve? We suspect that for many software products, the fixed costs are overwhelmed by variable costs.
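
    A worked example, with figures chosen purely for illustration: average fixed cost at output $q$ is $F/q$, whose slope is $-F/q^2$. Taking a hypothetical $F = \$50$ million and $q = 5$ million copies,

    \[ \frac{F}{q} = \$10 \text{ per copy}, \qquad \frac{d}{dq}\!\left(\frac{F}{q}\right) = -\frac{F}{q^2} = -\frac{5 \times 10^7}{(5 \times 10^6)^2} = -\$0.000002, \]

    so selling another copy lowers average fixed cost by two ten-thousandths of a cent - for practical purposes, a flat curve.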

  24. This discussion invokes the usual assumption that the supply function does not reflect a real or technological externality.

  25. One possible consequence of internalizing the synchronization effect occurs when the sign of the slope of the social net value function differs from the sign of the slope of the private net value function. Since the social net value function must have a larger slope than the private net value function, this change in sign can occur only when the private net value function is downward-sloping and the social net value function upward-sloping. In this case, the private net value function implies a mixed-share equilibrium, but an either-or equilibrium would result if the externality were internalized.
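
    The slope claim can be seen directly (our notation, sketching the argument): write the social net value function as $S(s) = P(s) + E(s)$, where $P$ is private net value and $E$ is the external synchronization benefit. So long as the external benefit rises with share, $E'(s) > 0$, we have

    \[ S'(s) = P'(s) + E'(s) > P'(s), \]

    and a reversal of sign between the two slopes can therefore occur only in the stated direction: $P'(s) < 0 < S'(s)$.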

  26. Sanford V. Berg, The Production of Compatibility: Technical Standards as Collective Goods, 42 KYKLOS: INT'L REV. FOR SOC. SCI. 361, 362 (1989).

  27. In fact, when VHS came to the US market, largely under the RCA brand, it significantly undercut the price of Beta, although Beta almost immediately matched the price cut. JAMES LARDNER, FAST FORWARD 164 (1987).

  28. In fact, both the VHS and Beta camps, aware of the need to generate market share, allowed other firms to put their brands on videorecorders. This was the first time that Sony was willing to allow another firm to put its name on a Sony-produced product. LARDNER, supra note 27, at 159.

  29. It reached above six billion dollars although the company had virtually no profits and very small sales. STANDARD & POOR'S STOCK GUIDE (McGraw-Hill 1996) indicates that there were approximately thirty-eight million shares, which reached a price of $140 per share.

  30. Edmund W. Kitch, The Nature and Function of the Patent System, 20 J.L. & ECON. 265, 275-80 (1977).

  31. Present-day keyboard machines may be converted to the simplified Dvorak keyboard in local typewriter shops. "It is now available on any typewriter. And it costs as little as $5 to convert a Standard to a simplified keyboard." ARTHUR T. FOULKE, MR. TYPEWRITER: A BIOGRAPHY OF CHRISTOPHER LATHAM SHOLES 160 (1961).

  32. S.J. Liebowitz & Stephen E. Margolis, The Fable of the Keys, 33 J.L. & ECON. 1 (1990).

  33. Paul A. David, Clio and the Economics of QWERTY, 75 AM. ECON. REV. 332 (1985).

  34. PAUL R. KRUGMAN, PEDDLING PROSPERITY: ECONOMIC SENSE AND NONSENSE IN THE AGE OF DIMINISHING EXPECTATIONS (1994).

  35. David, Clio and the Economics of QWERTY, supra note 33, at 332.

  36. DIVISION OF SHORE ESTABLISHMENT & CIVILIAN PERSONNEL, NAVY DEPARTMENT, A Practical Experiment in Simplified Keyboard Retraining: A Report on the Retraining of Fourteen Standard Keyboard Typists on the Simplified Keyboard and a Comparison of Typist Improvement from Training on the Standard Keyboard and Retraining on the Simplified Keyboard (July 1944; Oct. 1944).

  37. For a full discussion of the use of this example in academic and other writings, see our article in REASON (forthcoming June 1996). For its use in one of the seminal papers on network externality, see Michael L. Katz & Carl Shapiro, Technology Adoption in the Presence of Network Externalities, 94 J. POL. ECON. 822 (1986). For one of his frequent uses of QWERTY, see W. Brian Arthur, Positive Feedbacks in the Economy, supra note 9. See also Paul Krugman, Peddling Prosperity, supra note 34; Gary L. Reback et al., Why Microsoft Must Be Stopped, supra note 1.

  38. Earle P. Strong, A Comparative Experiment in Simplified Keyboard Retraining and Standard Keyboard Supplementary Training (U.S. General Services Administration 1956).

  39. See our discussion supra note 37.

  40. Liebowitz & Margolis, The Fable of the Keys, supra note 32.

  41. Liebowitz & Margolis, The Fable of the Keys, supra note 32.

  42. Liebowitz & Margolis, The Fable of the Keys, supra note 32.

  43. Liebowitz & Margolis, The Fable of the Keys, supra note 32.

  44. See our discussion supra note 37.

  45. Liebowitz & Margolis, Path Dependence, Lock-In, and History, supra note 8.

  46. LARDNER, supra note 27, at 152.