Chapter 2 - Background
2.21 Management Information Systems -- 1959-1972
The transition from a Government-dominated to a competitively driven computer market-structure occurred in the early 1960’s. Until then, Government created buying demand by purchasing products and stimulated technological innovation by funding R&D. It both pulled and pushed the market. But by the middle of the 1960’s, local interactions among increasingly informed commercial customers and aggressively motivated computer firms dictated market dynamics. Understanding the coming changes to the computer market-structure through the 1980’s requires insight into why and how companies first computerized. It was in the 1960’s that organizations began integrating computers into the very nature of how they did business, and in the process drove computer revenues from $600 million in 1960 to $7 billion in 1968 – a compounded growth rate of 36 percent a year.88
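The quoted growth rate can be checked with a back-of-the-envelope compound-growth calculation over the eight years 1960-1968, using the revenue figures cited above:

```latex
\left(\frac{\$7{,}000\text{ million}}{\$600\text{ million}}\right)^{1/8} \approx 1.36
\quad\Longrightarrow\quad
\text{roughly 36 percent compounded annual growth.}
```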
No two companies computerized in exactly the same way. Each differed in its needs, objectives, the year it started, the computer systems it used, and, of course, its unique human and organizational competencies and resources. However, once a corporation began using computers, the patterns of subsequent use were much the same for all companies. Invariably, saving money was the initial justification for computerization. Making better decisions came next. Once executives became dependent on computers, they then wanted the information on-line. Finally, by the early 1970’s, integrated systems became the drive to computerize.89 These four rationales, or stages of use, will now be explored.
The first reason to computerize, to save money, treated the computer as just another piece of capital equipment. If return on investment (ROI) calculations based on costs and savings exceeded the corporate hurdle rate, the investment was made. Large bureaucratic organizations employing large numbers of people, and generating volumes of routine “operational” data, found it easy to identify personnel cost savings. Hence, banks and insurance companies were the first to computerize, just as they had been the first to make extensive use of punch card systems.
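The ROI test described above can be made concrete with a stylized example; the figures here are purely illustrative, not from the source:

```latex
\text{ROI} \;=\; \frac{\text{annual clerical savings}}{\text{computer investment}}
\;=\; \frac{\$300{,}000}{\$1{,}500{,}000} \;=\; 20\%
\;>\; 15\% \text{ hurdle rate}
\;\Longrightarrow\; \text{invest.}
```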
The save-money logic was compelling during a time when the clerical workforce in American industry was growing rapidly. According to Census Bureau reporting, the number of people working in clerical or closely related jobs had grown to 1 out of every 7 workers by 1958, in contrast to 1 out of every 20 in 1910, and 1 out of every 10 in 1940.90 In 1958, nine million people worked in clerical positions. Aiding the identification of cost savings in clerical “paper pushing” systems were “time and motion” studies based on the principles of industrial management best reflected in the theory and work of Frederick W. Taylor.91
When organizations first computerized, the departments involved generally retained total responsibility. But soon Data Processing (DP) departments were created with corporate-wide responsibility to assure best, and shared, use of computers. The emergence of DP departments clearly indicated executives were beginning to view computers as necessary to the conduct of business.92 Even so, these first efforts were about keeping information, not using information.
One effect of computerization on corporations was theorized as a trend to “recentralization” – more executive management control. This stood in contrast to the “decentralization” underway since before World War II.93 With mainframe computers housed at corporate headquarters, executives no longer needed to disperse authority to where record keeping was performed. Instead, using the information residing on the co-located computers, executives found they could exert direct influence throughout their organizations. And once having absorbed the costs of a computer, the motivation became to extend use to new problems or opportunities. (Economically speaking, new computer applications only had to be justified using variable costs, the fixed costs of the computer having already been accounted for.)
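The parenthetical economic logic can be sketched as a decision rule (a stylized summary of the argument above, not a formula from the source): with the machine’s fixed costs already absorbed, a new application needed only to clear its own incremental costs.

```latex
\text{justify a new application if:}\quad
\text{incremental benefit} \;>\;
\underbrace{\text{programming} + \text{added machine time}}_{\text{variable costs only}}
\qquad\text{not}\quad
\text{benefit} \;>\; \text{variable costs} + \text{allocated fixed costs.}
```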
Concurrent with the processing of more and more company data, a new rationale justifying computer use emerged – making better decisions. The premise was that executives could make better decisions using formal techniques. Techniques were means of segmenting, analyzing, and finding solutions to problems – improving the likelihood of better decisions. Better decisions resulted in competitive advantage, superior innovation, more profits, etc. – all justifications for further investment in computers and techniques. Making better decisions significantly broadened the concept of computer use beyond simply lowering operational costs.
These new techniques were known as “operations research.”94 Operations research dates to World War II and owes more to John von Neumann than to anyone else – the very same man who conceived the concept of the “stored-program” computer. In 1944, von Neumann and Oskar Morgenstern published “Theory of Games and Economic Behavior.” No single event did more to create operations research.95 Game theory applies mathematics to understanding and “solving” problems of conflicting interests.96 During the war, problems such as the design of shipping convoys, or deciding how many tanks the Germans could build, were the focus. After the war, game theory found a welcome home in a world in which man viewed himself as competitively constrained, operating in an increasingly complicated and complex world, with economic gain turning on right decisions. Theories initially assumed man was “rational” and had “perfect information.” With time, others objected – arguing decision makers did not have perfect information but instead had “limited knowledge and skills.” In 1957, Herbert Simon introduced the concept of satisficing as an alternative, more realistic, way of viewing decision making than one of finding the “perfect” answer.97 “Bounded rationality” would come in the future. Many techniques besides game theory comprise operations research, including: decision theory, linear programming, dynamic programming, Bayesian analysis, PERT, and CPM.
The marriage of techniques and computers was soon used to inform more than acts of deciding. It contributed to the notion of viewing an organization as a whole.98 By building computer models of the entire organization, then performing sensitivity analysis – seeing how changes would affect the whole – it became possible to study both strategic and tactical implications. Arising coterminously were planning and strategic development departments. Organizations began prioritizing the hiring and training of technically-trained personnel, particularly in management.99 Computers and their consequences represented an effort to cope with an increasingly complex world while, at the same time, contributing to its complexity. Reflective of the challenges, the discipline of management science emerged – one deeply influenced by operations research and the growing presence of computers.
Yet cries of protest were heard.100 Management was much more than techniques and algorithms embedded in computer programs, these critics reminded all. They voiced concerns about the character and humaneness of the future workplace. In short, computerizing everything did not make sense.
DP departments gained in stature and influence as organizations succeeded in using computers for more than saving operational costs. Reflecting this new importance, DP departments became Management Information System (MIS) departments – connoting the transition from keeping information to managing the use of information. A self-organizing “fraternity” emerged among those selling computers, MIS professionals, trade personnel and, frequently, academics. This fraternity had a language and sub-culture of its own, one stressing technical competencies not the skills of general management – a difference often distancing MIS professionals from corporate executives. Nevertheless, few executives were willing to run the risk of not computerizing, leaving them little choice but to hope their MIS professionals would get the job done.
Once executives experienced the value of the timeliness of information, they wanted all the information immediately, in real time – on-line. But Second Generation computers were too slow and under-powered to support on-line performance. It was the same issue confronting those designing SAGE – the users wanted to interact directly with the computer and get results immediately. First and Second Generation computers executed in a mode known as “batch processing” – the exact opposite of on-line, real-time computing. Batch processing forced the user to submit a job, or request, to the MIS department, which processed each request according to its priorities. Results were given to the user later, after the fact. To executives needing answers now, batch processing would not suffice.
Why did corporate executives feel the need for on-line information? Because by the early 1960’s, the pace of change for business was accelerating, even if executives were not entirely sure why. In 1965, H. Igor Ansoff wrote in the Harvard Business Review: “The 20-year period since World War II has seen a continuing acceleration of product change. Triggered by accumulated technology and pent-up consumer demand, product innovation has become an increasingly important tool of competition and growth. To the business manager it has brought both opportunities and problems.”101
Another measure of this change: between 1946 and 1961, corporate research and development expenditures rose from $1.2 billion to $10 billion.102 The rapid proliferation of products, shorter product life cycles, the emergence of entirely new markets, slowing growth in more traditional markets, and the globalization of everything103 all contributed to the felt need by executives for not only computers, but computers capable of giving immediate answers. The power of the Third Generation salesman’s pitch came from the fact that it resonated with executives’ concerns and needs.
Their salvation, so IBM told them on April 7, 1964, simply required upgrading to its new family of computers – the System/360. But the System/360 proved to be exactly the opposite of salvation. It was a cataclysm. To those users with a prior investment in computers, it meant throwing away whatever they had done. To those just computerizing – and most users were just computerizing104 – the experience was traumatic. To all, it became a nightmare.
To the beleaguered MIS departments managing the upgrade to the System/360 – which meant mastering an operating system, rewriting all their existing application programs, changing over almost all their hardware, training and hiring personnel, and delivering on new applications that were often the justification for upgrading the computer in the first place105 – the experience was overwhelming. Trained, let alone experienced, people were nowhere to be found. There simply were none. Executives, knowing the risks and consequences of failure but having no prior computer management experience, could do little more than hope. And no one could have anticipated the problems of trying to create an operational system in the face of constant system software changes. Not until around 1968 was the system software for the System/360 considered reliable, even if not functioning as originally promised.
Into the maelstrom it had created jumped IBM declaring: “We will make it work.” And they did. It is calculated to have cost them $5 billion to do so – the largest investment in a new product up to then in American corporate history. No other firm could have risked such an investment. And in the process, IBM gained a head-lock on the market, forming long-lasting bonds with MIS departments and personnel. MIS departments using IBM computers were known as “Big Blue Shops.” (After the blue of IBM’s corporate image and their machines.)
The bonds between IBM and customers’ MIS departments were rooted in mutual inter-dependence. The more successful IBM was in selling computers to MIS departments, the more likely MIS departments controlled computerization in their companies. (MIS departments increased their control when they, not user departments, bought and managed the computers.) And the more control and influence MIS departments had, the more likely their budgets were to grow to buy more computers from IBM. The interdependence was inherent to the technology IBM was selling – big, expensive computers that had to exist behind air-conditioned, “glass” walls, and drove the corporate information architecture to centralized operations, the province of the MIS department.106 IBM reinforced these interdependencies in their practice of account control – each customer felt special and capable of calling on the resources of all of IBM. The cumulative results became structural inertia blinding both IBM and MIS departments to first minicomputers, and then more importantly, to personal computers.
To create on-line computing required connecting distant terminals to host computers. This need drove the innovation of early computer communication products and the emergence of a data communications market-structure – a story to be told in later chapters.
As the conversion to Third Generation computers by large organizations was being completed, roughly between 1968 to 1972, a new rationale for extending computer use emerged, one affecting the very ways companies would function and be organized: integrated systems.
A 1967 survey by the management consulting firm of Booz, Allen & Hamilton, published in 1968, reported: “the computer increasingly is penetrating and permeating all areas of major manufacturing corporations.”107 The evolution in use of computers by large organizations was now extending to every operation and activity. Not surprisingly, the next objective was for them all to work together as integrated systems. The counsel from MIS remained one of a hierarchically organized information system. The justification for using computers had become so compelling that when alternatives to mainframe computers became available, they too could be justified. Before long, the sheer number of computers being used by corporations eroded the paradigm of centralized computing institutionalized in MIS departments in the late 1960’s and early 1970’s.
Computers had been adapted to so many organizational activities that they had become indispensable. Organizations became thought of as information processors. “Organizations were no longer predominantly human systems,”108 wrote Bonczek, et al. (1981) of this period. Once computerized, existing organizations could do more, and better, and smaller organizations could perform at a scale far larger than their headcount would historically indicate. Consequently, firms selling services to other organizations became a growth sector – representing one out of every seven jobs in 1985 versus one out of every ten in 1969.109
Perhaps the most dramatic indication of how important computers had become since 1960, when their use was simply to save money, was that by 1967, executives felt that if they did not exploit computer systems, their competitors would – to their disadvantage. The same Booz, Allen & Hamilton survey concluded: “The day may not be far distant when those who analyze annual business failures can add another category to their list of causes – failure to exploit the computer.”110
In the brief period of a decade (1960-1970), corporate America’s attitude towards computers went from viewing them as a piece of capital equipment used to save clerical labor costs to viewing them as the cornerstone of the business. The transition to the ultimate objective, a “fully” integrated computer system, required two intermediate steps – using computers to make better decisions and putting information on-line. For most companies it would take until the 1980’s to complete the task – only then to find the task beginning again. Firms, meaning Big Blue Shops, entered the 1980’s having evolved an information infrastructure both centralized and hierarchical.
MIS departments emerged as computers became essential to the modern business corporation. For example, by 1976, salaries were the largest cost of computerization. When corporate overhead, the second largest source of costs, is added, the combined total is a proxy for the cost of internal infrastructure – $17.9 billion, or 46% of all costs. See Exhibit 2.24 – 1976 Computer Expenditures by Using Organizations. But in the 1980’s, technological and competitive advances proved the MIS structure to be structural inertia against the dynamics of distributed computing.
Exhibit 2.24 1976 Computer Expenditures by Using Organizations ($ billions)111

| Expenditure category | $ billions |
| --- | --- |
| Corporate overhead allocation | 8.0 |
| Rentals & leases | 6.0 |
| General purpose computer systems | 3.0 |
| Data communication equipment & lines | 2.6 |
| Software packages & facilities | 1.7 |
| Data entry equipment | 1.4 |
| Total user spending | 38.9 |
But now to the story of IBM and the System/360.
Computers in use exploded from 2,500 in 1959 to 50,000 in 1969. Datamation, January 1970, p. 69
Peter G. W. Keen and Michael S. Scott Morton, “Decision Support Systems: An Organizational Perspective,” Addison-Wesley 1978, p. 55-56
Ida Russakoff Hoos “When the Computer Takes Over the Office,” HBR, Jul/Aug 1960 , p. 103TBD
Ibid., Keen and Morton, p. 49
“Helping the Executive Make Up His Mind” Fortune April 1962
R. Duncan Luce and Howard Raiffa, “Games and Decisions,” John Wiley & Sons, Inc. 1957, p.3
Herbert Simon, “A Behavioral Model of Rational Choice”
“Free For All” Management Science February 1970
H. Igor Ansoff, “The Firm of the Future,” Harvard Business Review Sept/Oct. 1965, p. 176
To name a few of the most quoted: John Dearden (1964) “Can Management Information be Automated?,” Russell L. Ackoff (1967) “Management Misinformation Systems,” and Chris Argyris “Management Information Systems: The Challenge to Rationality and Emotionality.”
Ibid., p. 163
Between 1959 and 1969 the number of computers grew from 2,500 to 50,000.
Keen and Morton, p. 36
Third Generation computers – the System/360 – were a boon to computer professionals, with job growth at every level, especially programmers and system analysts (growing from “317,000 in 1967 to approximately 560,000 by 1970”). Said another way, in 1965, software costs had increased to “one-half of the total,” and in 1970 software “accounted for approximately 80% of total computer costs.”
Neal J. Dean, “The computer comes of age,” Harvard Business Review Jan/Feb 1968, p. 83
Bonczek, et al, p. 5
Citation and explanation that lower percent for manufacturing still meant jobs were created since total employment went up from…..
Ibid., p. 91
Ibid., p. 104