Computer Systems

St. Isidore of Seville, shown above, is the patron saint of computers and the Internet. In the 7th century AD, he produced a 20-volume encyclopedia of knowledge from antiquity to his day. It was so popular that nearly a thousand manuscripts survive. For a summary and excerpts, see the review by Ernest Brehaut (1912).

IBM’s Future System of the Past

In September 1971, IBM began a major leap into the future called FS (Future Systems). It was intended to replace System/370 as the new hardware and software architecture for IBM and the world. At that time, there were many enthusiastic supporters, as well as quite a few Dilberts who could see a disaster looming. To understand the FS project, imagine a cast of thousands, which included most of the best and brightest in IBM as supporting actors, dozens of pointy-haired bosses in the middle, and a very strong-willed executive at the top, who had a hardware background, but no understanding of software design and development. There were some very good managers who understood the situation, but they were frustrated by the impossibility of turning the battleship around. Those who tried were branded as “not team players.”

In 1971, the warning signs were obvious to anyone who understood the complexities of hardware and software design, but upper-level management would not listen. By 1973, ironic jokes began to circulate, such as “Moses went into the desert and saw FS.” The acronym FS is a pun on the Hebrew word efes, which means “absolutely nothing”. In 1974, the symptoms were becoming obvious to everyone, but most people were afraid to ask “career-killing questions” in public. That’s when I wrote Memo 125, which earned me the everlasting enmity of certain managers who had not yet prepared their alibis. To illustrate the kinds of events that occurred in the waning days of the FS disaster, Bob Bacon drew a comic book he called The Adventures of Task-Force Tim. For more discussion and references to the historical developments, see the web site by Mark Smotherman.

Higher Level System (HLS)

In 1969, John C. McPherson, the only IBM vice president who could write a computer program, chaired a Machine Organization Concepts Study Group. It was convened in response to an inquiry by Bob Evans, who was the president of the System Development Division (SDD), which designed IBM’s largest mainframes. This study group was significant because (a) a decade earlier McPherson had chaired a committee that led to the hugely successful System/360; and (b) Evans had persuaded (or forced) competing projects in different divisions of IBM to implement the 360 architecture.

The study group met on a roughly biweekly schedule from November 1969 to February 1970. At their last meeting, they “unanimously endorsed” the following “resolution”:  “The Machine Organization Concepts Study Group has studied the question of feasibility and advisability of a higher level system and concludes that such a change of direction is both feasible and necessary and very advantageous to the Company’s expansion, both to new fields of application and to larger numbers of users. It offers a way for consolidating the advances in the knowledge in the use of machines in the past 25 years and forms a firm base for future development and will use to advantage new technologies.”

The Higher Level System Report was ambiguous about hardware-software tradeoffs. Appendix I of the report criticized conventional hardware designs and implied the need for something radically new. But page 1 of the report said that HLS could be implemented “from a fresh viewpoint without too great a departure from present computer organizations.” In fact, the Opel Task Force Report, which was released in September 1971, could have been implemented with minimal change to any System/370 hardware. As Figure 1 shows, the NMI (New Machine Interface) merely requires protection bits on any storage field that contains a pointer (hardware address). That is a good idea. But as Memo 125 showed, it’s a development project, not a research project. At that point, I switched my research goals from computer architecture to artificial intelligence. I started to design the Interactive Language Implementation System and to develop Conceptual graphs for a database interface.
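The NMI protection-bit idea can be sketched in software. The following Python model is my own illustration, not IBM code: all class and method names are invented. It models a memory of 64-bit words, each carrying one extra tag bit that marks a protected descriptor. An ordinary store clears the tag, so user code can never forge a pointer out of raw bits.

```python
class TaggedMemory:
    """Toy model of a one-protection-bit-per-word memory (illustrative only)."""

    WORD_MASK = 2**64 - 1

    def __init__(self, size):
        self.words = [0] * size       # 64-bit data words
        self.tags = [False] * size    # True = word holds a protected descriptor

    def store_data(self, addr, value):
        # An ordinary store clears the tag, so unprivileged code
        # cannot manufacture a descriptor out of ordinary data.
        self.words[addr] = value & self.WORD_MASK
        self.tags[addr] = False

    def store_descriptor(self, addr, target):
        # In hardware, only privileged code could take this path.
        self.words[addr] = target & self.WORD_MASK
        self.tags[addr] = True

    def load_via_descriptor(self, addr):
        # Dereferencing checks the tag: an untagged word is not a pointer.
        if not self.tags[addr]:
            raise PermissionError("word at %d is not a protected descriptor" % addr)
        return self.words[self.words[addr]]


mem = TaggedMemory(16)
mem.store_data(5, 42)           # plain data at address 5
mem.store_descriptor(0, 5)      # protected descriptor at address 0, pointing to 5
value = mem.load_via_descriptor(0)   # succeeds: tag bit is set

mem.store_data(1, 5)            # a forged "pointer" stored as plain data
try:
    mem.load_via_descriptor(1)  # rejected: tag bit is clear
    forged_ok = True
except PermissionError:
    forged_ok = False
```

The point of the sketch is how small the mechanism is: one extra bit per word, checked only on dereference, yet it is enough to make pointers unforgeable.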

Advanced Future Systems

The Advanced Future Systems project (AFS) began in 1969, a few months before the HLS study group. The AFS manager, Carl Conti, was a participant in the HLS group, and his contributions reflected some of the early work on AFS. Conti also hired some of the HLS participants to work on AFS, and he forged an informal coalition with Al Magdall of Endicott. As an IBM Fellow, Nat Rochester did not report to Conti, but he and his group were an integral part of the AFS project.

Unlike other FS proposals, the AFS architecture could be implemented on slightly modified conventional machines or on a new RISC design by John Cocke. As a consultant to the AFS project, Cocke had a strong influence on a design that permitted a wide range of hardware-software tradeoffs. During the summer of 1971, he discussed all the issues in Memo 125 with members of the AFS project. Unfortunately, the reorganizations in September put a man in charge who insisted on implementing PL/I data structures in hardware. Memo 125 could have been written in 1971, but no one would listen.

In December 1991, Carl Conti retired as an IBM Senior Vice President. His last major action before retiring was to force the AS/400 to adopt the same hardware as the Power series. With that change, the FS legacy was finally implemented on John Cocke’s RISC machine. It required just one addition to the hardware:  a single bit for every 64-bit word to indicate the presence of a protected descriptor. In 1971, Conti presented that same solution to the Opel Task Force, but he was too low in the corporate pecking order. The chairman, John Opel, was a former salesman. He knew how to sell computers, but not design them.

Lost Opportunities

In 1978, a scaled-down version of FS was announced as System/38. The greatest strength of HLS, AFS, and System/38 was a “one-level store”:  a virtual memory that eliminated the distinction between RAM and files on disk drives. That made System/38 much easier to program than earlier computer systems or even most systems today. But the instruction set of System/38 was essentially System/370 with descriptors. The microcode of any 370 system could have been modified to support it, and programming for FS could have begun in 1971.
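The flavor of a one-level store can be approximated on a modern system with memory-mapped files. The Python sketch below is an analogy, not System/38 code: a disk file is mapped into the address space, so the program reads and writes it like ordinary memory, with no explicit file I/O in its inner logic.

```python
import mmap
import os
import tempfile

# Create a small backing file on disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# Map the file into memory: it now behaves like an ordinary byte array.
with mmap.mmap(fd, 4096) as mem:
    mem[0:5] = b"hello"          # a "memory" write, persisted by the OS
    echoed = bytes(mem[0:5])     # a "memory" read of the same bytes

# The data survives in the file, even though the program never called write().
with open(path, "rb") as f:
    data = f.read(5)

os.close(fd)
os.remove(path)
```

In a true one-level store the mapping is universal and automatic rather than set up per file, but the programmer's experience is the same: persistent data is addressed exactly like RAM.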

As another option, John Cocke had designed a RISC machine, which eventually became the PowerPC. In 1994, under the name AS/400, it ran a later version of the System/38 software. But as early as 1971, Cocke had persuaded the AFS group that a conventional computer with the HLS descriptors and a well-designed compiler could run the FS software. That computer could be a modified System/370, his RISC design, or both. The first FS machine could have been delivered as an “option” of the IBM 3033, which was announced in 1977 and delivered in 1978.

An even better option was a high-speed System/370-compatible machine that Gene Amdahl had designed in 1969. Unfortunately, that project was canceled for political reasons. Therefore, Amdahl left IBM, got funding from Fujitsu, founded the Amdahl Corporation, and delivered his 470 system in 1975. Until the IBM 3033 in 1978, Amdahl’s machine was much faster than anything IBM was selling. With suitable microcode, a design by Cocke or Amdahl could have served as the high-end hardware for FS. System/38 would have been the entry-level version for smaller systems.

The Law of Standards

From observation of many standards efforts and participation in some, I formulated a principle called the law of standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X. This law does not imply that standards efforts are doomed to failure, but it does imply that evolutionary projects are more likely to succeed than revolutionary ones. As Memo 125 said, “Being omniscient, God wisely chose the evolutionary approach to system design.”

Note added in 2006:  The conclusion drawn in 2000 was that the Linux API would become the de facto standard for software development. That has come to pass. Major software companies, such as Oracle, do all their software development on Linux and port the result to Windows and other systems. Although IBM does software development on AIX (IBM’s brand of Unix), they made AIX Linux compatible. Linux is the universal source:  anything implemented on Linux can be ported to any other system. Windows is the universal sink:  anything implemented on Windows cannot be ported to anything else.

Note added in 2016: In the 1990s, a reporter asked Linus Torvalds about his goals for the Linux operating system. Torvalds jokingly replied “World domination.” That joke is now true. The overwhelming majority of computers, from Android cell phones to Google’s Chrome system to the largest supercomputers and computer farms, run Linux. In second place are the Apple cell phones and computers, which run the similar BSD Unix operating system. In third place are the Windows computers, which have a shrinking market share between the low-end cell phones and the high-end computers and farms. But there is a new market for the smallest systems in the Internet of Things (IoT). The larger “things” are more likely to run Linux than Windows or Apple OS, but the smallest things use special-purpose software.

Copyright ©2000, 2006, 2016 by John F. Sowa.
