. . . . . . . . . . . . . . . . . . . . . .

Advanced Future Systems

. . . . . . . . . . . . . . . . . . . . . .

In the late 1960s, IBM management foresaw a threat from the development of large-scale integration (LSI) and later VLSI. The much cheaper hardware would make it difficult for IBM to maintain their large profit margins. In 1969, Bob Evans (the president of the IBM System Development Division) asked Erich Bloch (the director of the Poughkeepsie Lab) to determine whether a new kind of system design could take better advantage of the much cheaper hardware. Bloch asked Carl J. Conti to develop a plan for an Advanced Systems Project (ASP). Conti named his department Advanced Future Systems (AFS) and gathered a group of half a dozen people with expertise in hardware, software, and their interrelationships.

The HLS Study Group

After some preliminary work, Conti reported that a better integration of hardware and software design could make computer applications significantly easier to develop and maintain. Since such a design would involve multiple IBM locations and divisions, Bloch reported to Evans that collaboration with other divisions would be necessary. To get interdivisional support, Evans requested John McPherson (CHQ Armonk) to convene the study group that produced the Higher Level System Report.

Appendix II of the HLS report lists the 12 participants from three divisions at seven different locations. To build a coalition, Conti hired two of the participants, Tony Peacock and Bill Worley, and persuaded Al Magdall (Endicott), Al Kolwicz (Boulder), and Nat Rochester (an IBM Fellow at the Cambridge Science Center) to collaborate with AFS on a common system architecture. Everyone agreed on the need for better hardware support for an HLS, but they ignored the problems of migrating System/370 applications to the HLS.

June 1970

By June, the AFS group had developed the design in sufficient detail for a full-day presentation to an IBM audience in Poughkeepsie. Conti presented the introduction and summation of the system called PROMETHEUS (an acronym for Program Resource Optimizing Machine, Enhancing Throughput, Hardware Efficiency, User Satisfaction). Peacock presented the naming scheme and storage hierarchy; Sowa, the system control; Phil Benkard, the Hercules language (also called the AFS Internal Language and finally, the System Language); Don Rain, system evaluation; and Vaughn Winker, LSI considerations.

The following week, Nat Rochester hosted a five-day workshop at a location that was convenient for Poughkeepsie, Endicott, Boulder, and Cambridge:  Cape Cod. On Monday through Friday, the morning sessions ran from 8 am to 1 pm, informal lunch and beach sessions went from 1 to 4 pm, and the afternoon working sessions went from 4 to 7. After the long lunch-beach breaks, the late afternoon sessions were highly productive.

Participants: Nat Rochester, Steve Zilles (Cambridge); Phil Benkard, Mike Feder, Tony Peacock, Don Rain, John Sowa (Poughkeepsie); Rex Comerford, Humberto Cordero (Endicott); Mark Elson (Boulder).

Conclusions: There was a general consensus on the goals of the HLS report, but some differences in emphasis and interpretation.

[The remainder of this web page is incomplete.]
[There is much more to be said and many more documents to be scanned.]

Everything is an Object

The unifying mantra of the AFS design is that everything is an object. From the logical point of view, a single bit is an object, which may be owned by another object called a bit string. An I/O device is an object, which may be owned by another object called a computer system, which may be owned by an object called a computer network.

Every object resides in a storage cell, and it has two parts:  an access machine and an owned resource. The access machine for an I/O device, for example, would own the device and control all access to it.
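As an illustration, the pairing of an access machine with its owned resource can be sketched in a few lines of Python. All names here (`AccessMachine`, `request`, the permitted-operation list) are hypothetical; the original AFS design specified this mechanism in hardware and microcode, not in an application language.

```python
# Hypothetical sketch: every AFS object pairs an access machine with an
# owned resource.  The access machine mediates every request; nothing
# ever touches the resource directly.

class AccessMachine:
    """Owns a resource and controls all access to it."""

    def __init__(self, resource, permitted=("read",)):
        self._resource = resource          # the owned resource
        self._permitted = set(permitted)   # requests this machine honors

    def request(self, op, *args):
        if op not in self._permitted:
            raise PermissionError(f"request '{op}' refused")
        return getattr(self, "_do_" + op)(*args)

    def _do_read(self):
        return self._resource

    def _do_write(self, value):
        self._resource = value
        return value


# A single bit is an object; a bit string is an object that owns bits.
bit = AccessMachine(1)
bits = AccessMachine([AccessMachine(b) for b in (1, 0, 1)])

print(bit.request("read"))                                 # 1
print([b.request("read") for b in bits.request("read")])   # [1, 0, 1]
```

The same pattern scales upward: an I/O device, a computer system, or a whole network would each be wrapped by an access machine that honors only the requests its owner permits.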

What Could Have Been

In December 1991, Carl Conti retired as an IBM Senior Vice President. His last major action before retiring was to force the AS/400 to adopt the same hardware as the Power series. With that change, the FS legacy was finally implemented on John Cocke’s RISC machine. It required just one addition to the hardware:  a single bit for every 64-bit word to indicate the presence of a protected descriptor. Those protection bits would not require a change to the storage devices themselves. The bits could be stored in a protected area that was faster and easier to add as an upgrade or option. The operations on descriptors could be implemented in microcode, which is the most securely protected code in any system.
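The tag-bit mechanism described above can be simulated in a short sketch. The class and method names below are hypothetical; the point is only to show why one protected bit per word is enough: ordinary stores clear the bit, so user code can never forge a descriptor, while the trusted (microcoded) store is the only way to set it.

```python
# Hypothetical sketch of protected descriptors: one extra tag bit per
# 64-bit word, kept in a separate protected area rather than by
# widening the storage devices themselves.

class TaggedMemory:
    def __init__(self, nwords):
        self.words = [0] * nwords     # ordinary 64-bit storage
        self._tags = [0] * nwords     # protected side area: 1 bit per word

    def store(self, addr, value):
        """Ordinary store, usable by any program.  It always clears
        the tag, so user code cannot forge a descriptor."""
        self.words[addr] = value & (2**64 - 1)
        self._tags[addr] = 0

    def store_descriptor(self, addr, value):
        """Available only to trusted 'microcode' routines; the sole
        way a word can acquire the descriptor tag."""
        self.words[addr] = value & (2**64 - 1)
        self._tags[addr] = 1

    def load(self, addr):
        return self.words[addr], self._tags[addr]  # (value, is_descriptor)


mem = TaggedMemory(4)
mem.store_descriptor(0, 0xDEAD)   # microcode plants a protected descriptor
mem.store(0, 0xDEAD)              # a user overwrite leaves the same bits...
print(mem.load(0))                # ...but the tag, and the authority, is gone
```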

The unification of the AS/400 with the Power hardware was announced in 1994. If Conti had been appointed the architecture manager of FS in 1971, the decision to unify FS with all of IBM’s legacy systems and with Cocke’s RISC machine would have been made twenty years earlier. IBM could continue to sell new hardware based on legacy designs, but they could be integrated as components of the AFS framework. Any system or device — even competitive hardware or software — could become an AFS object. In effect, the AFS design made two traditional distinctions invisible:  interpreted vs. compiled software and real vs. virtual hardware.

AFS did not require a virtual system because the access machine of every object encapsulated its owned resource, which could be virtual or real. That implies that any object could be replaced by any hardware or software that made the same responses to the same requests. The object could be as small as a single bit or as large as a network of heterogeneous hardware-software systems.

The protected descriptors and access machines could enforce any desired policies of ownership and authority. For example, the replace request to an object might cause a cascade of requests to other objects, but it would not affect independent branches of the system or network of systems. As long as a system had more than one CPU, it would even be possible for a CPU to begin the execution of a request to replace itself. One or more of the other CPUs would complete that operation. This feature would allow maintenance, repair, and upgrades without shutting down the system. Users might notice a slowdown, but not a crash.
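The scoping of a cascading replace request can be shown with a small sketch (all names hypothetical): replacing an object propagates through the objects it owns, but independent branches of the ownership tree are never touched.

```python
# Hypothetical sketch of a cascading "replace" request.  Replacing an
# object re-installs it and every object it owns, while sibling
# branches of the ownership tree continue undisturbed.

class Obj:
    def __init__(self, name, owned=()):
        self.name = name
        self.owned = list(owned)   # objects this object owns
        self.version = 1           # which implementation is installed

    def replace(self):
        self.version += 1          # install the new implementation
        for child in self.owned:   # cascade only to owned objects
            child.replace()


leaf_a, leaf_b = Obj("a"), Obj("b")
left = Obj("left", [leaf_a])
right = Obj("right", [leaf_b])     # an independent branch
root = Obj("root", [left, right])

left.replace()
print(leaf_a.version, leaf_b.version)   # 2 1 -- sibling branch unaffected
```

In the full design, the same request discipline is what would let one CPU start a request to replace itself and let the other CPUs finish it, so maintenance and upgrades need not stop the system.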

Copyright ©2016 by John F. Sowa.
