Advanced Future Systems

In 1965, Gordon Moore predicted exponential growth in the number of transistors that could be packed into a single chip. That trend would imply an exponential decrease in the cost of computers. To maintain its large profit margins, IBM would need to find innovative ways to use that computational power. In 1969, Bob Evans (the president of the IBM System Development Division) asked Erich Bloch (the director of the Poughkeepsie Lab) to determine whether a new kind of system design could take advantage of the much cheaper hardware. Bloch asked Carl J. Conti to develop a plan for an Advanced Systems Project (ASP). Conti named his department Advanced Future Systems (AFS) and gathered a group of half a dozen people with expertise in hardware, software, and their interrelationships.

The HLS Study Group

Since a radically new system would involve multiple IBM locations and divisions, Evans also asked John McPherson (CHQ Armonk) to convene the study group that produced the Higher Level System Report. Appendix II of the report lists the 12 participants from three divisions at seven different locations. To build a coalition, Conti hired two of the participants, Tony Peacock and Bill Worley, and persuaded Al Magdall (Endicott), Al Kolwicz (Boulder), and Nat Rochester (an IBM Fellow at the Cambridge Science Center) to collaborate with AFS on a common system architecture.

Everyone agreed on the desirability of a higher-level system, and the need for hardware support for it. But there were varying opinions about the hardware-software tradeoffs. The most serious weakness of the HLS report was a failure to address the problems of a smooth migration path from System/370. One reason for that failure was a desire to avoid the kind of political fights that had killed the Advanced Computing System (ACS) project in 1968. The strongest advocate for a System/360 compatible design was Gene Amdahl, who left IBM in 1970 to found the Amdahl Corporation. In 1975, Amdahl delivered his 470 system, which was far and away superior to anything IBM could deliver until 1978.

Those mistakes might have been avoided if the group had input from three of IBM’s best computer architects:  Gene Amdahl, Fred Brooks, and John Cocke. All three would have emphasized simplicity, but from different points of view:  Amdahl would insist that HLS be upward compatible with System/370. Brooks would warn the group about the “second system effect” — the tendency for designers to include every feature they omitted from their first system. And Cocke would maintain that nothing should be implemented in hardware that couldn’t be processed as fast or faster by an optimizing compiler.

June 1970

In early 1970, Carl Conti in Poughkeepsie and Al Magdall in Endicott had agreed that their departments would produce a system architecture that could be implemented on the large-scale systems in Poughkeepsie and the medium-scale systems in Endicott. They set a target of March 1971 for the common architecture. Conti also encouraged Nat Rochester in Cambridge and Al Kolwicz in Boulder to collaborate.

By June, the AFS group had developed the design in sufficient detail for a full-day presentation to an IBM audience in Poughkeepsie. Conti presented the introduction and summation of the system called PROMETHEUS (an acronym for Program Resource Optimizing Machine, Enhancing Throughput, Hardware Efficiency, User Satisfaction). Peacock presented the naming scheme and storage hierarchy; Sowa, the system control; Phil Benkard, the Hercules language (also called the AFS Internal Language and finally, the System Language); Don Rain, system evaluation; and Vaughn Winker, LSI considerations.

The following week, Nat Rochester hosted a five-day workshop at a location that was convenient for Poughkeepsie, Endicott, Boulder, and Cambridge:  Cape Cod. The participants included Nat Rochester, Steve Zilles (Cambridge); Phil Benkard, Mike Feder, Tony Peacock, Don Rain, John Sowa (Poughkeepsie); Rex Comerford, Humberto Cordero (Endicott); Mark Elson (Boulder). Ray Larner from Boulder was invited, but he chose not to participate. As it turned out, that was an ill omen.

From Monday through Friday, the morning sessions ran from 8 am to 1 pm. Informal lunch and beach sessions went from 1 to 4 pm, and the afternoon working sessions went from 4 to 7. After the long lunch-beach breaks, the late afternoon sessions were highly productive. There was a general consensus on the goals of the HLS report, but with some differences in emphasis and interpretation.

By the end of 1970, the Endicott group had produced a design for a hardware implementation of APL. Meanwhile, Ray Larner and Mark Elson at Boulder had a design for a hardware system that would support PL/I data structures. It was sufficiently general that it could also support the structures of COBOL and FORTRAN. Bill Worley, who was the manager of the software group in AFS, wrote a memo that showed the weaknesses of the Endicott design for any language other than APL. Since APL was only a tiny part of the IBM market, that memo implied that the Endicott design could not meet the HLS requirements for a general-purpose system. It would be inefficient even for COBOL, which was the most widely used language for the System/360 machines built in Endicott. Therefore, Al Magdall invited Larner to work with the Endicott group to produce a more general design. Worley’s memo destroyed a bad design, but it put Larner in a powerful position.



[The remainder of this web page is incomplete.]
[There is much more to be said and many more documents to be scanned.]

Everything is an Object

The unifying mantra of the AFS design is that everything is an object. From the logical point of view, a single bit is an object, which may be owned by another object called a bit string. An I/O device is an object, which may be owned by another object called a computer system, which may be owned by an object called a computer network.

Every object resides in a storage cell and has two parts:  an access machine and an owned resource. The access machine for an I/O device, for example, would own the device and control all access to it.
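The pairing of an access machine with its owned resource can be sketched in a few lines of Python. This is an illustrative analogy, not the AFS design itself; the class and method names are hypothetical.

```python
# Illustrative sketch (not IBM's design): every object pairs an
# access machine with the resource it owns.  Every request goes
# through the access machine, which enforces the ownership policy.

class AccessMachine:
    """Mediates all requests to the resource it owns."""

    def __init__(self, resource, permitted=("read",)):
        self._resource = resource          # the owned resource
        self._permitted = set(permitted)   # policy enforced here

    def request(self, operation, *args):
        if operation not in self._permitted:
            raise PermissionError(operation + " not permitted")
        return getattr(self._resource, operation)(*args)

class BitString:
    """A resource: a string of bits, each logically an object itself."""

    def __init__(self, bits):
        self._bits = list(bits)

    def read(self, i):
        return self._bits[i]

    def write(self, i, value):
        self._bits[i] = value

# A read-only view of the same bits: the policy lives in the
# access machine, not in the resource.
cell = AccessMachine(BitString([1, 0, 1]), permitted=("read",))
cell.request("read", 2)        # allowed
# cell.request("write", 0, 0)  # would raise PermissionError
```

The point of the sketch is that callers never touch the resource directly; whether the bits are real storage or a simulation is invisible behind the access machine.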

What Could Have Been

In December 1991, Carl Conti retired as an IBM Senior Vice President. His last major action before retiring was to force the AS/400 to adopt the same hardware as the Power series. With that change, the FS legacy was finally implemented on John Cocke’s RISC machine. It required just one addition to the hardware:  a single bit for every 64-bit word to indicate the presence of a protected descriptor. Those protection bits would not require a change to the storage devices themselves. The bits could be stored in a protected area that was faster and easier to add as an upgrade or option. The operations on descriptors could be implemented in microcode, which is the most securely protected code in any system.
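The one-bit-per-word tagging scheme described above can be sketched as follows. This is a software analogy under stated assumptions, not the actual hardware mechanism; the names and the `TaggedStore` class are illustrative.

```python
# Sketch of one protection bit per word (illustrative, not IBM's
# implementation).  The tag bits live in a separate protected area,
# so the ordinary storage devices are unchanged; only trusted
# (microcoded) operations may set a tag.

class TaggedStore:
    def __init__(self, nwords):
        self.words = [0] * nwords   # ordinary word storage
        self.tags = [0] * nwords    # separate protected bit array

    def store_data(self, addr, value):
        self.words[addr] = value
        self.tags[addr] = 0         # plain data

    def store_descriptor(self, addr, value):
        # Stands in for a trusted microcoded operation.
        self.words[addr] = value
        self.tags[addr] = 1         # protected descriptor

    def load(self, addr, expect_descriptor=False):
        expected = 1 if expect_descriptor else 0
        if self.tags[addr] != expected:
            raise PermissionError("tag mismatch at word %d" % addr)
        return self.words[addr]

mem = TaggedStore(4)
mem.store_descriptor(0, 0xDEADBEEF)
mem.store_data(1, 42)
mem.load(1)      # an ordinary load of plain data succeeds
# mem.load(0)    # would raise: word 0 holds a protected descriptor
```

Because the tag array is separate, an ordinary store can never forge a descriptor: writing data through `store_data` always clears the tag.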

The unification of the AS/400 with the Power hardware was announced in 1994. If Conti had been appointed the architecture manager of FS in 1971, the decision to unify FS with all of IBM’s legacy systems and with Cocke’s RISC machine would have been made twenty years earlier. IBM could have continued to sell new hardware based on legacy designs, but those systems could have been integrated as components of the AFS framework. Any system or device — even competitive hardware or software — could become an AFS object. In effect, the AFS design made two traditional distinctions invisible:  interpreted vs. compiled software and real vs. virtual hardware.

AFS did not require a virtual system because the access machine of every object encapsulated its owned resource, which could be virtual or real. That implies that any object could be replaced by any hardware or software that made the same responses to the same requests. The object could be as small as a single bit or as large as a network of heterogeneous hardware-software systems.

The protected descriptors and access machines could enforce any desired policies of ownership and authority. For example, the replace request to an object might cause a cascade of requests to other objects, but it would not affect independent branches of the system or network of systems. As long as a system had more than one CPU, it would even be possible for a CPU to begin the execution of a request to replace itself. One or more of the other CPUs would complete that operation. This feature would allow maintenance, repair, and upgrades without shutting down the system. Users might notice a slowdown, but not a crash.
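The replace request can be sketched in the same style. Again this is an illustrative analogy, not the FS design: the cascade to other objects and the multi-CPU handoff are omitted, and all names are hypothetical.

```python
# Sketch (illustrative, not the FS design): because every request is
# mediated by the access machine, the owned resource can be swapped
# for any implementation that makes the same responses to the same
# requests, without the caller noticing.

class AccessMachine:
    def __init__(self, resource):
        self._resource = resource

    def request(self, operation, *args):
        if operation == "replace":
            (replacement,) = args
            self._resource = replacement   # cascade omitted for brevity
            return "replaced"
        return getattr(self._resource, operation)(*args)

class OldDevice:
    def status(self):
        return "old device ok"

class NewDevice:
    def status(self):
        return "new device ok"

dev = AccessMachine(OldDevice())
dev.request("status")                # served by the old implementation
dev.request("replace", NewDevice())  # swap the owned resource in place
dev.request("status")                # now served by the replacement
```

The caller's handle on the access machine never changes, which is why an upgrade could proceed while the system keeps running.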


Copyright ©2016 by John F. Sowa.
