Irving John Good (1916-2009)

5 min read | Topic: Appendix 1: "How We Got Here"

Genesis, the 2024 book by Henry Kissinger, Craig Mundie, and Eric Schmidt, explores the development of artificial intelligence through a range of possible outcomes for humanity, both positive and negative. Yuval Noah Harari’s most recent book Nexus focuses more on the potential negative consequences of AI and raises concerns about what could happen if AI moves beyond human control. Surprisingly, pondering the potential future impacts of a technology with the characteristics we now ascribe to generative pre-trained transformers has a long history; it is not limited to the last few years, as the technology has rapidly matured.

All the way back in 1965, researcher Irving John Good wrote an article for the journal Advances in Computers titled “Speculations Concerning the First Ultraintelligent Machine,” in which he states:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
In his essay Good argues that building the first ultraintelligent machine (UIM) would trigger an “intelligence explosion.” Many of his predictions for such a machine resonate with the technology we now see under development. For example, he describes the earliest UIM as likely to be ultraparallel, neural-network-like, and unusually fluent with ordinary language. A core idea is that representing and using meaning will be a more efficient route to intelligence than deterministic, rules-based programming techniques. His far-sighted essay connects ideas from communication theory (signal “regeneration” and its probabilistic counterpart), statistical semantics, and information retrieval, and ties them to then-current biological ideas such as the modified cell-assembly/subassembly model of the brain. His essay even explores topics that seem strikingly contemporary, including learning via reinforcement, discrete synaptic strengths that change probabilistically, and the role of sleep-like consolidation of information.
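The idea of discrete synaptic strengths that change probabilistically under reinforcement can be sketched in a few lines of code. This is purely an illustration of the general mechanism, not Good’s actual model; the update probability, strength range, and function name here are invented for the example:

```python
import random

def update_synapse(strength, rewarded, p=0.3, max_strength=5):
    """Illustrative sketch: with probability p, bump a discrete synaptic
    strength up by one step when the outcome was rewarded, or down by one
    step otherwise. Strength stays clamped to [0, max_strength]."""
    if random.random() < p:
        if rewarded:
            strength = min(max_strength, strength + 1)
        else:
            strength = max(0, strength - 1)
    return strength

# Repeated rewarded trials tend to saturate the synapse at max_strength,
# while the probabilistic step keeps individual updates noisy.
random.seed(0)  # fixed seed so the run is repeatable
s = 2
for _ in range(100):
    s = update_synapse(s, rewarded=True)
```

The key property this captures is that learning is stochastic and quantized: no single trial reliably changes the connection, but sustained reinforcement drives it toward its maximum.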

For all of his foresight on technical matters, this initial exploration of the UIM gave only limited attention to the potential negative consequences for humanity. His optimism even extended to the assertion that “the survival of man depends on the early construction of an ultraintelligent machine.” Yet Good’s worldview certainly encompassed dark possibilities: he was one of the mathematicians who worked with Alan Turing at Bletchley Park to break Nazi codes, and later in life he advised Stanley Kubrick during the making of “2001: A Space Odyssey.” He recognized that an intelligence explosion would have to remain under human control, worried that it could transform “society in an unimaginable way,” and acknowledged that such changes might lie beyond human comprehension.

In the end, though, Good devotes only a few paragraphs to downside scenarios; his overall argument is that building a UIM is crucial and potentially lifesaving for our species. For us, the concerns he raised all the way back in 1965 resonate more deeply as the technology he envisioned comes into view: the potential for loss of control, human redundancy, social upheaval, and other ethically fraught consequences, both from the machines themselves (he asks whether a machine could “feel pain”) and from the ways in which humans choose to use them.