I have always enjoyed the mental exercise of imagining the topology of Aaron Sloman’s “space of possible minds.” How could one not? The idea that there exist exotic minds is both intellectually satisfying and pleasingly romantic. But what if the topology of Sloman’s space were degenerate? What if it were singular?
I am utterly unqualified to write on minds, so I will confine myself to writing on intelligence here, allowing the reader to pick their favourite definition of “intelligence” as a candle up to which my ramblings can be held.
In the (admittedly simple) world of a theoretical computer scientist, everything is a computation; John Wheeler’s “it from bit” applies to brains just as it does to any other physical system. There is nothing magical, or privileged, about computations that happen to be implemented using cascading action potentials in a biological brain. Computation is computation. Perhaps the only thing particularly notable about our brains is that they are big and relatively efficient; and this is only notable because when speaking of computation, size matters.
In 2019, Rich Sutton reminded us of The Bitter Lesson that we computer scientists seem hellbent on continuing to learn: simple models with access to greater computational power outperform handcrafted, baroque, apparently clever models with less computational power available to them. Modern progress in large language models (LLMs) has provided compelling empirical evidence for the general wisdom of this lesson; in particular, novel and unexpected behaviours and properties emerge in LLMs when one does nothing more than scale the model.
Indeed, as computational power increases, artificial systems begin to exhibit behaviours that resemble those we might be inclined to attribute to “intelligence” in biological organisms, including (meta)-learning, pattern recognition, model-building, multi-step reasoning, and decision-making.
Looking to the handiwork of the blind watchmaker of evolution, a parallel narrative can be found in the similarities observed amongst intelligent biological organisms from disjoint evolutionary lineages. For example: primates, cephalopods, and birds have independently evolved brains with different architectures, but all possessing significant computational power. And all of these brains demonstrate the capacity for complex problem-solving, tool use, and communication, despite their divergent evolutionary histories. While octopus, corvid, and human intelligences are by no means identical, they share a sufficient degree of phenotypic similarity that the behaviours of a crow or an octopus are not only not exotic to me (a primate), but often understandable, relatable, and predictable.
In a universe in which a Sloman-like “space of possible intelligences” has high dimension and complex topology, the probability of the (purifying) random walk of natural selection repeatedly following convergent trajectories seems low. In such a universe it seems more improbable still that, as we develop truly artificial intelligences, they, too, would converge on a similar, non-exotic intelligence. And yet, it is precisely the apparent non-exoticness of, e.g., ChatGPT, that makes it so attractive as a product.
(It is a fair criticism, though, to point out that the underlying LLM for ChatGPT is trained on human-generated text, and the reinforcement learning mechanism sitting on top of it is trained on human reactions. I wonder how one might train a transformer architecture of sufficient size in a non-anthropocentrically-biased way…)
What if Rich Sutton’s bitter lesson is not just for computer scientists, but for all of us? What if the “secret” of intelligence were simply having enough computational power? What if there were a threshold of computational power in a complex system beyond which the development of intelligence becomes as inevitable as the apparent acceleration experienced by a mass in a gravitational field?
I propose the Singular Intelligence Hypothesis, which posits that intelligence, as a complex adaptive trait, follows a convergent evolutionary path across organisms and systems. I suggest that any system with sufficient computational power will inevitably develop the traits and features we associate with intelligence, regardless of its origin or physical structure. The hypothesis implies that there is a universal phenotype of intelligence that emerges when specific conditions are met.
While this is only the roughest sketch of an idea at this point, it is something to which I would like to devote more serious thought.