In our AGI “coming soon” world no one talks up the achievements of BGI very much, despite the fact that Biological General Intelligence has been a thing for a long time, and is the thing trying to create AGI.
It is easy to be dazzled by the many things that computers are really good at: data processing, remembering everything, maths, working in bounded contexts, top-down deductive reasoning, probabilistic inductive reasoning, and inexhaustible patience.
And it is just as easy to play down what AI is not good at — or rather, what BGI is good at. This is quite a long (and encouraging) list:
The Problem of Being (Ontology)
Embodiment
We are not just brains (or “brians”, if you are dyslexic like me) being transported by our bodies. We are combined mind-body systems. We don’t have a body; we are our body (see Merleau-Ponty). We take this for granted because we have been doing it ever since we started bottom-shuffling around.
It turns out that being in the world and understanding the world (not just computing probabilities about it) is really hard, because we only ever have partial data about the world we are in. Humans have over 30 different senses to do this, not just the classic five external senses. Fun fact: playing chess at grandmaster level or doing calculus requires very little computational effort, while low-level sensorimotor skills (e.g. walking) need enormous computational resources — this is Moravec’s Paradox.
Situatedness
Worse than this, we are always situated in the environment of a specific time and space. We don’t notice that we are in a continuous loop of perception and action within a dynamic environment, because so much of it is handled by our unconscious mind. This relies primarily on our unconscious intelligence (we can all walk in a line, until the line is a rope and we are up in the air, and we are trying to walk with our conscious mind instead of our unconscious one).
Our years of being situated have given us a set of “common-sense” heuristics about what to do when something weird happens. Even a very context-limited activity with rigid rules and set procedures, like driving a car, is proving very difficult for AI (see Heidegger’s idea of Dasein, Being-in-the-world).
Evolution
We and our hominid ancestors have been kicking around for a long time, reproducing ourselves and constantly adapting to changing circumstances. The lack of embodiment and situatedness will make evolution (beyond a limited technical sense) difficult for computational AGI.
Lack of Experience (Qualia)
You can, for example, examine the colour red through all sorts of technical metrics: its wavelength, the chemical composition of its pigment, its position in colour theory, or its cultural significance. These are all good forms of knowledge. But what is missing is the subjective experience (qualia) of seeing red (this is Jackson’s Mary’s Room thought experiment). To be a true AGI it would need to be able to have subjective experiences. From another direction, consider Nagel’s “What Is It Like to Be a Bat?” thought experiment. We can never truly know what it is like to have an experience like echolocation — we might imagine aspects of it, but not really experience it.
Experience is a key feature of BGI — see all the Star Trek episodes where either Data or the Voyager Doctor tries to experience things like BGIs do — it’s a long list.
Key idea: Qualia are not captured by metrics. If AGI cannot have experiences, it will always be “about” the world rather than in it.
Lack of Motivation
Humans have a complex cognitive architecture that directs problem-solving and focuses our energies in an unpredictable and unknowable world: we are driven by wants, needs and emotional responses. This gives BGIs intrinsic motivation. It helps us structure time and prioritise effort. Our architecture is driven by a negotiation between competing emotions, whatever in the hierarchy of needs is most pressing right now, and the demands and constraints of others.
It answers the big existential question we ask every morning: “What shall I do today?” Without intrinsic motivation, AGI would just sit there, perfectly inert, waiting for a BGI to come and feed it an extrinsic motivation. Schopenhauer talked about the Will to Live, and Nietzsche, more problematically, about the Will to Power — foundational biological urges that direct our cognitive architecture.
Persona Consistency
How will AGI develop a personality that is consistent? Or perhaps, why would it do that? We BGIs develop personalities that mainly behave in consistent ways because (amongst other things) we are social animals and it would be difficult to get on with others if we reacted differently every time.
Worse than that, being digital, the AGI’s persona could presumably be copied (unless it were embodied in its own robotic body). Would each AGI (if there were more than one) be as unique as each BGI? Parfit’s Teletransporter thought experiment asks what would happen if we could teleport ourselves to Mars — is the “new you” that arrives on Mars the same as the old you? If the Earth “old you” was destroyed in the teleport process, is the “new you” actually you? And if the Earth “old you” wasn’t destroyed, are there now two yous? We take the self to be indivisible. Would that be the same for AGIs, and if not, what would that mean?
Creativity and Understanding
Not all thinking goes in a straight line or follows a straightforward algorithm. AI can appear to be creative, but it is really drawing on its immense recall (of BGI creativity) and probabilistic calculations to throw together suggestions. This is super useful, but it is not the same as reasoning (or creating) itself.
Searle’s Chinese Room argument imagines a man who speaks no Chinese in a room. He receives Chinese symbols through a slot, follows a complex English rulebook (an algorithm) to find the corresponding symbols, and sends them back out. To an outside observer, the room “understands” Chinese. But the man inside understands no Chinese — he only understands how to follow the algorithm: he is being a (human) computer in the oldest sense of the word.
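To make the point concrete, here is a minimal sketch of the room as a lookup table (the “rulebook” entries are my own invented placeholders, not anything from Searle): it can produce sensible-looking replies with zero understanding behind them.

```python
# A toy "Chinese Room": the operator just follows a rulebook (a lookup table).
# The entries below are illustrative stand-ins for the rulebook, nothing more.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",     # incoming shape -> outgoing shape
    "今天下雨吗？": "今天不下雨。",
    "你是谁？": "我是一个房间。",
}

def the_room(symbols_in: str) -> str:
    """Return whatever symbols the rulebook dictates; no meaning is involved."""
    return RULEBOOK.get(symbols_in, "？")  # unknown input: send back a shrug symbol

print(the_room("你好吗？"))  # looks like understanding from outside the slot
```

From outside the slot, the answers are fluent; inside, there is only rule-following.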
Abductive Reasoning
AI is great at deduction (if A, then B) and induction (seeing a pattern in data), but struggles with abduction: seeking the most plausible explanation from a usually limited set of data — e.g. the grass is wet this morning, therefore it probably rained last night. The process is: (1) you observe that something is not normal; (2) you draw on a rule or background knowledge; (3) you create a hypothesis that plausibly matches the observation to the rule/knowledge.
Because AI is disembodied and doesn’t know how the world functions, it struggles with abduction. It is happy if there is a strong correlation (wet grass = 99% probability of rain). It is unhappy making “best guesses” about novel or under-represented subjects in its training data.
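As a rough sketch of that three-step process (the hypotheses and plausibility numbers below are invented for illustration, and real abduction is far richer than a weighted lookup), abduction amounts to scoring candidate explanations against the observation and picking the most plausible one:

```python
# Toy abductive reasoning: pick the hypothesis that best explains an observation.
# The hypotheses and numbers are invented purely for illustration.

observation = "the grass is wet this morning"

# Each hypothesis: (prior plausibility, how well it would explain the observation)
hypotheses = {
    "it rained last night":        (0.6, 0.9),
    "the sprinkler ran overnight": (0.3, 0.9),
    "someone spilled a bucket":    (0.1, 0.2),
}

def best_explanation(hyps):
    """Step 3: choose the hypothesis with the highest prior * explanatory-fit
    score (a crude stand-in for 'most plausible explanation')."""
    return max(hyps, key=lambda h: hyps[h][0] * hyps[h][1])

print("Observation:", observation)
print("Best guess:", best_explanation(hypotheses))  # -> it rained last night
```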
Lack of Theory of Mind
If AGI is to have any intersubjectivity, it will need a Theory of Mind: the ability to ascribe beliefs, intentions, desires, knowledge, and emotions to itself and to others. Otherwise it cannot be empathetic, understand sarcasm, collaborate, or really understand that anyone other than itself exists. The Sally-Anne test is the classic experiment used to determine whether a child has developed a theory of mind. Of course, AI can be trained, parrot-style, to pass the test, but not because it understands the concept of belief.
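For what it is worth, the false-belief bookkeeping the test probes can be written down in a few lines (a toy sketch, not a claim that this is how children or models actually do it): the trick is answering from Sally’s belief rather than from the state of the world.

```python
# The Sally-Anne false-belief scenario as a toy state-tracking sketch.
# Passing this by bookkeeping is not the same as actually having a theory of mind.

world = {"marble": "basket"}          # Sally puts her marble in the basket
sally_belief = {"marble": "basket"}   # Sally's belief matches what she last saw

world["marble"] = "box"               # Sally leaves; Anne moves the marble

def where_will_sally_look(belief: dict) -> str:
    """Predict Sally's behaviour from her (now false) belief, not from reality."""
    return belief["marble"]

print("The marble really is in the:", world["marble"])                         # box
print("Sally will look for it in the:", where_will_sally_look(sally_belief))   # basket
```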
Perfectibility
Underpinning thoughts about AGI is an implicit Frankenstein fantasy — that it will run away from us to cause the singularity or an intelligence explosion; that it would be so much smarter than us it wouldn’t need us and would always outsmart us; that we can assemble an intelligent perfection from the stuff we have to hand. But the world/universe isn’t like that — it’s messy and unpredictable and ultimately unknowable. AGI would need to understand its limitations and imperfections as much as (most of) us do. See Gödel’s Incompleteness Theorems, which set a fundamental mathematical limit on any formal computational system.
Cognitive Displacement
There has been technological displacement for a long while: automation, digital transformation, and robotics have removed jobs people used to do. Generative AI brings a new dimension — from cognitive augmentation (your AI assistant) through to cognitive displacement — AI removing traditional knowledge-worker jobs. Knowledge work as a mass occupation has only been a thing since the industrial revolution; before then most people did “muscle work”. We BGIs will have to redefine our value system again, toward what we are good at or enjoy: embodied experiences, emotional connection, care, artistic expression, valuing subjective consciousness — and knowing that moral choices defy perfection.
So, if AGI does become a thing, it can do what it does and we can do what we do, with some middle ground of augmentation… come to think of it, where is my robot butler?
Ethics
Yes, this is a challenge for us too, but AGI would be essentially amoral without a lot of (human) effort going into constructing guardrails that prohibit immoral content. Worse, ethics is not binary — would AGI take a utilitarian approach (the greatest good for the greatest number — Bentham), a deontological one (adhere to cardinal rules that can never be broken — Kant), or focus on being virtuous (Aristotle)? Foot’s Trolley Problem shows that ethics is not a solvable equation but a landscape of dilemmas.
Bad Habits
AI has picked up many bad habits because it has had to learn from us. The many forms of cognitive bias in human thought and the many inequalities in human society are reflected in what it learns. AGI is our offspring and will reflect us, warts and all. Designers try hard to mitigate and detoxify this, but it’s a very hard task — and a fully unbiased world is a utopian pipe dream.
Reality check: If we want “moral” AGI, we have to decide whose morality, when, where, and why — and accept that trade-offs are unavoidable.
Finally, a slightly cynical note of caution. AGI has been a failed prediction (or, less generously, gross marketing hype to generate investment capital) for quite a while now. I was born in the 1950s and I want to know: where is my flying car and my jetpack? Why isn’t there a colony on Mars? Why is my life not more like George Jetson’s? Back in 2014, Musk said that AGI would be here in five years’ time. None of those things has come true. Maybe because they are a lot harder than we imagined. Possibly because they are not realistically possible.
Don’t get me wrong. I think Generative AI is amazing and I’m deeply enjoying working with it to augment my life. And it may well be that AGI is possible. I just think we underestimate the difficulty of creating AGI because we downplay our own capabilities as BGIs, because, you know, it’s just us doing our thing.
“Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun. Orbiting this at a distance of roughly ninety-two million miles is an utterly insignificant little blue-green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.” (Douglas Adams, The Hitchhiker’s Guide to the Galaxy)