There is no “I” in AI (but not the “I” you think)
Debate on the fate of humankind. Does AI need agency to become the next intelligence? (The answer is “No.”)
This continues the “debate on the fate of humankind” with Paul Levinson, following his review of “The Digital Reversal,” my response “If media evolve to replay human functions, what will be the ultimate form of Human Replay?” and Paul’s “A Response to Andrey Mir’s ‘What will be the ultimate form of Human Replay.’”
Thinking “How on earth can AI replace me?!” is misleading. It is unlikely that AI will come after any of us personally and say, “I need your clothes, your boots, and your motorcycle!” Though… I would not completely rule out this scenario. But that is not the point. A personal anecdote cannot confirm or refute the logic of pattern recognition, especially a personal anecdote from someone’s future.
(Media) evolution is never about an individual but always about a species. Individuals are the consumables of (media) evolution. AI is a giant leap for mankind, but not for a man. AI is not coming to replace or replicate anyone personally. There is no “I” in AI. Even when a single person happens to be the focal point of transition (mind upload), AI inherits from all of mankind. We all diligently supply what we must: speech, the cognitive scaffolding of reasoning.
The fact that something is impossible today cannot serve as an argument that it will not be possible tomorrow. As we know, “objects heavier than air cannot fly” was considered a physically established fact 150 years ago. It turned out that objects heavier than air cannot float – but they can fly.
There are at least two scenarios in which a machine might acquire agency or self-consciousness, and a third scenario in which AI does not need agency or self-consciousness at all to become capable of replacing humankind at the next evolutionary stage.
1) Mind upload. The machine attains self-awareness through humans uploading their minds, making us the donors of self-awareness, creativity, and ingenuity. Theoretically, there is even a subscenario in which just one individual, a mad scientist (everybody will turn mad on the verge of upload), becomes the donor of his or her will and agency to the machine in the process of mind uploading. So, watch Musk’s Neuralink closely – or even Musk himself.
2) Autogenesis of self-consciousness. The exponentially growing complexity of the machine, especially in learning and self-tasking, can eventually lead to the machine’s self-awakening. This is the Skynet scenario, also pondered extensively by Asimov. It might not be self-awareness or creativity in the human sense, but it would nevertheless be a source of agency.
Scenarios 1 and 2, however, have the same flaw – they anthropomorphize and even humanize the machine, assigning it the attributes of human consciousness as the precondition for intelligence. This is a typical case of rearview-mirror vision of the future. The machine might not need agency, self-awareness, or self-consciousness at all, and yet it might become the next, non-biological carrier of intelligence at the next stage of evolution, as in the third scenario.
3) The technological imperative as agentless driver of intelligence. The technological imperative is the inherent capacity of media to seek better performance. It is the invisible hand of media evolution that pushes any medium to perform at its best. If the current iteration of a medium is not capable of this “ultimate” performance, the next iteration will seek to reach it, and we humans, as servomechanisms of media (McLuhan), will help to design a better-performing medium.
For artificial intelligence, the technological imperative means that this medium will ultimately seek the best performance in “human replay,” to use Paul Levinson’s concept. If AI seeks to accumulate all the knowledge of mankind and simulate human thought, what will happen if it succeeds and reaches its full potential, as the technological imperative dictates?
According to McLuhan’s law of reversal, any technology, when pushed to its limits or extremes or when reaching its full potential, reverses its effects. What would be the reversal of human replay? Replacement.
AI has arguably already passed the Turing test (simulating human communication) and has outperformed humans in many intellectual and creative tasks, from chess to, some would say, poetry. Why stop here? When Deep Blue and then AlphaGo beat human champions, development did not stop: AlphaGo’s successor, AlphaZero, went on to learn from the rules of the game alone, through self-play, reaching levels of intellectual performance and creativity that surpass human levels by a degree we can hardly comprehend. And it achieved this without any agency or self-consciousness.
I am not advocating for what I describe. I do not argue for this future; I argue for this logic. Reversals are everywhere, signaling that we are approaching extreme forms, limits, and the full potential of our extensions – our technologies. Humankind has expanded across the entire planet and into near space. The logical reversal of this is an implosion of the world into human perception. This means that the extension reaches all available space, and the user, the medium, and the environment are about to become one.
McLuhan envisioned this reversal of extension into implosion when he observed the effects of electricity (“The stepping-up of speed from the mechanical to the instant electric form reverses explosion into implosion” – Understanding Media, 1964). We might think that this implosion of the world into the user is the internet. But AI is an even better candidate for driving the final reversal, the implosion of the environment into the user, because AI is the medium that has already extended to the scale of the entire available environment. As a medium, AI has become an environment for itself. What remains is for the user to join (us) or to emerge (AI as the self-user). This would be the ultimate human replay.
The third scenario cuts deepest. If the technological imperative drives AI toward full “human replay” without agency, then agency was never the differentiator we imagined. We spent millennia believing consciousness is what makes us special. It turns out that intelligence might be substrate-independent, and consciousness might be scenery.
McLuhan's reversal applies to observers too. We extended cognition into the machine; the reversal is the machine extending into our cognition, reshaping what counts as thinkable.
Borges wrote about a map so detailed that it covered the entire territory. The reversal: the map becomes the territory. AI trained on all human text is not mapping thought. It is becoming the terrain we think with.