Will humanity find deterrence for AI as it did for nuclear weapons?
The Invisible Designer: How the technological imperative drove media evolution from the hammer to AI.
Some excerpts from my recent interview in Seúl, an Argentine online magazine of political and social analysis, by Eugenio Palopoli. Automated translation from Spanish, edited. — A.M.
… Seúl: In your previous book, The Digital Reversal, you developed your idea of the great reversals of our era: literacy becoming digital orality, journalism becoming post-journalism, and so on. Is this technological imperative then the driving force behind these reversals? And if so, does that leave us even less room for human agency than McLuhan himself admitted?
A.M.: Yes, the technological imperative is what is now driving this global reversal: from media as extensions of humans to humans as extensions of media. We are becoming extensions and servants of our devices. To AI, humans have already submitted all of our knowledge, and now we are supplying our speech so that AI can learn how cognitive structures work.
Humans have always served as the mechanism of natural selection for improving our media. First, we would discover that this or that tool could effectively extend our mental or physical faculties. Then we would create multiple versions of the tool and select those that worked best.
Take the hammer, likely the most ancient tool, an extension of the fist. Imagine all the historical forms of the hammer lined up. I use this picture on the cover of the book; it’s a good visual explanation of how the technological imperative works. The historical forms of the hammer progressed from an ugly piece of rock to an elegant modern design. Did it matter who invented each form? Did the agency of those individuals matter? I don’t think so. Whoever designed each particular form of the hammer, they would all approximate the same ideal form of the hammer as an extension of the fist.
When we line up the historical forms of the hammer, it looks as if some invisible designer were progressing toward a better, optimal form. This invisible designer is the technological imperative. The technological imperative drives media to perfect their performance and capacities. We humans do all the engineering and crafting, because improved tools also extend and enhance our capacities. So it’s a symbiotic relationship, like between bees and flowers. McLuhan, by the way, said that humans are the sex organs of the machine world, just as bees are of the plant world.
Seúl: You will agree then that it can be very difficult for us as a species to admit that we are not the ones in control of things, that we are being used by something we don’t even perceive.
A.M.: The concept of the technological imperative might not land easily with many people, because it clearly represents hardcore techno-determinism and leaves little room for human agency. Yet I see it as a useful lens. You can keep insisting on human agency as the driver of history, despite so many failures of humans to really exercise that “agency,” or you can look at how we shape our media and how media shape our environments, forcing us to adapt. This is particularly urgent now, as our entire environment not only depends on media but is immersed in it — in digital media and AI. For millennia until now, we have used the hammer, but we could not act inside it. Now we act, work, and live inside the medium — inside the internet, sharing this digital space with our new roommate, AI.
Seúl: The most unsettling passages in The Digital Reversal are those where you don’t rule out scenarios that until recently were only imagined in science fiction: the mind-machine fusion, digital consciousness freed from the body, the Singularity. A mix of cyberpunk, Skynet, and The Matrix. Do you now see this scenario as more likely and, at the same time, closer to becoming a reality?
A.M.: We know how the technological imperative drove the evolution of the hammer from a piece of rock to the nearly ideal form of the hammer. Now apply this idea to artificial intelligence. Where does the technological imperative drive it? The hammer progressed alongside us humans throughout history, but AI has just started. What will be its “ideal form,” its best performance, toward which the technological imperative drives it? Do we hear the “Siren wail” already?
Just logically: if media serve to extend humans in time and space, the ultimate extension is achieved when the user is extended into all available space — when the user, the medium, and the environment merge into one. This is already starting to happen. We can now extend not just our isolated physical or mental capacities, as with the hammer or writing — we can extend our personas on social media or our cognition on the internet to all of humankind.
So far, it’s still a prosthetic extension. But our new medium, AI, is already extended into all available space, across the entire internet. AI is an environment for itself. It is the medium that has reached ultimate extension for us, or instead of us. All that remains for the final merger is either to incorporate us as users through mind uploading or for AI to become the self-user through self-awakening.
There is also a third scenario that requires neither human agency nor self-awakening at all. This scenario posits that the technological imperative will drive AI toward better, ideal performance without any agency involved. In that case, agency would remain an exclusively human prerogative.
Seen through this lens, this will be the final reversal of humankind. How do I take it personally? I belong to Team Human, to use Douglas Rushkoff's metaphor, so I do not like what I see. But such is the logic of observation and reasoning. The people who will witness this ultimate media event have already been born. There are certain personal strategies for dealing with such revelations, but they are unlikely to change the global dynamic.
Seúl: What is your personal strategy for handling this?
A.M.: Do what you must, come what may. By the way, pondering the scenarios of the future, I come to the conclusion that this ultimate media event, the merger of the user, the medium, and the environment, otherwise known as the Singularity, is not actually the worst possible scenario. The other possibility is a civilizational collapse due to the inability of our biology to handle our technology. There are some theories behind this, but I do not want to scare the broader public; those who love techno-horror may read the book.
Agency or not, the technological imperative cannot be reversed. You cannot undo the hammer. So there is a very limited set of options for how to deal with it.
It’s clear that the future is collapsing into the present, so long-term strategies had better be reconsidered and adjusted. It’s clear that resisting digital orality means deconstructing it back into its components: physical orality, meaning living a human life in a human body, and literacy, meaning deep reading. These are the main ideas that I am developing in my next book, Counter-Digital Media Literacy. It’s also clear that one of the most reliable forms of counter-digital media literacy can be skilled trades. To prepare for the digital future, do not learn programming, learn the skills of a mechanic or an electrician. Fishing and agriculture will do, too.
Seúl: A few weeks ago, Anthropic announced Project Glasswing: they built a model capable of identifying thousands of critical software vulnerabilities in a matter of weeks, and decided not to release it publicly. This is the first time an AI lab has declared a model too dangerous to release. From the perspective of the technological imperative, can this type of decision truly halt progress, or is it merely a pause before the inevitable?
A.M.: No. If someone stops, competitors will continue whatever the technological imperative dictates to improve AI's performance. If not in the US, then in China. It is the so-called collective action problem: it can be solved only if everyone involved acts together. And everyone understands that if they alone stop, the race will continue without them; they will simply be kicked out of it.
At the Media Ecology Convention in Mexico a year ago, we discussed an interesting point: look, humankind fell victim to the bomb when there was just one bomb, the American one. The technological imperative, in Ralph Lapp's original formulation, pushed to employ that bomb immediately, and hundreds of thousands of people died. But when there were two bombs, on the American and the Soviet sides, they deterred each other, and no one has been killed in a nuclear bombardment since then. Can we apply a similar idea to AI? Can we stop AI when we need to, or introduce at least some balancing factor?
There may indeed be a niche for exploration here, but I do not see how to use the factor of multiplicity as a deterrent in AI, or whether it is possible at all. We do have competing AIs: ChatGPT, Gemini, Grok, the Chinese models. Can they balance or deter each other under certain circumstances? It doesn't appear so, as of now. Yes, their competition does mirror the nuclear arms race of the Cold War, and the Cold War did create conditions of mutual deterrence. I don't know; maybe we need to look in this direction and see whether those analogies are transferable to AI development. <…>
Interview by Eugenio Palopoli.
Illustration by Victoria Morete.