The reversal of human agency: The technological imperative
Media are an emergent environmental force maintained by users—us, humans
The fear of AI comes from the suspicion that it could overpower humans in interspecies competition—like Cro-Magnons displacing Neanderthals, or more advanced aliens exterminating the human race in sci-fi. We look at AI and think: nah, not yet. Or: no, never!
Agency is seen as the threshold: once AI attains self-consciousness, we will need to worry. Generally, the sci-fi consensus suggests two scenarios by which AI could gain self-consciousness and begin recognizing itself as a sentient being no longer bound to humankind.
First, AI may acquire self-awareness from humans: brain–machine interfaces, in effect, imply that humans will become the donors of self-consciousness for AI. The second scenario involves the self-awakening of AI, like Skynet in The Terminator, after it reaches a certain level of complexity.
However, humanity’s unique trait—God-given free will—might not be needed for AI to reach its full potential and become transcendentally powerful. Moreover, from a media ecology perspective, artificial general intelligence can proceed without agency—human or self-induced—at all.
***
Reflecting on the effects of electronic media, McLuhan wrote: “…the possibility of public participation becomes a sort of technological imperative which has been called ‘Lapp’s Law’: ‘If it can be done, it’s got to be done’—a kind of siren wail of the evolutionary appetite.”[i]
The term “technological imperative” emerged during the Cold War arms race, reflecting the logic of military R&D: once a weapon is feasible, pressure builds to develop and deploy it. The Kantian moral principle “ought implies can” was reversed into “can implies ought.”
Ralph Lapp, a physicist who had worked on the Manhattan Project, stated in 1970: “If a weapons system could be made, then it would be made.”[ii] As human technologies gained the power to erase the human race, scientists began to question whether humans really had control over them.
Do humans or technologies shape history? That’s the core question of techno-determinism—and it became crucial during the Cold War. Are we doomed just because the doomsday weapon is already made? Or can we still control its power? (The same question resurfaces with AI.)
The term “technological determinism” first appeared in the 1919 proceedings of the American Historical Association, which rejected “a theory of scientific and technological determinism, such as Marx contended for.”[iii] So, the term has been used as a pejorative from the start.
The rejection of techno-determinism aimed to defend the idea that humans—not technologies—shape history. British Marxist and anti-nuclear activist Edward Thompson put it clearly in 1980: “As for the Bomb, that is a Thing, and a Thing cannot be a historical agent.”[iv]
But what really drives the Bomb? When there was only one Bomb—on the U.S. side—it was used right away. But once there were two Bombs, with the USSR joining in, the deterrent factor of Mutual Assured Destruction emerged. Since then, no Bomb has been used.
Turns out, two Bombs behave differently than one. Once both sides had Bombs, the Bombs imposed the logic of mutual destruction. So, are human decisions the cause or the effect of the Bomb? One Bomb led to one decision, two led to another—regardless of human agency.
***
How can media have their own logic? The answer is the technological imperative—the emergent force that drives media evolution: any medium seeks better performance to unfold its full potential—to actualize its ideal form. This drive is never intentional, yet still directional.
The technological imperative is what media dictate to humans. It has two aspects: 1) “What can be done—ought to be done” (like opening Pandora’s box); 2) the constant improvement of a medium toward its best form and function, driving humans to develop media.
For example, the hammer—as an extension of the hand and fist—evolved to perform its function of hitting to the best of its form’s “capacity.” Lining up actual, historical hammers, we can imagine an invisible designer seeking to reveal and embody the ideal form of the hammer.
We don’t know who contributed to the hammer’s pursuit of its ideal form. Their agency didn’t matter. Whoever they were, they all would actualize the same form, because this form is immanent to this medium as an extension of the hand and fist.
It turns out the ideal form of a medium belongs not to human ingenuity but to the medium itself as an extension of a certain human physical or cognitive faculty. Levinson’s metaphor of media as “human replay”[v] defines the ideal form of each medium as a specific human extension.
The invisible designer who actualized the ideal form of the hammer to its best-performing configuration was the technological imperative. The technological imperative is the invisible hand of media evolution—and its formal cause.
What, then, is the role of humans and their agency? Once humans learned to apply implements, they would naturally select what kinds and shapes of tools led to better outcomes. That was a natural selection of a medium’s form. Then we developed the forms that worked.
By taking care of media, we maintain media evolution. The more convenient and all-permeating media are, the more media users reverse into media servants. Some still see that as a metaphor. But AI developers openly admit that humans are indeed just training tokens for the machine.
***
The chess program Deep Blue was built to compete with humans. In 1997, it defeated world champion Garry Kasparov. The program had nothing left to learn from humans, because it outperformed them. But it didn’t stop—it kept learning from the rules of the game itself.
The same happened to AlphaGo: it defeated the best human Go player in 2017 but did not stop. The technological imperative kept pushing it to improve its performance further, with or without humans. Not only did humans no longer coach these programs—no humans were needed at all.
What’s the point of replaying humans if humans are outplayed? We have reached a threshold at which digital media, meant to replay human cognitive abilities, have outperformed humans and must move further, learning from rules and abstract principles rather than from human example.
If a machine is set to pursue better performance in intelligence and is outperforming humans, what rules or principles can it learn from afterward? What are the rules or principles that thinking is based on? Language.
All thoughts, however complex, can be expressed in language. Language contains all potential thoughts. All human speech contains all practical thoughts in all contexts. The true affordance of the internet was to host all human speech and make it accessible for LLM training.
***
Media are all about the “how”—how to accomplish a task through better human extension. But they can’t originate the “why.” The last barrier between AI and humans isn’t the “how” but the “why.” We humans can have a “why,” however ridiculous it may be; machines cannot.
The “why” is the condition of agency. The technological imperative, however—especially at the stage of AI—disregards the “why” entirely. It inexorably pushes media toward better performance in whatever they do, without any “why.” Why? Just because. Because it can.
The technological imperative—the media’s striving for better performance—ensures artificial intelligence will acquire intelligence: without free will, without agency, without humans, and beyond human capacity and control. We humans diligently supply what we must—speech.
The technological imperative is no longer even bound to replaying humans. It’s just an emergent force driving media to better performance within a given form—potentially to the form’s ultimate, ideal performance. What, then, is the ideal performance of intelligence as a “form”?
The ideal, ultimate logic of media extension is to extend the user to the maximal possible extent—to encompass the entire environment. If/when this happens, the environment collapses into the medium (implosion). The medium becomes the environment. AI is an environment to itself.
The user, the medium, and the environment will merge into one. Under the technological imperative, the ultimate medium requires neither will nor agency nor even a user. The ultimate medium encompassing both the user and the environment will be the self-user.
Read more in: The Digital Reversal. Thread-saga of Media Evolution.
On September 16, I launched a fundraising campaign on Kickstarter for my next book, Counter-Digital Media Literacy. The goal is to raise CA$6,400 in 30 days. On the 2nd day, the book hit 53% of its goal. Huge thanks to everyone who already supported the project. Join the cause of counter-digital media literacy!
See also books by Andrey Mir:
The Viral Inquisitor and other essays on postjournalism and media ecology (2024)
Digital Future in the Rearview Mirror: Jaspers’ Axial Age and Logan’s Alphabet Effect (2024)
[i] McLuhan, Marshall. (1974). “At the moment of Sputnik the planet became a global theater in which there are no spectators but only actors.” Journal of Communication, March, Volume 24, Issue 1, pp. 48–58.
[ii] Lapp, Ralph. (1970). Arms Beyond Doubt: The Tyranny of Weapons Technology.
[iii] Peters, John Durham. (2017). “‘You mean my whole fallacy is wrong’: On technological determinism.” Representations, Fall, No. 140, Special Issue: Fallacies, pp. 10–26.
[iv] Thompson, Edward. (1980). Notes on Exterminism, the Last Stage of Civilization.
[v] Levinson, Paul. (2017). Human Replay: A Theory of the Evolution of Media.





