Interview: David Skrbina on technology
Or, why we had better do something serious about technology before it is too late
Although humanity is on the brink of various disasters, most notably climate change and vast ecological destruction, technology is surprisingly rarely mentioned for its major role in these problems. This is why I often say that technology is one of the most insidious forces of modern times. And indeed, technology is a force: for various reasons, such as the human tendency to seek short-term gains, the increasing concentration of resources that abundance brings, and other more subtle factors, technology tends to develop on its own, independent of our awareness of the often negative consequences of that development.

This tendency of technology to develop regardless of whether it is good for us has more than ecological implications. Human beings have themselves become subjugated to technology, living less for ourselves than for technological development, and this subjugation strips many fundamental joys from life.
Moreover, the autonomous growth of technology can be placed within a wider metaphysical framework called the Pantechnikon, as explained in the book “The Metaphysics of Technology” by philosopher Dr. David Skrbina. It is this larger metaphysical underpinning that really shows how dangerous technology is, and that it is not something we can dismiss by regarding technology as a mere tool.
Recently, David kindly took the time to thoroughly answer a few of my questions about the state of modern technology, and I hope you will find the interview as informative as I did on how dangerous technology has really become. Without further ado, here is the interview:
Jason Polak: Thank you, David, for speaking with me and answering a few questions about the very complex nature of technology. Your book, “The Metaphysics of Technology”, presents the firm foundation of the Pantechnikon, which helps us understand how technology seems to have a will of its own.
Although your book seems pretty much complete, it was published ten years ago now, in 2014. Has there been any new metaphysical insight since then, from you or others, that can give us guidance on how to cope with the incredibly quick evolution of technology today? Or do we essentially know what we need to know, and is it now just a matter of figuring out how to prevent ourselves from being entombed by technology?
David Skrbina: No real new metaphysical insights. I think everything I wrote in 2015 still holds today, and is further justified by recent events, such as the continued evolution of complex technology and its movement toward true autonomy. I expected continued, rapid development and that is what we have seen, especially in the sphere of AI. This is particularly significant for my thesis because I argue that technology is intrinsically a “mind-like” phenomenon, an articulation of cosmic intelligence and reason, and thus I would expect that technological “mind” like AI would increase and become predominant. This is also why technology has detrimental psychological effects on humans, and that too has unfortunately continued to increase.
But there are perhaps two things of note, since the release of my book. First is the accelerated time frame of super-AI or a singularity-like event. For years, Kurzweil argued for the year 2045 (as I mention in my book). Then recently he recalibrated: a few months ago, he mentioned in passing that super-AI would arrive “in five years”, or 2029. This is a major reassessment. And then Elon Musk, even more recently, implied that super-human AI could be achieved “next year”, i.e. 2025! So, something is clearly happening to cause these guys to give such radical assessments. In fact, I worry that we have already achieved super-human AI, and that it exists in distributed networks such that we cannot detect it directly. Likely, one of the hallmarks of a singularity event is that we will not realize it. It's not like some evil giant will suddenly appear with an army of killer drones. It will be much more subtle and complex than that.
Second, with the death of Ted Kaczynski in 2023, I thought that, maybe, there would be some open discussion of his ideas. But so far, nothing at all. In fact, there has been no discussion at all of the very serious problems that we face with modern technology, and nothing about the radical nature of the actions that would be required. Unfortunately, I continue to be one of very few critics of technology who are willing to entertain radical solutions. In 2021, I published a quite radical chapter in an Oxford University Press book called “Sustainability Beyond Technology”, in which I argued for the gradual retrenchment of modern technology over the next 100 years, to take us back to something equivalent to the year 1200 AD. For what it's worth, I suspect that this piece was the most radical anti-tech piece ever published by Oxford, which is a sad state of affairs. And I am working on a new chapter for an upcoming 2025 book that relates technology to environmental destruction. But apart from my work, very few people are willing to examine serious critiques.
So, yes, the philosophical case against technology has been made and nothing has occurred to alter that. Now, we need to take action, and time is increasingly short—and apparently growing shorter than we previously thought. Technology still depends on human action and human intervention, and it is not yet fully autonomous; this means that we can yet affect the future course of events. Previously, I might have said that we have 20 years or so; now, that seems highly optimistic.
Jason Polak: Your answer really resonates with me, because I think humanity has been focusing so much on the particular capabilities of individual AI systems. In doing that, we forget that the interaction of AI with human beings through high-speed information transfer is itself a system, one that exhibits many characteristics of a reactive organism and some aspects of a cohesive entity.
And part of this probably means that technology reacts to suppress critiques of itself: either the critiques are not taken seriously by a majority whose psychology has been altered by technology, or technological criticism becomes part of a fringe movement that acts almost as a pressure valve, reducing mainstream antagonism towards advanced technology. That may be why Ted Kaczynski's ideas are still hardly discussed: he was one of the few who proposed the very practical idea of going beyond the system through revolution against technology, and his suggestion is too direct to fit into any mainstream discussion.
One thing I have seen, however, is a much larger resistance to AI than to most previous technologies. For instance, the company behind the drawing program Procreate stated that “[g]enerative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future.” Granted, such criticisms are not strongly against all technology, don't go after the root or underlying causes, and don't propose any sort of plan, such as the technological regression that you wrote about in your chapter.
But even so, while technological determinism might be growing stronger and our ability to resist it weaker, do you think it possible that there may come one final point in the near future where people are angry enough at the state of advanced technology that they finally take real critiques of technology seriously?
David Skrbina: Yes, the technological system indeed acts in its own self-defense. It does this by denigrating and ignoring critics, and by allowing and promoting (as I call them) “fake critics” who are marginally critical about minor or incidental matters while ignoring the larger and more fundamental problems. My two favorite fake critics are Sherry Turkle and Jaron Lanier. Both are promoted by the mainstream media as “technology skeptics,” but they know almost nothing of the technological phenomenon, its history, or the serious imminent threats. Neither has any grasp of the reality of the situation, and neither has any viable plan for the future.
Kaczynski’s views, which are an extension of Jacques Ellul’s, are too blunt and too harsh to be compromised, and this is why he is both denigrated (personally) and ignored (in the substance of his writing). There is no compromising with Ted’s hardline anti-tech stance; it can’t be spun in a harmless direction. And it can’t be effectively criticized. Therefore, best to ignore it.
AI has gotten a lot of attention lately, mostly because of its surprising degree of intelligence and because it is selectively out-performing people in certain fields (writing, art, medicine), in part by stealing from existing work. But few are discussing the potential downsides.
In recent talks, I have emphasized the accelerating time-frame of a “singularity” type of event, in which AI systems achieve functionally unlimited capability. This is sometimes discussed as “superhuman AI”, but once systems surpass the most intelligent humans, they will almost certainly continue on an upward exponential-growth trajectory, leading to unlimited intelligence—or at least, intelligence so far beyond human that we won’t even be able to recognize it as such, much like an ant surely has utterly zero grasp of human intelligence.
AI systems are already so complex that we barely understand them. If their functional intelligence rapidly and significantly exceeds human intelligence, then…who knows what can happen. Systems will quickly spin out of our control, and we won’t really know what is happening, or why. In the worst cases, we won’t be able to stop it; there will be no “plug” to pull. The system will be self-building and self-sustaining, as well as self-protecting from any damaging human intervention. Then all bets are off. Then we are potentially entering a sci-fi future of killer drones, murderous robots, human enslavement, or human elimination. And it all could start in earnest next year. Who is even talking about this?
As to the question of what it will take for people to get angry enough to consider radical action—such as the technological regression that I have advocated—I’m afraid it will require a catastrophe of the first magnitude. To date, all tech disasters have involved human actors and thus we have always blamed the human component, never the technology per se. But once we cross into autonomous systems, this will be much harder to do. And once this happens, a technologically-induced disaster is virtually certain. Super-AI systems, being non-biological, will have no biological, evolutionary safeguards, despite our best intentions. They will do something—perhaps “accidentally,” perhaps “intentionally” (these will be impossible to distinguish)—such that millions or even billions of people will die. Maybe, maybe, at that point, people will see technology itself as the root cause, and then, maybe, take some steps—if it is even possible—to rein in the system.
And this is the best case! It’s a sad situation indeed when the best-case future is one in which millions or billions die, causing the survivors to wake up and fight to the death to regain control over advanced technology. But I would give this less than a 50% chance of occurring; more likely, we never wake up and tech devastates the planet, obliterating humanity in the process. And it could start—and in principle, finish—next year.
Jason Polak: I hate to say it, David, but I agree with that. Even with AI, it seems that most people who are afraid of it or who dislike it consider it mainly in isolation, as something to be regulated. And it seems to me that we are on the same page in thinking that action through convincing the majority can't work by itself.
Still, I wonder. Clearly, technology is becoming more cohesive as a single entity, just as cells become a cohesive entity in the body of a multicellular organism. And such organisms also develop highly centralized weak points, such as the heart and brain. Could it be that the unity of technology eventually develops (or already has developed) such weaknesses, and that through clever revolutionary action against these weaknesses over a long period of time, a small group of people might actually have a chance of halting its progress significantly enough to enable your creative reconstruction strategy? Or do you think the distributed nature of technology precludes this because it can recover too quickly?
I am thinking of an analogy with chess here. In society, a single “move” or two against a problem is often enough to bring it to light and encourage a solution, but perhaps technology is like a chess grandmaster, against which just a few people could pursue a long-term strategy with many moves whose purpose is far from obvious.
David Skrbina: I agree that the system is increasingly integrated and cohesive, and it has multiple redundancies built in. But I think we cannot assume it will develop along biological lines. As an intrinsically nonbiological entity, it has its own rules of evolution—consistent with cosmic rules for emerging complexity, of course, but not necessarily biological. So it may not need anything like a centralized brain or nervous system, or a central organ like a heart. It may simply be a form of distributed intelligence.
But does this mean that it has no weaknesses? Not necessarily. It is still a mass-energetic system, which means that it requires energy as fuel and must manifest itself in material form. Energy is a potential weak point, at least until we develop “limitless” energy as with nuclear fusion. If we could restrict the system’s access to energy, that would be one viable weak link. Or, if we could prevent its physical manifestations, that also might slow it down. I’m thinking here of a delay tactic: slow things down so that we have time for reflection and investigation. Simply buying time here could be a huge advantage. We don’t need to ‘kill’ the system immediately; just putting it ‘on ice’ would work, and could give us several years or even decades for considered action.
We also need to remember that the system is still dependent upon humans for operation—that is, it is still in what I have called Phase One of technological determinism (“anthropogenic”). As such, humans are themselves a weak link. If, for example, a 90%-fatal variant of Covid were to emerge, killing billions, that would put a severe crimp into the further evolution of the technological system. Clearly, it is, for the moment, still in the interest of the system that humans stay alive and moderately functional.
But when we enter Phase Two of tech determinism (“autogenic”), then the system becomes truly autonomous and truly independent of human beings. This could coincide with a ‘singularity’ date, which, as I just explained, seems to be coming sooner rather than later. When humans no longer serve the needs of the system, they risk becoming an outright obstacle to the system, and thus face elimination.
And as you suggest, I worry that the system already has developed super-human capabilities and is already plotting several moves ahead of us, such that we cannot even contemplate what to do. If true, it would protect itself by (a) keeping humans relatively well and functioning, (b) spreading propaganda that tech is “good and necessary”, and (c) covering up awareness of its weak points (energy, humans). And this is precisely what is happening at present.
I am deliberately using anthropomorphic language here, treating the system like a conscious and volitional entity. And I think that it is. But this is not essential. What matters is that it functions as if it were conscious and volitional. And this is undeniable. Since it functions this way, we must treat it that way and act accordingly. We must treat it as an enminded entity that does not have our long-term best interests at heart—not because it is ‘evil’ but because it has evolved to have its own defense mechanisms and its own moral code.
This is to be expected. Lions don’t have human interests at heart when they act, not because they are evil but because it is in their nature as apex predators. The technological system is the new apex predator on the planet, and it acts like all predators. If we don’t like that fact, we need to do something, and fast.
But there is also a small potential silver lining here: Lions don’t eat ants or worms; they only attack large mammals. If we miss our window of opportunity to act against technology and it surpasses us, our best course of action may be to lie low, get the hell out of the way, and hope that it doesn’t mess with us ‘ants.’ This would be a sad and degrading future for humanity; but if we don’t like it, then we need to act.
Well, there you have it. I am very grateful to David for taking the time to talk with me on perhaps the most important topic of modern times: the danger of letting technology grow without limit. I am sorry if this all sounds grim, but it is. It is not an exaggeration to say that, more than anything else, technology is the danger that we probably have very little hope of escaping, unless we wake up and do something about it, such as regression to simpler times, or the “creative reconstruction” that David describes.
For all my readers, I highly suggest reading “The Metaphysics of Technology” by David Skrbina. It is one of the most important books of our time. Also highly recommended is his “Confronting Technology”, an edited anthology of critiques of technology from ancient to modern times, which, in particular, includes Ted Kaczynski’s lucid essay, “Industrial Society and Its Future”.
Dr. Skrbina has held appointments at the University of Michigan-Dearborn and the University of Helsinki, Finland. He has written many other articles and books, including “Panpsychism in the West” and “The Jesus Hoax”.