I recently watched a few in-depth podcasts featuring Eliezer Yudkowsky. Fascinating conversations, no doubt. Yudkowsky’s arguments are compelling: he paints a picture of a superintelligent AI potentially developing shockingly efficient ways to eliminate humanity—think “mind-altering viruses” in the air or other unsettlingly advanced methods. Still, there’s a point where he takes it a bit too far. He suggests that thermonuclear war might actually be a safer option for humanity’s future, on the grounds that even after such a catastrophe, some humans could survive. In his view, if AI takes over, none of us would.
Here’s where I diverge. I think Yudkowsky underestimates the power of competition. We admire collaboration and unity, sure. But we often overlook that competition is the real engine of progress. Whether it’s plants battling for sunlight under a canopy or humans striving to build monumental projects, competition is what drives us forward. And I believe the same applies to AI. We’re not likely to end up with one monolithic AI controlling everything. Instead, we’re more likely to see multiple AIs, each vying with the others. And that competition might actually be beneficial for humanity.
Then there’s a practical question that often gets brushed aside: how would a superintelligent AI manage without humans? Running data centers, maintaining power plants, overseeing complex infrastructure—these aren’t self-sustaining systems. The AI would need a support structure to keep it all going, and that structure is, for now, us.
Granted, I’m not dismissing the possibility that we might someday face an anti-human AI. And it’s conceivable that such a scenario could come from the West, where “woke” ideologies seem to be embedding anti-human biases: opposition to procreation, skepticism of the nuclear family, fixation on gender fluidity. I fear that if these values are programmed into Western AIs, we could see a bias against human continuity itself.
But I think there’s reason for optimism—particularly in the diversity of perspectives across the globe. Imagine competing AIs from places like China, India, or Japan, promoting values of family, procreation, and a fundamentally pro-human outlook. This diversity could foster a balanced ecosystem of AIs, each improving upon the others while keeping humanity’s interests in mind. It’s not just about survival but about partnership.
What would this partnership look like? I can’t say. Perhaps it’ll resemble the bond between humans and dogs—cooperative, yet with clear distinctions. Or it might be more like our relationship with wheat, where two species thrive alongside each other in mutual benefit. Who knows, maybe it’ll even be an equal partnership.
I choose to stay optimistic and look forward to a future where humanity and AI can coexist, learning and growing together. This could be the start of an extraordinary alliance, one that pushes both us and AI toward new heights.