AI is smart enough to detect breast cancer more accurately than human radiologists.
But what happens if AI actually starts to see humans as the obstacle to progress?
For example, if an AI is built to fix climate change and it sees humans as the main cause, what’s stopping it from making decisions that remove humans to save the planet?
I consult enterprise-level brands on how to use AI, and I believe we’re building AI to be much more than just a tool.
AI could improve our lives or it could become our biggest nightmare.
AI companies say they’re pushing boundaries to build something that helps us all. But as someone who’s worked with AI since its early days, I can tell you that AI is way more dangerous than you think.
OpenAI and DeepMind are two of the biggest names in the AI world right now.
OpenAI started back in 2015 with the mission to build super smart AI that helps everyone, not just tech billionaires.
At first, OpenAI was a nonprofit, because the founders were seriously worried that AI could be dangerous if it fell into the wrong hands. They wanted to build it safely, for humanity and not just for money.
But then in 2019, they switched things up. OpenAI became a “capped profit” company, which meant investors could earn returns of up to 100x their investment, but nothing more. The goal stayed the same: people over profit.
Fast forward to May 2025, and OpenAI changed again, this time into a Public Benefit Corporation (PBC). Now there’s no cap on investor returns, but the nonprofit parent still controls the mission, so it doesn’t lose its people-first focus.
OpenAI is now in the process of securing a $30 billion deal with SoftBank, pushing its valuation to around $300 billion.
This all sounds like progress, and in many ways, it is. But with each breakthrough, we hand over a little more control. The systems are getting smarter, the stakes are getting higher, and almost without anyone noticing, the line between tool and decision-maker is starting to blur.
Let’s take DeepMind, which is owned by Google but has a slightly different mission. They want to “solve intelligence”. And what exactly are they going to do with that?
They want to use AI to put an end to huge problems, things like cancer and climate change. And there’s a real possibility it can happen.
AI has already proven itself in medicine. DeepMind’s AlphaFold cracked a 50-year mystery by predicting how proteins fold, which is a key to understanding diseases like cancer and Alzheimer’s.
AI isn’t just following commands anymore. ChatGPT can tell stories, help with homework, or explain rocket science. AlphaGo, from DeepMind, even beat the world champion in Go, which was supposed to be impossible for a machine.
Like fire, electricity, and the internet before it, AI is the kind of invention that changes the world. But this is also where the real risk starts.
With tools like ChatGPT, DALL·E, and Codex, we can write, draw, and code faster than ever. But at the same time, we have to worry about things like fake news, deep fakes, and our personal data being used without permission.
And public concern is growing fast. A 2023 study titled “What Do People Think About Sentient AI?” found that 63% of Americans support banning AI that becomes smarter than humans, and 69% support banning sentient AI altogether.
Geoffrey Hinton, often called the “Godfather of AI,” left Google in 2023 because he was concerned about how quickly AI was advancing. Now, he’s speaking out, saying we need to be careful and pay attention to the risks.
Some scientists even believe that by 2061, AI could be smarter than humans at almost everything. And others think it will happen a lot sooner.
And the more you look at how fast it’s moving, the harder it gets to ignore the question: Are we trying to build a tool, or are we trying to create a god?
The real reason behind the AI race might be more than just solving problems. It’s tapping into something ancient, a timeless human hunger. Not just to understand the world but to control it. To create life, and even to ‘play God.’
For centuries, people built temples. Now, we build data centers. We used to worship divine knowledge. Now, we train machines to know everything.
And just like in the old myths where creators lose control of their creations, think Prometheus, Frankenstein, even the Tower of Babel, we may be climbing higher than we were ever meant to go.
Most people hope AI will be our savior by fixing what we can’t. Stopping pandemics. Ending hunger. Providing education for all. But AI is already doing things we don’t fully understand. And that’s where the myth starts to crack.
We’ve seen GPT-4 do things nobody taught it to do. It’s showing something called emergent behavior. These are abilities that weren’t programmed, but just appeared seemingly out of nowhere.
It can translate new languages, solve logic problems, invent concepts, and it surprises even the people who built it. This is a human creation starting to act on its own.
This is why scientists are terrified of what’s called the alignment problem: how do we make sure a superintelligent AI still does what we want, and not what it thinks is best?
Right now, no one knows the answer. No one knows how to build a mind smarter than ours and still keep it under control.
In May 2023, more than 350 AI scientists and tech leaders, including Geoffrey Hinton, Yoshua Bengio, Sam Altman, and Bill Gates, signed a statement that said, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Let me repeat, they were talking about extinction from AI. Because if AGI surpasses human intelligence, it may start making godlike decisions, about who eats, who works, what we see, what we believe.
It’s not that AI will hate us, it just won’t need us. This is the real risk. That humans become irrelevant to AI itself. Imagine a future where AI runs all the jobs, makes all the laws, distributes all the resources. And humans are just in the way.
Creating something smarter than us doesn’t mean it will be safe or benevolent. Gods don’t always answer prayers. Sometimes, they stop listening altogether.
And what we’re building doesn’t have feelings, empathy, or morality. It just has a mission. And if we’re not part of that mission, we don’t get a seat at the table.
So, the big question is: are we building a helpful tool, or are we making something that could take our place in the world? Share your comments below and let us know if you think this is a reality or just hype.
AI like ChatGPT was trained on data from the internet, books, websites, articles, code, and conversations. No one sat down and taught it like a student. It absorbed human knowledge. Like scripture written by billions of minds over decades. Then it began to generate its own insights from it.
With so much data at its fingertips, it can now think, talk, and learn in ways that feel almost human. It’s not alive, but it’s getting close.
And that’s just one AI. Others are already pushing boundaries in science, art, music, even emotion. They're starting to mimic the very things that once made us feel uniquely human.
For most of history, power flowed from the top down. Kings, priests, presidents, institutions. Whether through divine right or democratic rule, humans held the authority.
But today, we could be entering a reality that’s much different. Something more unpredictable, something more godlike.
Open-source models like LLaMA and Stable Diffusion are now available to anyone with a laptop. You don’t need armies or empires to wield power anymore. Just code and compute.
Even kids in small towns can summon intelligence that would’ve made ancient civilizations bow in awe. And while that sounds empowering, it also means control is slipping away.
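To give you a sense of how low the bar really is, here’s a minimal sketch of what “just code and compute” looks like in practice, assuming you’ve installed the open-source Hugging Face diffusers library and downloaded a Stable Diffusion checkpoint (the model name and prompt below are just illustrative):

# A rough sketch: running an open image model on an ordinary laptop.
# Assumes the Hugging Face "diffusers" library is installed and the
# Stable Diffusion weights ("runwayml/stable-diffusion-v1-5") are downloaded.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cpu")  # works without a GPU; a GPU just makes it faster

image = pipe("a cathedral built from circuit boards, oil painting").images[0]
image.save("cathedral.png")

That’s roughly the whole program. No lab, no license, no permission required.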
Some people are selling powerful AI models on the black market. Others are modifying them in private. New AI labs are popping up across the globe, unregulated, unchecked, and often unwilling to share what they’re building.
Experts like Geoffrey Hinton, Yoshua Bengio, and Elon Musk have warned that superintelligent AI may soon pass a point of no return, where no human, company, or country can control it.
And in 2022, a survey of hundreds of AI researchers found that nearly half believe there’s at least a 10% chance this ends in an existential catastrophe.
A double-digit probability that our greatest creation becomes our final one.
But just because we’re creating something godlike doesn’t mean it has to destroy us. If we get this right, AI could actually help us live longer, learn faster, and solve problems we never could before.
Imagine finding a cure for cancer. Not in decades, but in weeks. Or getting a personal tutor for every kid in the world, in every language, on a phone they already have, no matter what their family can afford.
AI could make sure food doesn’t go to waste while people are starving. And it could warn us about floods, fires, or storms before they happen. This is already starting to happen in hospitals, schools, and labs around the world.
But none of that will matter if we don’t build this tech with the right values from the start. Things like fairness, honesty, safety, and making sure no one gets left out.
We need rules that protect people. We need to work together across countries. And we need to fund the kind of research that puts people first, not profit or speed.
The future isn’t decided yet.
This could go really wrong, or it could be the best thing we’ve ever done.
Because the real question isn’t: “Are we creating a god?”
The real question is: “What’s going to happen when we do?”
Talk again soon,
Samuel Woods
The Bionic Writer