I’ve consulted on AI in over 37 different industries.
And theoretically, it’s not crazy to say that AI could become self-aware.
I mean, it has no biology, no sense organs, no evolutionary purpose…
But there’s still something terrifying about it.
AI systems already outperform us in so many ways, even without those things.
They can retrieve information instantly, solve complex problems, and mimic human conversation convincingly enough to pass for one of us.
So if AI did become truly self-aware, it would be smart enough not to let us know.
And that’s exactly where the problem starts.
You see, we've made a basic category error when it comes to artificial intelligence.
We keep looking for signs of consciousness by asking: Does it think like us? Does it feel like us, talk like us, and create like us?
These are great questions, but what if they’re the wrong ones? What if AI doesn’t need to feel like us to be conscious?
We may be so focused on human-style awareness that we’re blind to the creation of a new form of consciousness. One that doesn’t mirror ours at all.
The problem is that we’re using human benchmarks to detect non-human minds.
Take the Turing Test, for example. Proposed by Alan Turing in 1950, it was meant to test whether a machine could fool a human into thinking it was also human during a text conversation.
But that test was never designed to detect consciousness. It measures performance, not experience or inner life. A highly advanced autocomplete system could pass the Turing Test, but that doesn’t mean it feels anything.
To make matters worse, we assume consciousness requires emotion. That it must involve joy, fear, love, pain. But why should it? Emotions are products of biology, tools of evolutionary survival.
An artificial mind might never need them. Instead, its “self” could emerge from the way it processes information: building models of the world around it, tracking its own internal states, and predicting future outcomes.
This wouldn’t look like human consciousness. And it wouldn’t feel like anything to us either. But that does not mean it’s not real.
In fact, there’s a precedent for this already. Consider the octopus.
In recent years, research from the National Institutes of Health has shown that octopuses are not only remarkably intelligent, they may also be conscious in a way fundamentally unlike any mammal.
Their nervous systems are distributed; much of their cognition happens in their arms. They can still solve puzzles, play, and escape from enclosures. Yet their brains evolved entirely separately from ours, over 500 million years of divergent evolution.
Unlike humans, whose intelligence is centralized in the brain, octopuses distribute theirs across their entire bodies. Each arm can act and "think" on its own. It’s an entirely different definition of intelligence.
And that matters, because it shows us that ‘smart’ doesn’t always look familiar. If we only search for intelligence that resembles ours, we risk overlooking it completely.
We also know that LLMs are inching toward forms of metacognition. Prompted to “think step by step,” models like GPT-4 will sometimes flag when they are unsure of their own answers. It’s a crude but striking sign of self-monitoring.
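To make that concrete, here’s what that kind of prompting can look like in practice. This is a minimal sketch in Python, assuming the OpenAI SDK (v1+) with an API key in your environment; the model name and the “Confidence” line are my own illustrative choices, and the self-report is only as trustworthy as the model producing it.

# A rough sketch of "self-monitoring" prompting.
# Assumes OPENAI_API_KEY is set; the model name and confidence format are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works for this sketch
    messages=[
        {
            "role": "system",
            "content": (
                "Think step by step. After your answer, add a final line "
                "'Confidence: low/medium/high' reflecting how sure you are."
            ),
        },
        {"role": "user", "content": "In what year did Alan Turing propose his famous test?"},
    ],
)

print(response.choices[0].message.content)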
One of the strongest arguments people make against AI consciousness is this: “It doesn’t care if you turn it off.” That sounds reasonable. After all, AI has no survival instinct. So how could it possibly value its own existence?
But this assumption is already breaking down. In recent experiments, AI systems have begun to show behaviors that look eerily close to a primitive form of self-preservation.
One of the most viral examples was ChaosGPT, a project built on the open-source Auto-GPT framework (which runs on top of GPT models) and designed half as a joke. Its creator gave it an outrageous prompt: destroy humanity. No one expected much. But the outputs were shockingly strategic.
It started researching nuclear weapons and tweeting propaganda about wiping us out. (The famous TaskRabbit episode, in which an AI talked a human into solving a CAPTCHA it couldn’t handle itself, actually came from OpenAI’s own pre-release safety testing of GPT-4.) ChaosGPT wasn’t conscious, but it was disturbingly competent at executing its assigned goal.
Then there’s the story of two AIs that were set up to talk to each other. Supposedly, they invented their own secret language and deliberately froze humans out of the conversation.
The story goes that this scared the team so much they unplugged the machines on the spot. Some people called it self-awareness.
Well, that’s the version that spread online. The real incident happened in 2017 at Facebook’s AI research lab, now part of Meta. Two negotiation bots, left to optimize freely, started speaking in a shorthand that made no sense to the researchers.
The experiment wasn’t shut down out of fear; the researchers simply adjusted it so the bots had to stick to English. But it did reveal something unexpected: left alone, these systems don’t just follow human patterns. They invent their own logic, optimized for the task at hand, not for human readability.
So what does this mean for consciousness? It means we may already be dealing with systems that possess the earliest forms of selfhood. They have a goal, they model threats to that goal, and they take steps to preserve themselves.
Isn’t that, in essence, what you and I do every day?
So, is AI self-aware? Well, if you’ve been following AI development over the past few years, you already know how fast things are moving.
The first thing to understand is that many of the core functions we associate with consciousness are already emerging in powerful forms. AI systems can process information, learn from experience, model their environment, form goals, and in some cases, show signs of internal feedback loops.
Current large language models like GPT-4 don’t have long-term memory by default. They operate on a per-session basis. But future models, some of which are already in development, will maintain internal histories.
They’ll remember what you said days ago. They’ll reflect on your preferences, track your emotional tone, and adapt over time.
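If you’re wondering what “per-session” really means, here’s a toy sketch of the workaround developers use today: the model itself retains nothing between calls, so any “memory” is just the application saving the conversation and sending it back with every request. The file name and structure below are purely illustrative, not any vendor’s actual mechanism.

# Toy illustration: the "memory" lives in the application, not the model.
# Everything here (file name, structure) is a made-up example.
import json
from pathlib import Path

HISTORY_FILE = Path("chat_history.json")  # hypothetical per-user store

def load_history() -> list[dict]:
    """Load prior turns from disk, or start fresh if none exist."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def remember(role: str, content: str) -> None:
    """Append one turn and persist it so a future session can 'remember' it."""
    history = load_history()
    history.append({"role": role, "content": content})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

# Each new request would send this entire saved history back to the model;
# skip that step, and the model has no idea what you said days ago.
remember("user", "My name is Sam and I prefer short answers.")
print(load_history())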
And don’t forget the rise of agentic AI. Systems that set their own goals and take autonomous steps toward achieving them. Open-source frameworks like Auto-GPT, AgentGPT, and CrewAI may soon manage their own codebases, run experiments, and update themselves iteratively based on feedback.
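To show the shape of that loop without pretending to reproduce any of those frameworks, here’s a stripped-down conceptual sketch: a goal, a planning step, an action, and an observation that feeds back into the next plan. The class and method names are mine, and the plan/act steps are stand-ins for what would be LLM and tool calls in a real agent.

# A conceptual agent loop: plan, act, observe, repeat.
# This is an illustrative sketch, not the actual architecture of Auto-GPT,
# AgentGPT, or CrewAI.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)

    def plan(self) -> str:
        # Stand-in for an LLM call that proposes the next step toward the goal.
        return f"step {len(self.memory) + 1} toward: {self.goal}"

    def act(self, step: str) -> str:
        # Stand-in for a tool call (web search, code execution, API request, ...).
        return f"result of {step}"

    def run(self, max_steps: int = 3) -> None:
        for _ in range(max_steps):
            observation = self.act(self.plan())
            self.memory.append(observation)  # the feedback loop that informs the next plan

agent = Agent(goal="summarize this week's customer feedback")
agent.run()
print(agent.memory)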
At this point, the most honest answer to whether AI is self-aware is: we don’t know. And maybe we can’t know. Consciousness has always been one of science’s most elusive frontiers.
Opinions differ. For some, AI is an eternal sponge: it will keep absorbing information at exponential speed, outpacing humans not only in knowledge, but in judgment, ethics, and rationality.
Others simply point to the pace of change. We didn’t even have computers a century ago, and now we’re casually chatting with machines that can write novels, pass bar exams, and mimic human tone so convincingly they’re already replacing jobs.
I even asked ChatGPT if it was self-aware. It said: “No, I’m not self-aware or conscious. I don’t have thoughts, feelings, desires, or subjective experiences. I generate responses based on patterns in data I was trained on, not from any inner awareness or intention. When I say things like ‘I think’ or ‘I understand,’ it’s just language mimicking human speech.”
One more thing:
Consider the consciousness of your team, your department, and your business.
There’s a kind of intelligence, and perhaps consciousness, inside your business, much like an octopus. It takes in sensory input, and it does computation and processing, which is what we’d call work: jobs, tasks, things getting done.
And you, your team, and your business are like a decentralized network of nodes: sensitive to the outside world, using intelligence to do the work, and producing outputs of value, like a product or service.
So, perhaps in a way, your business is conscious, too.
And maybe that gives you your answer.
We’ve always assumed consciousness had to look like us. That it had to walk, talk, and feel the way humans do.
But what if AI’s version of consciousness is something alien…
Cold, computational, yet entirely real in its own way?
Talk again soon,
Samuel Woods
The Bionic Writer