The Dialectical Machine: Using Prompts to Think Against Yourself
Most people use LLMs to validate their thinking. But it’s more useful to destroy weak ideas before they escape into the world.
The Self-Confidence Trap
I was convinced I’d figured something out.
After months of working with various AI models on creative campaigns, I’d developed a theory about why most AI-generated content fails. The problem, I declared to anyone who would listen, was “emotional flatness.” AI couldn’t capture the emotional texture of human experience. It produced technically correct but soulless output because it lacked subjective experience.
I wrote about this. I talked about it in meetings. I built entire frameworks around it.
Then a colleague asked a simple question: “Have you actually tested that? Or is this just a feeling you have?”
I hadn’t tested it. I’d assumed it. The theory felt true because it confirmed what I already believed about the relationship between consciousness and creativity. I’d dressed up an assumption in the language of insight.
When I actually examined the LLM outputs I’d been dismissing, I found something different. The “emotional flatness” wasn’t inherent to AI. It was a consequence of how I was prompting. I was asking for outcomes without providing emotional context. The flatness was mine, reflected back.
My theory didn’t survive contact with scrutiny. It collapsed the moment someone asked me to defend it with evidence rather than conviction.
This is the trap many of us fall into. We mistake fluency for understanding. We confuse the ability to articulate a position with the validity of that position. And we rarely encounter anyone willing to push back hard enough to expose the difference.
The Yes-Machine Problem
LLMs can, by default, make this worse.
Ask ChatGPT or Claude a question, and you’ll get a competent-sounding answer. Write something and paste it in, and it’ll polish your prose without questioning whether the underlying idea deserved to exist. Request feedback, and you’ll receive gentle suggestions framed with diplomatic hedges.
This is how these systems are trained. They’re optimized to be helpful, agreeable, collaborative. They’re designed to assist rather than oppose.
So by default, LLMs become the ultimate yes-man. They validate half-formed thinking with well-structured paragraphs. They give flimsy ideas the appearance of rigor, and they let you feel productive while avoiding the uncomfortable work of genuine examination.
Most people using LLMs for writing and thinking are doing exactly this. They bring vague intentions into the conversation and receive polished vagueness back.
This is backwards.
One powerful tradition of rigorous thought (one of the dominant ones in the West) depends on opposition. Socrates didn’t help Athenians feel good about their beliefs. He stung them like a gadfly until their assumptions collapsed or hardened into something defensible. The scientific method institutionalized doubt. Peer review exists specifically to find holes in arguments. Hegel built an entire philosophy around the collision of thesis and antithesis.
This isn’t the only way to think rigorously. For example, Alfred North Whitehead’s process philosophy emphasizes integration and creative advance rather than combat. The hermeneutic tradition seeks fusion of horizons. But the dialectical mode—thought refined through opposition—is a specific capacity most of us have lost access to. We lack sparring partners. We lack people willing to tell us we’re wrong.
Good thinking requires friction. Ideas that never meet resistance remain untested, and untested ideas are indistinguishable from pleasant-sounding noise.
The problem is that most of us have no one who will seriously challenge our thinking anymore. Social media feeds agree with us. Our inner monologue is an echo chamber that sounds increasingly like a feed of content we already approve of.
LLMs can change this. They can be deliberately adversarial.
The Reframe: LLM as Opponent
There can be a lot of value in using LLMs to think against yourself.
Use the LLM as an opponent instead of as an assistant. Ask it to find the holes in your thinking.
This requires a fundamentally different relationship with the technology than most people have. You’re inviting attack rather than requesting agreement or production.
I call this Dialectical Prompting. It’s structured conversation with an LLM where the explicit goal is resistance, questioning, and pressure-testing. The LLM becomes a Socratic interlocutor whose job is to expose what you haven’t examined.
This matters because most of us can’t do this for ourselves. We’re too attached to our own ideas. We’ve spent too long developing them to see their weaknesses clearly. We need an external force that doesn’t care about our feelings, doesn’t worry about damaging the relationship, and has no incentive to let bad thinking slide.
An LLM, properly configured, can be that force.
What follows is a framework for using LLMs as dialectical machines. Four distinct modes of adversarial engagement, each designed to strengthen thinking by subjecting it to specific kinds of opposition.
Mode 1: Excavate Your Assumptions
Every idea stands on hidden foundations: beliefs we don’t examine and premises we treat as facts.
Most weak ideas aren’t wrong on their own terms. They’re built on foundations that wouldn’t survive scrutiny if anyone bothered to dig them up. Assumption Excavation is the process of finding those foundations and testing whether they hold weight.
What You Bring to This Dialogue
Before engaging an LLM in assumption excavation, you need:
A specific claim or position you hold. A vague interest area won’t work. You need a concrete belief you’re willing to defend. “Content marketing is changing” is useless. “Most B2B companies should stop producing blog content entirely because the attention economics have inverted” gives the LLM something to work with.
Your reasoning for holding this position. Write out why you believe what you believe before you start the conversation. This forces you to make implicit logic explicit, which is often where the weakest assumptions hide.
Willingness to have your position dismantled. If you enter this conversation hoping to be validated, you’ll unconsciously resist the examination. Enter with genuine curiosity about whether your idea survives.
The Dialogue Structure
Begin with this framing:
I want you to help me excavate the assumptions underneath a position I hold. Your job is not to argue whether I'm right or wrong, but to identify every belief my position depends on—especially beliefs I might not have consciously chosen.
Here's my position: [State your specific claim]
Here's my reasoning: [Explain why you believe this]
Now I want you to:
1. Identify the explicit assumptions my reasoning depends on—the premises I've stated or implied.
2. Identify the hidden assumptions—beliefs I'm treating as facts without examination, premises I haven't stated but my argument requires.
3. For each assumption, tell me: Is this something I've chosen deliberately, or something I've inherited from my context, training, or environment?
Start with the assumption that seems most foundational. If that one fails, does my entire position collapse?

After the LLM responds, your job is to engage. For each assumption it surfaces, ask yourself: Did I consciously choose this? Can I defend it? What happens to my position if this assumption is wrong?
Push the LLM to go deeper on the assumptions that feel uncomfortable. Those are usually the ones most worth examining.
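If you run this excavation often, it helps to keep the framing as a reusable template rather than retyping it. Here’s a minimal Python sketch of that idea; the function name and the condensed template wording are mine, adapted from the prompt above, not a standard API:

```python
# A reusable version of the Mode 1 framing. Fill in a concrete claim
# and your written-out reasoning, then paste the result into any chat LLM.
EXCAVATION_TEMPLATE = """\
I want you to help me excavate the assumptions underneath a position I hold. \
Your job is not to argue whether I'm right or wrong, but to identify every \
belief my position depends on, especially beliefs I might not have consciously chosen.

Here's my position: {position}

Here's my reasoning: {reasoning}

Now I want you to:
1. Identify the explicit assumptions my reasoning depends on.
2. Identify the hidden assumptions my argument requires but I haven't stated.
3. For each assumption, tell me whether I chose it deliberately or inherited it.

Start with the assumption that seems most foundational. If that one fails, \
does my entire position collapse?"""


def build_excavation_prompt(position: str, reasoning: str) -> str:
    """Fill the Mode 1 template with a specific claim and its reasoning."""
    return EXCAVATION_TEMPLATE.format(
        position=position.strip(), reasoning=reasoning.strip()
    )
```

The point of templating it is friction reduction: the less effort it takes to start an excavation, the more often you’ll actually do one.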
What Good Excavation Looks Like
The goal isn’t to abandon your position. It’s to understand exactly what it rests on.
Sometimes you’ll discover that your position depends on beliefs you hold for good reasons and can defend. That’s strengthening.
Sometimes you’ll discover that your position depends on beliefs you inherited without examination from your industry, your education, your social context. That’s valuable information. You can then decide whether to adopt those beliefs consciously or revise your position.
Sometimes you’ll discover that your position collapses entirely once you see what it requires. That’s the most valuable outcome of all, even though it doesn’t feel like it.
Mode 2: Force Precision
Vague language hides weak thinking. Abstract terms, hedged claims, and broad generalizations are often symptoms of ideas that haven’t been thought through completely.
Precision Enforcement is the process of removing every escape route from your thinking. It forces you to say exactly what you mean, in concrete terms, with specific referents. Ideas that can’t survive this process probably shouldn’t be shared.
What You Bring to This Dialogue
A draft, argument, or idea expressed in your own words. This works best with something you’ve already written, like a paragraph, a page, an outline. The LLM needs actual language to analyze.
Honesty about where you’re uncertain. Before you begin, mark the places in your writing where you know you’re being vague. Often we use abstract language precisely because we haven’t figured out what we actually mean.
Commitment to killing sentences. You need to enter this conversation willing to delete things. If a sentence can’t be made precise, it shouldn’t exist.
The Dialogue Structure
Begin with this framing:
I want you to act as a precision enforcer on the following text. Your job is to identify every sentence, phrase, or claim that hides behind vague language.
Here’s the standard: If a sentence could apply to anyone, any company, any situation—it’s not actually saying anything. If a claim uses abstract terms without concrete referents, it’s avoiding commitment. If an assertion presents assumptions as facts, it needs to either be defended or removed.
Here’s my text:
[Paste your draft]
For each problem you find:
1. Quote the specific language
2. Explain what’s vague or uncommitted about it
3. Ask me the question I need to answer to make it precise
Don’t rewrite anything for me. I need to do that work myself. Your job is to show me where the problems are.

When the LLM identifies vague language, resist the urge to defend it. Instead, try to answer the question it poses. If you can’t answer with specifics, you’ve found a hole in your thinking.
Continue the dialogue by asking:
Now that I’ve revised [specific section], does it survive? Or does the precision expose new problems?
What Good Enforcement Looks Like
Precision enforcement should feel uncomfortable. If it’s easy, either your thinking was already clear or the LLM isn’t pushing hard enough.
The goal is prose where every sentence commits to something specific and defensible. After this process, you should be able to point to any claim in your writing and explain exactly what it means, who it applies to, and why you believe it.
Some pieces won’t survive. You’ll discover that once the vague language is removed, there’s nothing left. The idea was all mist and no mountain. Better to find that out before you publish than after.
Mode 3: Steelman Opposition
The easiest way to feel smart is to argue against weak versions of opposing positions. We do this constantly without realizing it. We construct strawmen (caricatures of views we disagree with) and then knock them down with satisfaction.
This makes our thinking weaker, not stronger.
Steelman Opposition is the reverse. You ask the LLM to construct the strongest possible argument against your position: the version a brilliant, informed, good-faith critic would make. A genuine challenge, not a strawman or a parody.
If your position can survive the steelman, it’s probably solid. If it can’t, you need to either revise or abandon it.
What You Bring to This Dialogue
A position you’re confident about. This mode works best when you genuinely believe something and want to test whether you should. The more confident you are, the more valuable the steelman becomes.
Your best arguments for your position. Write these out before you ask for the opposition. This prevents you from unconsciously weakening your own case to make the steelman easier to defeat.
The identity of a specific critic. Steelmanning works better when it’s grounded in a real perspective. Who would disagree with you? What do they know that you might not? What values do they hold that lead them to different conclusions?
The Dialogue Structure
Begin with this framing:
I hold a position I believe is correct, but I want to test it against the strongest possible opposition.
My position: [State your claim clearly]
My arguments for this position:
[List your best reasoning]
Now I want you to steelman the opposition. Construct the strongest possible argument against my position. This means:
1. Assume the critic is intelligent, informed, and arguing in good faith
2. Use evidence and reasoning that would actually be persuasive
3. Attack my strongest points, not my weakest ones
4. Identify what I might be wrong about, not just where reasonable people might disagree
If it helps, argue from the perspective of: [Name a specific type of critic—an economist, a practitioner with 20 years of experience, someone from a different cultural context, etc.]
Make the argument strong enough that I have to actually wrestle with it.

After receiving the steelman, don’t immediately respond with counterarguments. Sit with it. Ask yourself: Is there truth here? What would I have to believe for this criticism to be valid? What evidence would change my mind?
Then continue:
That argument challenges me on [specific point]. Help me think through whether my position survives it. Play devil’s advocate as I try to respond.
What Good Steelmanning Looks Like
A good steelman should make you uncomfortable. If you can dismiss it easily, either the LLM didn’t construct it well or you’re not engaging honestly.
The goal isn’t to defeat the opposition. It’s to understand it fully, then decide whether your position holds. Sometimes you’ll find that the steelman contains insights you should incorporate or that your position was correct but for the wrong reasons. Sometimes you’ll find that you need to qualify or limit your claims.
Occasionally you’ll find that the steelman is simply right, and you’ve been wrong. That’s the most valuable outcome, though it never feels that way in the moment.
Mode 4: Cost Accounting
Every position has costs: things it requires you to give up, opportunities it forecloses, short-term sacrifices it demands.
Most of us ignore these costs when we’re enamored with an idea. We focus on benefits and overlook tradeoffs. Cost Accounting forces a complete inventory of what a position actually requires—what you’re paying to hold it, and whether the price is worth it.
What You Bring to This Dialogue
A decision, belief, or strategy you’re committed to. This mode works for ideas you’re actually implementing, not just entertaining. The costs only become real when you’re paying them.
Honesty about what you’ve already sacrificed. Think about what holding this position has already cost you. Relationships, opportunities, time, money, other beliefs you had to abandon.
Willingness to question your commitment. The point of cost accounting isn’t to make you feel bad about your choices. It’s to ensure you’re making them with full information.
The Dialogue Structure
Begin with this framing:
I hold a position (or I’m implementing a strategy) and I want to do a complete cost accounting. Your job is to help me see everything I’m paying to hold this position—including costs I might not have consciously acknowledged.
My position/strategy: [State clearly]
Why I hold it: [Your reasoning]
Now help me inventory the costs:
1. Short-term costs: What am I giving up right now? Money, time, attention, opportunities, relationships?
2. Opportunity costs: What paths does this position foreclose? What can I no longer do, say, or pursue if I’m committed to this?
3. Identity costs: What versions of myself become unavailable? What beliefs or values am I sacrificing?
4. Relationship costs: Who does this alienate? What connections does it make harder to maintain?
5. Reversibility costs: If this turns out to be wrong, how hard is it to reverse course? What becomes impossible to recover?
Be thorough. I want to see the full price, not a sanitized version.

After the LLM provides its inventory, interrogate it:
For [specific cost], is this actually required by my position? Or is it a cost I’m choosing to pay that I could avoid?

And then:
Given this full accounting, help me think through: Is the expected value still positive? What would need to be true for these costs to be worth paying?

What Good Cost Accounting Looks Like
The goal isn’t to talk yourself out of positions you genuinely believe in. It’s to ensure you’re holding them with clear eyes.
Some costs are worth paying. Some are even features rather than bugs; positions worth holding often require sacrifice. But you should be paying them consciously, with full understanding of what you’re giving up.
If the cost accounting reveals prices you’re not willing to pay, that’s critical information. Better to discover that now than after you’ve committed more deeply.
The Practice: Sustaining Adversarial Dialogue
These four modes aren’t meant to be used once and forgotten. They’re practices. The more you engage with them, the more automatic adversarial thinking becomes.
A few principles for sustaining the practice:
Don’t stop at the first response. A single exchange with an LLM is better than nothing, but real dialectical value comes from extended dialogue. Push back. Ask follow-ups. Make the LLM defend its criticisms. Go five, ten, twenty exchanges deep.
Resist the urge to seek comfort. You’ll be tempted to prompt the LLM toward validation. To soften its criticisms. To find reasons your position survives when it probably doesn’t. Notice this urge and resist it.
Write before you prompt. The worst time to encounter adversarial thinking is when your ideas are still formless. Do the work of articulating your position clearly before you subject it to examination. Otherwise you’re asking the LLM to stress-test mist.
Let ideas die. Not every idea deserves to survive. If Mode 2 collapses your argument into nothing, if Mode 3 produces a steelman you can’t answer, if Mode 4 reveals costs you’re not willing to pay, then let the idea go. The whole point is to find out which ideas are worth keeping before you invest more in them.
Keep a record of what died. There’s value in tracking which ideas didn’t survive, and why. Patterns emerge. You’ll notice categories of assumption you keep making, types of cost you keep ignoring, specific weaknesses in your thinking that recur.
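Going five, ten, twenty exchanges deep is easier when the loop is scripted so the full history stays in play. Here’s a sketch; the function names are mine, and the model call is abstracted behind an `ask` function you supply (an OpenAI- or Anthropic-style chat call that takes the message history and returns the reply):

```python
from typing import Callable


def run_dialogue(
    opening_prompt: str,
    ask: Callable[[list[dict]], str],
    pushbacks: list[str],
) -> list[dict]:
    """Carry one adversarial conversation several exchanges deep.

    `ask` is whatever sends messages to your model of choice; it receives
    the full history each time, so the model keeps the entire dialectic
    in context instead of reacting to isolated questions.
    """
    messages = [{"role": "user", "content": opening_prompt}]
    messages.append({"role": "assistant", "content": ask(messages)})
    for pushback in pushbacks:  # each follow-up keeps the pressure on
        messages.append({"role": "user", "content": pushback})
        messages.append({"role": "assistant", "content": ask(messages)})
    return messages  # the full record, worth saving for the "what died" log
```

Returning the whole message list also gives you the raw material for the record-keeping principle above: save the transcripts of the ideas that didn’t survive.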
Prompt Yourself As Much as the Models
AI can, improperly used, atrophy our capacity for rigorous thought. If we use it to generate output, to produce content, to skip the hard work of thinking, then we become intellectually and cognitively weaker. Our ability to examine, question, and refine our own beliefs degrades through disuse.
LLMs can also strengthen that capacity. They can strengthen it through opposition rather than agreement. And through forcing us to produce better rather than producing for us.
Socrates described himself as a gadfly, a stinging insect that kept Athens from becoming sluggish and complacent. He believed the unexamined life was not worth living. He spent his days asking uncomfortable questions, exposing hidden assumptions, forcing people to defend what they thought they knew.
Athens executed him for it. Uncomfortable questions have always been unwelcome.
But now we have access to a tireless Socratic interlocutor. One that doesn’t get offended when we dismiss its challenges and doesn’t require social navigation or relationship management. And one that will push back as hard as we ask it to, for as long as we can sustain the dialogue.
One last thing: however much you prompt an LLM, run the same prompts above on yourself and answer them. Then compare your answers to the LLM’s.
Aside from Dialectical Prompting, there’s another method that I think you’ll love: Process Prompting.
You’ll discover this in the next issue.
Talk again soon,
Samuel Woods
The Bionic Writer


