Uxtopian

What Happens When You Ask AI to Disagree With Itself

Most of what we build with AI right now is convergent. Ask a question, get an answer. Summarize this. Generate that. The whole interaction model assumes the value is in the output — the single, confident response. I wanted to see what happens when you invert that. Instead of asking AI to converge on an answer, what if I asked it to sustain a disagreement?

I built a thing called the Symposium. Nine thinkers I admired — Kurzweil, Wiener, Rushkoff, Lanier, Zuboff, Bostrom, Taleb, Chaum, Han — sit around a virtual table. You pose a question. They debate for 30 simulated minutes. At the end, they’re forced to find consensus — not by averaging positions, but by identifying what’s genuinely shared and what’s irreconcilable. The consensus block is often the most revealing part.

Ask it something easy and you’ll get a polite exchange. Ask it something hard — “Is privacy a human right or a market failure?” (a question that came up in my conversation with Robert Stribley) or “Should AI systems be allowed to disagree with their users?” — and the gaps between thinkers become the most interesting part of the conversation.

We’ve spent a while now optimizing AI for agreement. The user asks, the model answers, the interaction ends. But the problems worth solving — in product, in organizations, in design, in life — don’t have single answers. They have tensions. Trade-offs. Perspectives that are individually coherent and mutually incompatible. Taleb thinks the risk is fragility. Zuboff thinks it’s surveillance. Kurzweil thinks the upside overwhelms both concerns. They’re all right. The shape of their disagreement tells you more about the question than any one of their answers does.

The gaps between perspectives are where the real information lives. Not in what anyone thinks, but in where they can’t agree — and why.

The AI wasn’t the hard part. The interaction model was. With nine voices generating text sequentially, the default experience is a wall of paragraphs. Nobody reads that. The challenge was making it feel like a room — like you’re overhearing a conversation that has rhythm, interruption, momentum. Every design decision came down to the same question: how much control does the user get, and when?

The debate streams in real time. You can sit back and watch, or you can hit the barge button, which stops the conversation dead and forces the floor open. You interrupted. Now say something. The thinkers respond to your interjection in the next phase. You’re not a spectator anymore.
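The barge interaction boils down to a simple pattern: the streaming loop checks a flag on every turn, and the button sets it. A minimal sketch of that idea — the names and structure here are hypothetical, not the Symposium's actual implementation:

```python
import threading

class DebateSession:
    """Toy model of a streaming debate the user can barge into.
    Hypothetical sketch; pre-scripted turns stand in for model output."""

    def __init__(self, turns):
        self.turns = list(turns)          # (speaker, text) pairs
        self.barge = threading.Event()    # set when the user hits the barge button
        self.transcript = []

    def stream(self):
        for speaker, text in self.turns:
            if self.barge.is_set():
                # Stop the conversation dead and force the floor open.
                self.transcript.append(("floor", "You interrupted. Now say something."))
                return "barged"
            self.transcript.append((speaker, text))
        return "finished"
```

Because the flag is an `Event`, the button handler can set it from any thread and the generation loop notices on its next turn — the conversation halts at a turn boundary rather than mid-sentence.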

The debate keeps generating in the background even when you pause. Turns queue up silently — “3 waiting,” “5 waiting” — until you’re ready. It’s the difference between a live stream you can’t rewind and a room you can step out of and come back to. Sounds small. Turns out it’s what makes long debates readable.
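Under the hood this is a producer–consumer buffer: generation pushes turns in, the reader drains them on resume, and the queue depth drives the “waiting” badge. A sketch of the pattern (hypothetical names, not the actual implementation):

```python
import queue

class TurnBuffer:
    """Decouples generation from reading: turns keep arriving while
    the reader is paused, and queue up until they're consumed."""

    def __init__(self):
        self._pending = queue.Queue()

    def push(self, turn):
        # Called by the generator as each turn completes.
        self._pending.put(turn)

    def waiting(self):
        # Drives the "3 waiting" badge in the UI.
        return self._pending.qsize()

    def drain(self):
        # Called when the reader resumes; returns everything queued.
        turns = []
        while not self._pending.empty():
            turns.append(self._pending.get())
        return turns
```

The design choice worth noting: the generator never blocks on the reader, so pausing costs nothing on the model side — the conversation keeps its momentum whether or not anyone is watching.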

There’s a one-time-use wildcard I call the instigator. Type any public figure’s name — Kate Crawford, Cornel West, whoever — and they get summoned into the room. They read everything that’s been said and challenge it. Then all nine thinkers respond to the outsider. You’re introducing a perspective the room didn’t account for and watching what breaks.

After the first phase of debate, the floor opens for 12 seconds. You can throw a question into the room, redirect the conversation, provoke a specific thinker, or skip and let them continue. The countdown creates just enough pressure to make you commit to something rather than passively watching. After both phases, the system synthesizes — not a summary, but a genuine attempt to find what nine different frameworks can agree on. You can click on each thinker for their individual take. What survives that process is usually a much more precise version of the question you started with. What doesn’t survive tells you where the real fault lines are.
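The synthesis step described above — shared ground versus fault lines, rather than an average — can be reduced to a set operation: consensus is what every framework endorses; everything else is where the disagreement lives. A hypothetical sketch of that partition, assuming each thinker's take has been distilled into discrete claims:

```python
def synthesize(positions):
    """positions: dict mapping thinker -> set of claims they endorse.
    Returns the genuinely shared claims and the contested remainder.
    Hypothetical sketch of the synthesis idea, not the actual system."""
    all_claims = set().union(*positions.values())
    shared = set.intersection(*positions.values())     # endorsed by everyone
    contested = all_claims - shared                    # the fault lines
    return {"shared": shared, "irreconcilable": contested}
```

The point of the sketch is the asymmetry: averaging would blur positions together, while intersection keeps only what survives every framework — and the remainder is exactly the map of where the room can’t agree.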

The thing that surprised me most: the quality of the output depends almost entirely on the quality of the question. Vague question, vague debate. A precise, loaded question — one with genuine tension built in — produces exchanges I’d actually want to read. Same thing I’ve seen in every design org I’ve led. The bottleneck is never the talent or the tools. It’s the clarity of the question. AI just makes that pattern impossible to ignore because the feedback loop is immediate. Ask a bad question, get a bad debate, in seconds.

Watching AI model disagreement is more useful than watching it model agreement. When I ask Claude to help me think through a product decision, the single-voice answer is fine. But when I can see the Rushkoff perspective and the Bostrom perspective collide on the same question — that’s when I actually change my mind about something.

Ian Alexander

VP of Design — writing on leadership, AI product strategy, and building teams that ship.