Uxtopian

We’re more patient with AI than with one another

I’ve been thinking about how quickly we’ve adapted to working with AI. We all understand the deal. If the output is bad, it’s probably on us. The prompt was vague. The context was missing. We didn’t give it enough constraints.

So we revise. We clarify. We try again. No frustration. No judgment. Just iteration.

What’s strange is how little of that generosity we extend to each other. Somewhere along the way, we learned to treat machines as systems that need better inputs — but we still treat humans as if they should just know. And when they don’t, we judge competence, take it personally, make assumptions, or shut down.

That gap keeps showing up for me, especially as a product and design leader working at the intersection of people, systems, and AI. It’s also what pulled me back to The Four Agreements by Don Miguel Ruiz — not as a spiritual guide, but as a surprisingly practical framework for modern work.


AI takes us literally. We’ve all learned that the hard way. If we’re unclear, the result reflects it. With humans, though, we’re often loose with language. We imply instead of saying. We soften when we should be direct. We use cleverness when clarity would do more good.

Being impeccable with your word doesn’t mean being rigid or verbose. It means saying what you mean and owning your intent. Words shape outcomes. In leadership, language is one of the primary tools we have. It can create alignment or quietly destroy it. Clarity equals kindness.

When AI pushes back or gives us an answer we don’t like, we don’t get offended. We treat it as information. When a human does the same thing, it suddenly feels different. Feedback feels loaded. Questions feel like challenges. Disagreement feels like judgment. Most of the time, it’s none of those things.

In product organizations, this shows up constantly. Engineering resistance. Executive skepticism. User frustration. These aren’t personal attacks. They’re signals. Often, they’re downstream effects of unclear strategy, misaligned incentives, or someone simply having a bad day.

AI punishes assumptions immediately. You think it understands what you meant. It doesn’t. So you ask better questions. You add clarity. You iterate. With humans, we skip that step. We assume intent. We assume competence, or lack of it. We assume alignment that was never explicitly established.

Strong leaders replace assumptions with curiosity. They ask before they conclude. They clarify before they judge. Not because they’re soft, but because outcomes matter. Working with AI has trained many of us to be better question-askers. We just haven’t fully applied that skill back to human systems yet.

We’ve learned how to collaborate with machines faster than we’ve learned how to collaborate with each other. AI has taught us to be clearer, be less reactive, iterate instead of blame, and separate intent from outcome. The opportunity now is to bring those behaviors back into our human relationships. If we can patiently refine prompts for a system with no feelings, we can afford to be a little more patient with the people building alongside us.

That might be the real work of leadership in this next era.


Ian Alexander

VP of Design — writing on leadership, AI product strategy, and building teams that ship.