AI isn’t making UX worse. Sloppy designers are.

Recently, a partner sent me an image to use on a client site. At first glance, it looked like another rough AI mockup. But then we looked closer. The image had started as a photo taken by a well-known photographer, someone with millions of followers on Instagram. Then it had been put through generative AI. And then it had been sloppily photoshopped.
So it wasn’t just AI-generated, it was an IP violation layered with lazy design. And it was immediately recognizable as inauthentic. The lighting didn’t match. The proportions were off. The AI giveaways were everywhere: uncanny hands, distorted facial features, and subtle rendering issues.
It wasn’t just rough, it was reckless. The kind of thing that says, “This was put through a machine, and no one cared enough to clean it up, or even check where it came from.”
The problem wasn’t the tools used. The problem was the judgment. Someone thought this was good enough to send to a client and pass off as “original artwork” on their site.
If we’d used it, people would’ve noticed, and not in a good way. Our client’s brand would’ve taken a hit, and not just a perceptual one, at a moment that actually mattered. They’re a startup. They’re bootstrapping. Every pixel, every decision, every asset counts. When someone sends them something phoned-in, it’s not just lazy, it’s disrespectful. They’re paying for design that reflects who they are and where they’re going. This would’ve undercut all of it before they even launched.
And for us? We risk looking complicit. Like we didn’t catch it, or worse, like we didn’t care either.
And this wasn’t a one-off.
Lazy lowers the bar for everyone. It’s contagious.
I’ve seen people submit conference talk proposals that were clearly ChatGPT-written: bad formatting, generic sentences, no personalization. I’ve seen client briefs so thin and AI-scraped they couldn’t hold their own weight.
Sloppy isn’t harmless, it blocks everyone downstream from doing their job well. It doesn’t just waste time. It erodes trust.
If you’ve ever watched an episode of Inspector Gadget, you know the drill: lots of noise, lots of tech, but at the end of the day, it was Penny and Brain doing the real work. That’s what AI is right now. The spectacle can be impressive, but the outcomes still depend on who’s guiding the machine.
Recently, Patrick Neeman and I gave a talk at UXPA Boston focused squarely on how the industry should think about AI as a tool, especially at a time when it’s grappling with the speed and scale of change.
This wasn’t an academic panel. It was a room full of seasoned pros, veteran UXers who understand that our field is in a moment of real crisis. That’s why we felt the need to give that talk.
To drive the point home, we built our entire deck using AI-generated images. Every slide. And we told the audience upfront: we left the errors in on purpose. Spelling mistakes, weird fingers, inconsistent shadows, the whole parade of AI quirks.
The point wasn’t to dunk on the tech. It was to demonstrate that you can’t just let the machine drive and expect a perfect outcome. You still need a human behind the wheel.
The day after UXPA Boston, Patrick and I were on the other side of the country speaking in Seattle at the University of Washington for the Women in UX conference. This audience was different. These were people just entering the industry, full of excitement, optimism, and thoughtful questions.
Even in their early careers, they could sense the ethical tension. They weren’t asking how to avoid AI; they were asking how to use it well. They weren’t asking, “Can I use AI to replace my job?” They were asking, “How do I use AI without cutting corners?”
They wanted to know where the line was.
And here’s the truth: That line isn’t fixed. It’s contextual. But it exists. And the more we use AI in our work, the more we have to pay attention to when we’re using it to help and when we’re using it to hide.
Tools don’t set standards. People do.
We often hear the phrase “AI is just a tool.” And that’s true. But it’s an incomplete truth. A hammer is a tool, too. You can use it to build a house or to smash a window.
The ethics don’t live in the object. They live in the intent and in the craft.
Most of the articles out there focus on the big, theoretical stuff: privacy, surveillance, explainability.
Darrell Estabrook and Gytis Markevicius have created an ethics-in-design primer that does a solid job of surfacing principles like fairness, user trust, and human-centeredness. That said, its guidance remains high-level. It talks about ‘human oversight’ but doesn’t define what that looks like when someone uses ChatGPT to write a case study and ships it without revision.
Meanwhile, Nandkumar Bhujbal argues that AI systems must preserve user autonomy, which is a fair point. But autonomy is just as compromised by poor design decisions as it is by surveillance. When we let AI write the error messages or confirmation modals without review, we’re not protecting autonomy, we’re delegating it.
This leads us back to the human factor. Dennis Dickson raises a warning about ethical ambiguity as AI blends into our workflows. That’s the crux: AI isn’t visible to the end user anymore. So the ethics of how we use it get buried and designers stop asking who’s accountable.
Jay Eckert underscores the need for designers to understand their tools’ boundaries. He warns that relying on AI without critical oversight will lead to “the erosion of design as a thoughtful, human practice.” That’s the kind of phrase we should pin above our monitors.
It’s great to see people raising their voices here and, frankly, laying the groundwork.
But no one has gone far enough to say what needs to be said: that pushing unpolished work live because ‘the AI wrote it’ is a choice. And a bad one.
We also need to talk about something more immediate: the ethics of effort.
Because if you’re a UX designer using AI to generate something and then pushing it live without vetting it, refining it, or questioning it, that’s not ethical. That’s lazy. And laziness at scale is just as damaging as malice. Sometimes more so.
The ethics of effort
Caiden Laubach writes about how ethical AI involves using tools built with integrity, ensuring outputs respect intellectual property rights, and safeguarding user data. He also underscores the necessity of obtaining proper permissions and maintaining transparency in AI-generated content to uphold brand trust. But here’s the thing:
You don’t have to be trying to mislead to end up misleading someone. All you have to do is care less than you should.
At the Women in UX event, the thing that struck me most wasn’t fear, it was optimism. People wanted to learn how to use AI well. They weren’t resisting the tool. They were resisting the temptation to let the tool lower their standards. They knew that AI could be brilliant at first drafts. They just didn’t want it to be their last.
That’s the line.
When you use AI, use it with discipline
That’s why Patrick’s book, UXGPT, is so useful. It’s not a collection of hacks. It’s a method, a way to use AI to support clarity, not replace it. It gives structure to the chaos, and it centers the human. It helps people ask better questions, which is the entire point of good UX.
We need more frameworks like that. Because as AI continues to shape our workflows, our challenge isn’t “How do we stay ahead of the robots?” It’s “How do we stay accountable to our users, our teams, and ourselves?”
The People + AI Guidebook from Google reinforces this with its human-in-the-loop model: AI is there to assist, not decide. And yet, we’re seeing more and more tools automate decisions people care deeply about. Alan Cooper called this out as a red flag in his design philosophy: “Don’t automate decisions people care about.” If we’re not applying that to our AI-infused workflows, we’re not doing UX, we’re doing automation theater.
If we want to be trusted, we have to care
This matters not just because of quality, but because of trust. As Sharath Jegan puts it, “By prioritizing transparency and explainability, designers promote trust, empower users, and uphold ethical principles in AI-driven UX design.” If we want to be trusted, we need to care, not just about privacy, but about polish.
So yes, keep talking about privacy. Keep demanding explainability. But also look at the work you’re shipping and ask:
- Did I do the hard part?
- Did I put this through a critical lens?
- Did I protect the user from my own shortcuts?
The real risk
AI can help us move faster. But it can also help us slide. That’s the ethical conversation we need to be having, not just in policy documents, but in pitch decks, design reviews, and production pushes.
Because the real risk isn’t that AI will take our jobs. It’s that we’ll let it take our standards.
And if that sounds like a Black Mirror episode, that’s because it is: Season 6, Episode 1, to be exact. But you could just as easily argue this is The Jetsons with fewer flying cars and more UX debt. Or HAL 9000 from 2001: A Space Odyssey, except instead of locking us out of an airlock, the algorithm just auto-publishes a homepage headline that makes no sense.
We’re not building dystopia on purpose. But we are inching toward it with every unreviewed, unrefined, uncritical “good enough” that makes it into production.
So let’s hold the line
We still have a choice. We can decide where the line is. And we can decide to hold it.
More importantly, we can model what it looks like to use AI the right way. We can show younger designers, skeptical stakeholders, and overwhelmed teams that AI doesn’t mean compromise, it means discipline. It means knowing when to hit “Generate” and when to say, “Not good enough yet.” It means raising the standard, not just because the user deserves it, but because we do.
If we do this well, AI doesn’t replace UX. It reinforces it.
When it comes down to it, design still matters. Judgment still matters. Craft still matters. And that is the hill I’ll die on, even if it was landscaped by Midjourney.
About Dan Maccarone
Dan Maccarone is a UX strategist, product designer, and co-founder of Charming Robot. He’s also the author of The Barstool MBA, an Audible Original on real-world product strategy. You can connect with Dan on LinkedIn and Twitter (he’s not calling it X).