The Interview That No Longer Makes Sense
Design interviews still test tool fluency. But when AI handles most of the execution, what we should actually be evaluating has shifted entirely.

I remember sitting across from a creative director a few years ago, walking through a prototype I'd spent two weeks refining. He nodded at the interactions, asked about my component library, wanted to know my handoff process. I had answers for all of it. And I got the job.
That interview made sense. The craft was the bottleneck. How fluently you moved through Figma, Miro, or whatever your stack was — that was a meaningful signal. Mastery of the tool translated directly into the quality of the output.
That logic is breaking.
Not slowly or gradually, but quickly and irreversibly. An AI-connected Figma, paired with the right MCP server, can now generate prototypes, produce design variations, and populate entire component libraries in a fraction of the time a skilled human hand once needed. Deliverables that once required weeks of craft can be initiated in an afternoon.
Which leaves a question that design hiring hasn't yet confronted seriously: if the output is no longer the differentiator, what are we actually interviewing for?
What the Tool Can't Hold
There's something I've noticed after eleven years of moving through the layers of this work — from business analyst writing specifications, to UX designer conducting user research, to no-code builder closing the gap between design and live product, to where I am now, directing AI to build things I could only have imagined before.
The tools changed at every phase. What didn't change was the thinking underneath them.
A Figma file, no matter how AI-assisted, doesn't know why a certain flow matters. It doesn't carry the memory of a user research session where someone paused for twelve seconds before clicking the wrong button. It doesn't feel the moment when a technically correct interface is experientially wrong — when the logic works and the human doesn't.
That context lives in the designer. It always has. But we kept measuring the output instead of the judgment behind it.
There's a version of design thinking that can be prompted. You can ask an AI to "design a user-friendly onboarding flow for a B2B SaaS product" and receive something functional. What you can't prompt is the judgment about which problem is worth solving in the first place. You can't outsource the understanding that comes from sitting with users in their actual environments, watching them navigate around solutions you were sure were obvious.
That judgment is slower to acquire. It doesn't show up cleanly in a portfolio.
What the Interview Should Be Testing
I think about this every time I hear someone ask whether designers need to "learn to code" now that AI writes the code. It's the same category error we've always made: focusing on the execution layer when the differentiating layer is upstream.
The designers I find most compelling right now are not the fastest executors. They're the ones who ask the question before the question. Who can step into an unfamiliar problem space and quickly build a mental model of what the user actually needs, not just what they said they needed. Who know when to stop building, and why.
That's not a Figma skill. It's a thinking skill. And design interviews, for the most part, still aren't built to test it.
What would a more honest interview look like? I genuinely don't know the full answer. But I suspect it would involve less portfolio presentation and more unstructured problem framing. Less "show me what you made" and more "walk me through how you decided what to make." It would reward uncertainty handled well, not certainty performed confidently.
The Deeper Shift
There's something larger underneath this, beyond hiring.
If the tools can handle most of the execution, then the human role in design is becoming less about craft and more about architecture. About holding the shape of the problem and the shape of the user simultaneously. About directing, not just doing.
I've been living in this shift for a while now. I call myself a Digital Product Architect partly because "designer" no longer captures what actually happens when I work. I'm not the one building every pixel. I'm the one ensuring the pixels add up to something that genuinely serves someone.
That role requires a different kind of intelligence. Not faster execution. Not better tool fluency. Clearer thinking, better questions, and a deep enough understanding of users that no AI can substitute for it.
The interview hasn't caught up. Maybe it will. Maybe the question it needs to ask is simpler than we think: not "what can you build?" but "what do you understand?"
That might be the only question that still matters.