One of the most honest takes I’ve seen on AI in design tools wasn’t a formal article—it was a LinkedIn reflection from a UX researcher experimenting with Figma Make. They described how AI‑generated prototypes could turn research share‑outs into live, interactive workshops instead of static decks, shrinking the gap between insight and design iteration. But in the same breath, they admitted it often took “hours of iterative, granular prompting” to get ideas to come to life.
That phrase—hours of iterative, granular prompting—hit me harder than any polished product announcement. It captures a hidden UX cost I’ve started calling the prompt tax: the cognitive and emotional overhead of trying to wrangle a conversational or generative system into doing what you mean, not just what you say. On paper, the system is “natural language” and “intuitive.” In practice, you spend a lot of time reverse‑engineering how to talk to it.
From a UX perspective, this is a familiar pattern. We’ve seen it with early voice assistants (“sorry, I didn’t catch that”) and with chatbots that required oddly specific phrasing. The twist here is that the stakes are higher: these are tools for expert work, where precision, repeatability, and explainability matter. When a researcher says they see the potential but also feel the grind of prompting, that’s a clear signal the interaction model needs more than a capable language model underneath.
For my thesis, this post validates a core hunch: conversational interfaces in design tools can’t stand alone. They need supporting structures that reduce the prompt tax—surfacing relevant controls at the right moment, remembering a user’s personal vocabulary, or letting people “draw” corrections instead of re‑prompting. The goal isn’t fewer prompts; it’s fewer frustrating prompts.
Relevant link: https://www.linkedin.com/posts/nicholas-santer-7b7055127_config2025-make-uxresearch-activity-7356748034061316099-wUNW