Design is shifting from drawing interfaces to defining systems. Tools that cannot operate at that level will fall behind.
When Anthropic released Claude Design, a tool that can generate full prototypes, marketing assets, and UI from simple prompts, Figma’s stock dropped sharply within days, signalling how seriously investors take the threat.
This is particularly striking given how strong Figma’s starting position was. The company went public in July 2025 at $33 per share, surged more than 200% on day one, and briefly reached valuations above $60 billion, making it one of the most successful tech IPOs in recent years.
Within months, that momentum reversed. The stock fell sharply from its peak, dropping roughly 70–80% from post-IPO highs, with continued downward pressure driven largely by uncertainty around how AI will reshape design workflows.

This sets the context for everything that follows. Claude Design, and more broadly Claude Code, is not just another competitor. It signals a shift away from tools that help designers draw interfaces towards systems that generate them. Claude is not fully there yet, but it makes the direction of travel obvious.
What makes this more notable is that Anthropic was not an unknown outsider. Figma had already worked closely with them, and the overlap between the two ecosystems was clear. That proximity amplifies the signal. This is not disruption from the edge, but from adjacent territory, from a partner working on the same problem with a fundamentally different model.
The opportunity remains. Figma has distribution, an ecosystem, and deep process lock-in across product teams. If it moves decisively towards AI-native, system-level workflows, it could still define this next era. If it continues to layer AI onto its existing product through its MCP server and Code Connect, it risks following the same path as Sketch as teams look for more efficient ways to manage complex, multi-disciplinary workflows.
A native design system
Much of the inefficiency in design sits at the starting point. Design systems exist, but teams often maintain them poorly, which leads to duplication and drift.
The result is most visible at the start of new work. Despite years of progress, teams still recreate the same primitives - colour tokens, typography scales, spacing rules, and base components - across projects, companies, and industries, even though most of these foundations are functionally identical.
Figma’s emphasis on flexibility treats this repetition as a feature rather than a flaw. In practice, it introduces friction and inconsistency. Many experienced designers avoid pre-built UI kits because they often lack structure, consistency, and reliable naming.
A more useful direction would introduce an integrated system that provides a strong starting point, allowing designers to refine rather than rebuild. Figma could, for example, partner with Untitled UI, giving the system access to a well-structured, tokenised foundation that an embedded agent understands. This would remove the need to construct systems through prompts, which remains slower and more error-prone than refining an established, well-structured foundation.
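As a sketch of what that foundation could look like: the token shapes below are purely illustrative, not an actual Figma or Untitled UI format, but they show the kind of structured, machine-readable starting point an embedded agent could reason over rather than reconstruct.

```typescript
// Hypothetical sketch of a tokenised foundation an embedded agent could read.
// Names and structure are illustrative, not a real Figma or Untitled UI API.

type ColorToken = { value: string; role: "background" | "text" | "border" | "accent" };

interface DesignFoundation {
  colors: Record<string, ColorToken>;
  typography: Record<string, { fontFamily: string; fontSize: number; lineHeight: number }>;
  spacing: Record<string, number>; // one shared scale, reused everywhere
}

const foundation: DesignFoundation = {
  colors: {
    "surface/primary": { value: "#FFFFFF", role: "background" },
    "text/primary": { value: "#101828", role: "text" },
    "accent/primary": { value: "#7F56D9", role: "accent" },
  },
  typography: {
    "heading/lg": { fontFamily: "Inter", fontSize: 30, lineHeight: 38 },
    "body/md": { fontFamily: "Inter", fontSize: 16, lineHeight: 24 },
  },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24, xl: 32 },
};
```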
An integrated design agent
Maintaining system quality today depends heavily on manual effort. Designers enforce consistency, refactor components, and catch issues, often under time pressure. Systems degrade over time as a result.
An embedded design agent could reduce that burden by operating continuously within the system. It could detect duplication, flag inconsistencies, and suggest structural changes as patterns emerge.
A typical interaction might look like:
- A designer creates multiple variations of a component
- The agent detects overlap and suggests consolidating into variants
- The designer adjusts spacing
- The agent prompts whether to apply the change system-wide
- A colour change introduces a contrast issue
- The agent flags it and suggests alternatives
It would not just react to issues, but learn from the system itself. As patterns stabilise, it could reinforce them. As new patterns emerge, it could help formalise them. Over time, the system would become more structured without requiring dedicated clean-up passes.
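One of these checks is easy to make concrete. The sketch below implements the standard WCAG relative-luminance and contrast-ratio formulas, the sort of rule an embedded agent could run automatically on every colour change; the surrounding code is illustrative, not a real Figma API.

```typescript
// Illustrative sketch of one check such an agent might run continuously:
// a WCAG contrast test between a text colour and its background.

function srgbToLinear(channel: number): number {
  const c = channel / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * srgbToLinear(r) + 0.7152 * srgbToLinear(g) + 0.0722 * srgbToLinear(b);
}

function contrastRatio(fg: string, bg: string): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Flag any text/background pairing below the WCAG AA threshold of 4.5:1.
if (contrastRatio("#98A2B3", "#FFFFFF") < 4.5) {
  console.warn("Contrast issue: suggest a darker text token");
}
```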
Feeding research directly into the design system
In most teams, research happens early and gradually loses influence. Teams gather, synthesise, and document insights, but once design begins, those insights become static artefacts. Designers rely on memory, and over time the product drifts from the original understanding.
A better model would allow designers to link research sources directly into Figma. Interviews, usability findings, analytics, and support logs could be parsed into themes such as trust, confusion, or friction, giving the system a clearer understanding of the problem.
As work progresses, that context could surface in place. Microcopy would reflect real user language. Flows that resemble known drop-off patterns could be flagged. Layouts could draw on patterns that have already performed well.
AI would not just surface information, but interpret it. It could identify patterns across interviews, cluster recurring pain points, and connect those directly to specific parts of the interface. Instead of manually translating research into design decisions, that connection would persist as the product evolves.
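A minimal sketch of what a linked insight might look like, assuming a hypothetical schema rather than any existing Figma feature:

```typescript
// Hypothetical shape for research linked into the design file.
// Theme names, node IDs, and fields are invented for illustration.

type Theme = "trust" | "confusion" | "friction";

interface ResearchInsight {
  source: "interview" | "usability-test" | "analytics" | "support-log";
  quote: string;            // the raw user language behind the insight
  themes: Theme[];
  linkedNodeIds: string[];  // design frames or components this applies to
}

const insights: ResearchInsight[] = [
  {
    source: "interview",
    quote: "I wasn't sure if my payment actually went through.",
    themes: ["trust", "confusion"],
    linkedNodeIds: ["frame:checkout-confirmation"],
  },
];

// An agent could surface every trust-related insight while a designer
// works on the linked confirmation screen.
const trustNotes = insights.filter((i) => i.themes.includes("trust"));
```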
Product analytics embedded in the design
The gap between design and real-world performance remains fragmented. Designers move between Figma and analytics tools, interpret dashboards, and map insights back to UI. This introduces delay and abstraction.
A more integrated model would connect Figma directly to analytics platforms and map production screens to design frames. Each screen could become a live layer of insight. A checkout page could show drop-off, a form field could reveal validation issues, and a filter could expose abandonment patterns.
The designer could identify friction directly on the interface, adjust structure or interaction, and observe how those changes perform over time in the same context. AI could strengthen this loop by highlighting underperforming areas, suggesting improvements, and connecting UI decisions to measurable outcomes.
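To illustrate, a hypothetical mapping between a production route, its design frame, and live metrics might look like this; the fields and threshold are assumptions, not a real integration:

```typescript
// Illustrative sketch: mapping production screens to design frames and
// attaching live metrics. Names and fields are assumptions, not a real API.

interface ScreenMetrics {
  productionRoute: string;   // e.g. "/checkout/payment"
  designFrameId: string;     // the design frame it corresponds to
  views: number;
  dropOffRate: number;       // share of sessions abandoning at this screen
  topValidationErrors: string[];
}

const checkout: ScreenMetrics = {
  productionRoute: "/checkout/payment",
  designFrameId: "frame:checkout-payment",
  views: 48210,
  dropOffRate: 0.31,
  topValidationErrors: ["card_number_invalid", "expiry_date_missing"],
};

// A simple rule an agent might apply: flag frames whose drop-off
// exceeds a threshold so the designer sees the problem in context.
if (checkout.dropOffRate > 0.25) {
  console.warn(`High drop-off on ${checkout.designFrameId}: review form friction`);
}
```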
A real interaction model
Figma’s interaction model breaks down because it treats behaviour as links between frames, while real interactions are stateful and conditional. Designers simulate logic by duplicating screens and wiring them together. This approach works initially, but becomes difficult to maintain as complexity grows.
A more robust model would treat interaction as structured. Designers could define states, transitions, and conditions directly at the component and flow level. A button would become a set of states responding to input and system feedback. A form would become a system reacting to validation, submission, and failure.
AI could support this by identifying gaps in behaviour. If a success state exists without a failure path, or a loading state lacks feedback, the system could surface those gaps early.
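A minimal sketch of that structured model, borrowing the familiar statechart idea of states, events, and transitions; the shape is hypothetical, not an existing Figma format:

```typescript
// Sketch of interaction as structured state rather than linked frames.

type FormState = "idle" | "validating" | "submitting" | "success" | "failure";

interface Transition { on: string; to: FormState }

const formMachine: Record<FormState, Transition[]> = {
  idle:       [{ on: "SUBMIT", to: "validating" }],
  validating: [{ on: "VALID", to: "submitting" }, { on: "INVALID", to: "idle" }],
  submitting: [{ on: "RESOLVED", to: "success" }, { on: "REJECTED", to: "failure" }],
  success:    [],
  failure:    [{ on: "RETRY", to: "idle" }],
};

// The kind of gap check described above: every reachable state should
// either be terminal by design or offer a way out.
for (const [state, transitions] of Object.entries(formMachine)) {
  if (transitions.length === 0 && state !== "success") {
    console.warn(`State "${state}" has no outgoing transition`);
  }
}
```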
Logic and constraints as part of the system
Business logic remains disconnected from design. Validation rules, eligibility conditions, pricing structures, and regulatory constraints live in documents, tickets, or code. Designers reference them, but rarely work with them directly. As a result, the interface and logic drift apart.
A better model would embed logic directly into the system. Designers could define conditions alongside states and flows. A form field would include validation rules. A payment flow would include real success and failure conditions.
For example:
- A user attempts a transaction above a limit
- The system checks eligibility
- The flow branches into a restriction state
- Alternative actions are presented
AI could help maintain alignment by identifying missing conditions or inconsistencies. If a flow handles success but not failure, or validation rules conflict, those issues could surface early.
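A rough sketch of how such logic and the accompanying check might be expressed, with the limit, state names, and functions invented for illustration:

```typescript
// Illustrative sketch of business logic attached to a flow definition.
// The limit, states, and function names are assumptions for the example.

const DAILY_TRANSFER_LIMIT = 10_000;

type FlowOutcome =
  | { state: "confirm" }
  | { state: "restricted"; alternatives: string[] };

function routeTransfer(amount: number, userIsVerified: boolean): FlowOutcome {
  // The flow branches into a restriction state, as in the list above.
  if (amount > DAILY_TRANSFER_LIMIT && !userIsVerified) {
    return {
      state: "restricted",
      alternatives: ["verify-identity", "split-transfer", "contact-support"],
    };
  }
  return { state: "confirm" };
}

const outcome = routeTransfer(12_500, false); // -> restricted, with alternatives

// A lint-style agent check: every branch the logic can produce should
// have a corresponding designed state in the file.
const designedStates = new Set(["confirm"]); // "restricted" is missing
for (const required of ["confirm", "restricted"]) {
  if (!designedStates.has(required)) {
    console.warn(`Flow reaches "${required}" but no screen is designed for it`);
  }
}
```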
Motion as a system
Figma’s current approach limits motion to basic transitions, which encourages shallow interaction design. More complex behaviour often moves into external tools, fragmenting the workflow.
A better model would integrate motion directly into the system. Designers could define timelines, keyframes, and sequencing alongside components, allowing motion to reflect real behaviour.
AI could assist by generating a starting point based on intent. Instead of building animations from scratch, designers could refine timing, easing, and sequencing suggested by the system. As patterns evolve, inconsistencies could be identified and aligned.
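As an illustration, motion defined as shared tokens and composable timelines could look something like this; the token names and structure are assumptions, not an existing format:

```typescript
// Hypothetical motion tokens defined alongside components, so timing
// and easing become system-level decisions rather than per-screen ones.

interface MotionToken {
  durationMs: number;
  easing: [number, number, number, number]; // cubic-bezier control points
}

const motion: Record<string, MotionToken> = {
  "enter/default": { durationMs: 200, easing: [0.0, 0.0, 0.2, 1.0] },
  "exit/default":  { durationMs: 150, easing: [0.4, 0.0, 1.0, 1.0] },
  "emphasis":      { durationMs: 300, easing: [0.2, 0.0, 0.0, 1.0] },
};

// A sequenced timeline for a dialog, composed from the shared tokens.
const dialogOpen = [
  { target: "overlay", property: "opacity", from: 0, to: 1, token: "enter/default" },
  { target: "panel", property: "translateY", from: 16, to: 0, token: "enter/default", delayMs: 50 },
];
```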
From system to build: a unified delivery layer
The design-dev gap persists because outputs remain fragmented across tools. Screens, flows, documentation, and implementation exist separately, introducing interpretation and drift.
A unified model would collapse these into a single system. Components could map to production code, flows could be inferred, and behaviour could be defined structurally.
If a designer removed a step in a checkout flow, simplified validation, or improved error handling, those changes could propagate automatically. Flow diagrams, documentation, and implementation would remain aligned.
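One hypothetical shape for that mapping is a delivery manifest, sketched below with invented paths and identifiers:

```typescript
// Sketch of a delivery manifest tying design components to production
// code. Paths and identifiers are invented for illustration.

interface ComponentBinding {
  designComponentId: string;  // the component in the design file
  sourcePath: string;         // where it lives in the codebase
  props: Record<string, "string" | "boolean" | "number">;
}

const bindings: ComponentBinding[] = [
  {
    designComponentId: "component:button/primary",
    sourcePath: "src/components/Button.tsx",
    props: { label: "string", disabled: "boolean" },
  },
];

// When a designer removes a step or changes a prop, tooling could diff
// this manifest against the codebase and propose the corresponding
// change, keeping flows, documentation, and implementation aligned.
```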
This would also change collaboration. Product could define constraints in the same system. Design could shape structure. Engineering could work from something that already reflects implementation.
An all-seeing AI
In a genuinely agent-driven design environment, the system would not operate on isolated inputs. It would have access to business requirements, research, behavioural data, constraints, and logic, all connected within the same model.
That context would allow it to understand how flows connect, where constraints apply, and how decisions affect the wider product.
It could identify where logic breaks down, where flows are incomplete, and where patterns introduce friction. Missing error states or inconsistent conditions could be surfaced early.
When connected to APIs and data sources, the system could reflect real conditions rather than approximations. The gap between design and product would narrow, not through better handoff, but through alignment.
A narrow window to lead again
The market reaction to Claude Design reflects direction, not maturity.
Claude Code shows that design and build could converge into a single workflow. It is not complete, but it points clearly to where the industry is heading.
Figma still has an advantage in adoption and tooling, but its current approach to AI remains narrow. It still has a window to respond, but doing so requires embracing the full design pipeline, from research to production.
If it succeeds, it could define the next era. If it does not, it becomes one tool in a workflow that has already moved on.
