Canva has made the clearest statement yet from a major design software company about where it thinks the industry is heading.
The Australian firm is repositioning itself from a design platform with AI features to an AI platform with design tools, a formulation that chief executive Melanie Perkins repeated in an interview with Decoder host Nilay Patel.
The distinction matters more than it might first appear.
Architecture, not features
Most software companies have added AI generation tools over the past two years.
The typical pattern is a text prompt that produces an image, a slide or a draft, often as a flat output that is difficult to refine.
Canva is arguing that this approach misses the point.
Its update lets users describe what they want and has the system assemble presentations, documents and other assets by pulling content from sources such as Slack and email.
The result comes back as a standard Canva file with editable layers, not a static image or a single-shot generation.
Perkins said the system operates at a "concept" layer above existing pixel and object editors, producing initial drafts that teams can then refine using the familiar Canva interface.
The company describes this as the output of a design foundation model that works across presentations, whiteboards, documents and videos.
The key claim is that the AI produces structured, layered files rather than flattened outputs.
If that holds up in practice, it addresses one of the persistent frustrations with AI-generated content: the gap between an impressive first draft and something a team can actually use.
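Canva has not published its file format, so the distinction can only be sketched in the abstract. The types below are invented for illustration: they show why a generation that keeps individual elements addressable is more useful to a team than one that returns only pixels.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One editable element in a generated design (hypothetical)."""
    kind: str      # e.g. "text", "image", "shape"
    content: str   # text string or asset reference
    x: int = 0
    y: int = 0

@dataclass
class LayeredDesign:
    layers: list[Layer] = field(default_factory=list)

    def edit_text(self, index: int, new_content: str) -> None:
        # Elements remain individually addressable after generation.
        self.layers[index].content = new_content

# A flattened generation is just pixels: to change one headline,
# the whole image has to be regenerated.
flat_output = "slide.png"

# A structured generation keeps each element editable in place.
draft = LayeredDesign(layers=[
    Layer("text", "Q3 Results", x=40, y=20),
    Layer("image", "chart.png", x=40, y=120),
])
draft.edit_text(0, "Q3 Results (final)")
```

The practical difference is the second half of the workflow: with a layered file, refining a draft is a small edit; with a flat one, it is another round trip through the generator.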
A decade of format investment
Perkins emphasised that the capability rests on a decade of engineering investment in an interoperable design format.
Hundreds of people contributed to the project, she said, and the system draws on more than 100 million stock photos and illustrations.
She framed AI-generated design as a starting point for human refinement rather than a finished product, saying "to design is to mock an idea."
The approach enables what Canva calls conversational editing, an iterative, agent-like workflow where users dictate ideas on mobile and refine results through the editor.
That positions Canva closer to the agentic AI pattern gaining traction across enterprise software than to the simple generation tools that defined the first wave of AI design features.
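The agentic pattern described above can be sketched in miniature. The operation names and draft structure here are invented, not Canva's: the point is only that each conversational turn maps to a small structured edit against the same file, rather than a fresh one-shot generation.

```python
def apply_instruction(draft: dict, instruction: tuple) -> dict:
    """Apply one parsed user instruction to a structured draft (hypothetical ops)."""
    op, slide_index, value = instruction
    if op == "retitle":
        draft["slides"][slide_index]["title"] = value
    elif op == "swap_image":
        draft["slides"][slide_index]["image"] = value
    return draft

# An AI-generated first draft, kept as editable structure.
draft = {"slides": [{"title": "Roadmap", "image": "old.png"}]}

# Agent-like loop: successive user turns refine the same file.
turns = [("retitle", 0, "2025 Roadmap"), ("swap_image", 0, "new.png")]
for turn in turns:
    draft = apply_instruction(draft, turn)
```

Each iteration preserves everything the user has not asked to change, which is what separates this workflow from regenerating a flat output on every prompt.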
The enterprise play
The new features are in beta, but Perkins said she is confident enterprise adoption will accelerate as companies seek automation for routine tasks such as building presentations.
That confidence reflects a broader trend.
Enterprise buyers are increasingly asking not whether AI can generate content but whether it can generate content their teams can work with.
Editable, layered output that plugs into existing workflows is a more compelling answer than impressive but inflexible one-shot generation.
The risk for Canva is execution.
Building an AI system that reliably produces structured, multi-format files from natural language input is substantially harder than generating a single image or slide.
The company is also competing with Adobe, Microsoft and a growing field of AI-native design startups, all chasing the same enterprise budget.
But the strategic logic is sound.
The companies that own the format layer, the structure that makes AI output usable rather than disposable, are better positioned than those offering generation alone.
Canva is placing itself on the right side of that divide.
Whether the technology delivers on the promise is the question that beta testing will answer.
The recap
- Canva shifts to an AI platform with integrated design tools.
- New AI can pull data from Slack and email.
- Features are in beta; company expects increased enterprise adoption.