Future of Interfaces
July 1, 2025
Grids, buttons, forms, dropdowns. We've been dragging these patterns around for decades — polishing, refining, porting them from desktop to mobile. We invented the mouse to make interacting with the UI more user-friendly, opening up new possibilities for interfaces. Then touchscreens took over, letting anyone control the interface with a stylus or a finger on the device in their pocket.
But something weird is happening right now.
The interface isn’t just responding to us anymore — it’s starting to think. Is this the moment when we discover new, even more user-centric ways to interact with the UI?
* * *
A very short history of UI
A user interface (UI) is the space where interactions between a user and a digital system occur, enabling input and providing output.
We started with only keyboards. Interfaces were terminals. Text in, text out. You had to be a hacker to use a computer. To simplify, we created GUIs: buttons, windows, icons. Suddenly, computing was visual. You could point and click instead of typing memorized commands. We had to teach users to use something called a mouse.
In 1985, Microsoft included Paint in Windows 1.0, helping users build muscle memory with the mouse in a fun, creative way.
The web brought links. CSS brought style. Then JavaScript brought interactivity. Responsive design made things fluid, UI frameworks simplified development, and smartphones shifted UI to a mobile-first paradigm.
But the process was always similar:
- We design interfaces.
- We build them.
- Users interact.
- We analyze & improve.
That cycle has stayed pretty stable.
Until now?
Where we are now
Right now, most of us are still writing UIs by hand.
We build design systems. We stitch components together, developing interfaces for watches, phones, tablets, desktops, TVs, cars. We test every screen state manually (if we’re being honest). And AI already helps us do this faster, filling in the blanks.
Most interfaces are still the result of designer-developer collaboration. The real shift is this: AI doesn’t just fill in the blanks anymore. It can generate them.
We’re about to see UIs that:
- Personalize themselves based on user context and preferences
- Generate layouts and components on the fly
- Adapt tone, complexity, and layout based on who’s using it
- Respond to voice, images, gestures, not just clicks and keyboards
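To make the "adapt tone, complexity, and layout" idea in the list above a bit more concrete, here is a toy sketch of context-driven personalization: a function that derives a few UI parameters from whatever the system knows about the user. The UserContext shape and the heuristics are invented for illustration, not taken from any real product.

```typescript
// Hypothetical context-to-UI mapping: the same screen, tuned per user.

interface UserContext {
  expertise: "beginner" | "intermediate" | "expert";
  device: "phone" | "desktop";
  prefersConcise: boolean;
}

interface UIParams {
  showAdvancedControls: boolean;
  columns: 1 | 2 | 3;
  tone: "plain" | "technical";
}

function deriveUIParams(ctx: UserContext): UIParams {
  return {
    // Surface expert-only controls to experts instead of burying them in menus for everyone.
    showAdvancedControls: ctx.expertise === "expert",
    // Phones collapse to a single column; experts on desktop get a denser layout.
    columns: ctx.device === "phone" ? 1 : ctx.expertise === "expert" ? 3 : 2,
    // Copy tone follows both expertise and stated preference.
    tone: ctx.expertise === "beginner" || ctx.prefersConcise ? "plain" : "technical",
  };
}

console.log(deriveUIParams({ expertise: "beginner", device: "phone", prefersConcise: true }));
// -> { showAdvancedControls: false, columns: 1, tone: "plain" }
```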
It’s not science fiction anymore. We are reinventing how we interact with interfaces to make them as user-centric as possible.
But how?
Ephemeral Interfaces
Ephemeral Interfaces are those where interaction surfaces aren’t fixed entry points anymore, but temporary structures that materialize around intent and context, dissolving once their task is complete and leaving behind the distilled learnings of the exchange.
Imagine this: instead of opening a fixed app layout, you tell your device what you want to do. You say, or even think:
I want to summarize this document and compare it to yesterday’s.
The UI builds itself — it pulls the right data, shows you a chart, buttons, and filters, and generates a summary. The interface is dynamic — generated from your intent, context, and past behavior. This may be where we’re heading: from User Interfaces to User-Generated Interfaces.
Built not by devs, but by models. Shaped not by layouts, but by goals and intents.
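One way to picture this concretely: instead of asking a model for raw markup, the system asks it for a constrained, typed description of the interface, validates that description, and only then renders it. The sketch below is a hypothetical illustration rather than any particular product's API; the component vocabulary and the stubbed model call are invented for the example.

```typescript
// A minimal "ephemeral interface" pipeline: intent in, validated UI spec out.
// The model call is a stub standing in for a real LLM request.

type UIComponent =
  | { kind: "summary"; text: string }
  | { kind: "chart"; series: number[]; label: string }
  | { kind: "button"; label: string; action: "export" | "compare" }
  | { kind: "filter"; field: string; options: string[] };

interface UISpec {
  title: string;
  components: UIComponent[];
}

// Stand-in for a real LLM call that would return JSON shaped like UISpec.
async function modelGenerateSpec(intent: string): Promise<unknown> {
  return {
    title: `Results for: ${intent}`,
    components: [
      { kind: "summary", text: "Today's document is 20% shorter and drops section 3." },
      { kind: "chart", series: [1240, 980], label: "Word count: yesterday vs. today" },
      { kind: "button", label: "Export comparison", action: "export" },
    ],
  };
}

// Never trust model output blindly: keep only components the renderer understands.
function validateSpec(raw: unknown): UISpec {
  const allowed = new Set(["summary", "chart", "button", "filter"]);
  const candidate = raw as Partial<UISpec>;
  return {
    title: typeof candidate.title === "string" ? candidate.title : "Untitled",
    components: (candidate.components ?? []).filter((c) => c && allowed.has(c.kind)),
  };
}

async function buildEphemeralUI(intent: string): Promise<UISpec> {
  const raw = await modelGenerateSpec(intent);
  return validateSpec(raw); // the renderer consumes only this validated spec
}

buildEphemeralUI("summarize this document and compare it to yesterday's").then((spec) =>
  console.log(JSON.stringify(spec, null, 2))
);
```

The point is not these specific shapes: the model proposes, the system constrains, and generation stays inside a vocabulary the renderer already understands.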
What changes for us?
If you're a frontend developer or a web designer, this might sound terrifying.
Have we just prompted our way out of jobs?
Not really.
The future still needs:
- Designers who think in terms of flows and systems, not just pixels
- Engineers who build systems and tools that AI can pull from
- Design engineers who bridge the two roles above, ensure good UX and consistency, and develop primitives
I believe that in this world, we won’t stop designing. But instead of hardcoding every edge case, we’ll teach systems how to do it themselves. Less pixel-pushing. More systems thinking. We’ll define the creative direction — primitives, constraints, tokens, training data — and let the AI handle the last mile.
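In code, "defining the primitives and constraints" could look something like the sketch below: a small set of design tokens and allowed variants that any generator, human or model, has to respect. The token values, names, and clamping logic are invented for illustration.

```typescript
// Hypothetical design primitives a generative system is allowed to draw from.
// We define the vocabulary; the generator only picks and combines.

const tokens = {
  color: { primary: "#1d4ed8", surface: "#ffffff", text: "#111827" },
  spacing: { sm: 8, md: 16, lg: 24 }, // px
  radius: { sm: 4, md: 8 },
} as const;

type Density = keyof typeof tokens.spacing; // "sm" | "md" | "lg"
type ButtonVariant = "primary" | "secondary" | "ghost";

interface GenerationConstraints {
  maxComponentsPerView: number;
  allowedButtonVariants: ButtonVariant[];
  density: Density;
}

// The constraints we hand to the generator instead of hand-coding every screen.
const constraints: GenerationConstraints = {
  maxComponentsPerView: 8,
  allowedButtonVariants: ["primary", "secondary"],
  density: "md",
};

// A generated proposal is clamped back into our system before it ships.
function clampProposal(proposal: { buttons: ButtonVariant[]; componentCount: number }) {
  return {
    buttons: proposal.buttons.filter((v) => constraints.allowedButtonVariants.includes(v)),
    componentCount: Math.min(proposal.componentCount, constraints.maxComponentsPerView),
    gap: tokens.spacing[constraints.density],
  };
}

console.log(clampProposal({ buttons: ["primary", "ghost"], componentCount: 12 }));
// -> { buttons: ["primary"], componentCount: 8, gap: 16 }
```

Constraints like these are where taste and consistency live; the generator handles the last mile inside them.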
We're not ready yet
However, we’re not there yet.
AI-generated interfaces are clumsy. Accessibility is often an afterthought. AI models lack taste, hallucinate, and surprise users in bad ways.
But isn’t this how the early web felt, too? Remember table-based websites? Adobe Flash with loading bars? Internet Explorer? We tried a lot of approaches. We’re in that phase again. Early. Weird. Full of possibility.
What I’m excited about
Here’s what I want to see:
- UIs that adjust complexity based on your skill level
- Custom interfaces that adapt to your screen, your intent, your mood
- Tools like V0 that let designers and non-designers describe an idea and watch it come to life on any device, instantly
- New types of interaction, like voice, gaze, and maybe even thought?
Conclusions
The interface might no longer be a fixed thing we ship. It’s becoming a living layer between humans and systems — one that learns, adapts, and helps.
We’ll still need creative design.
We’ll still need clean code.
But the job is shifting.
We're not just builders anymore. We should learn how to effectively orchestrate AI in a way that is both productive and user-friendly.
Develop. Shape the tools. Let the UI dance.
- Julian Fleck – Ephemeral Interfaces. Speculative notes on what comes after static interfaces and the architectures that might carry us there, envisioning adaptive, on-demand systems shaped by context and user intent.
- Vercel AI SDK – Generative User Interfaces. Toolset enabling AI-native apps through Generative UI: LLM-driven streaming UI components delivered via RSC.
- Web Design Museum – Web Design History. A curated digital museum of iconic web design from 1991–2006, told through thousands of archived site screenshots.