Grids, buttons, forms, dropdowns. We've been dragging these patterns around for decades — polishing, refining, porting them from desktop to mobile, TVs, and even watches. We invented the mouse to interact with the UI in a more user-friendly way, discovering new possibilities for interfaces. Then touchscreens gained popularity, letting everyone control the interface with just a stylus or a finger on their everyday pocket device.

But something weird is happening right now.

The interface isn’t just responding to us anymore — it’s starting to think. Is this the moment when we discover other, even more user-centric ways to interact with the UI?



* * *


A very short story of UI

A user interface (UI) is the space where interactions between a user and a digital system occur, enabling input and providing output.

We started with only keyboards. Interfaces were terminals. Text in, text out. You had to be a hacker to use a computer. To simplify, we created GUIs: buttons, windows, icons. Suddenly, computing was visual. You could point and click instead of typing memorized commands.

We had to teach users to use something called a mouse.

Bus Mouse, 1986

There was nothing “natural” about the mouse. Early systems like MS-DOS were entirely keyboard-driven — fast, efficient, but opaque to non-technical users. Companies like Apple with the Macintosh, and later Microsoft with Windows, didn’t just introduce a new input device — they reshaped behavior. Through visual interfaces, icons, and affordances like buttons and menus, they trained users to point instead of remember commands. It was slower in raw speed, but easier to learn, easier to adopt, and massively more scalable. Over time, the industry optimized for approachability over efficiency — and users followed.

In 1985, Microsoft shipped Paint (later Paintbrush) with Windows 1.0, helping users build mouse muscle memory in a fun, creative way.

HTML brought links. CSS brought style. Then JavaScript brought interactivity. Responsive design made things fluid, UI frameworks simplified development, and smartphones shifted UI to a mobile-first paradigm.

But the process was always similar:

  • We design interfaces.
  • We build them.
  • Users interact.
  • We analyze & improve.

That cycle has stayed pretty stable.

Until now?

Where we are now

Right now, most of us are still writing UIs by hand.

We build design systems. We stitch components together, developing interfaces for watches, phones, tablets, desktops, TVs, cars. We test every screen state manually (if we’re being honest). And AI already helps us do this faster, filling in the blanks.

Liquid Glass

Most interfaces are still the result of designer-developer collaboration. The real shift is this: AI no longer just fills in the blanks. It can generate the interface itself.

We’re about to see UIs that: 

  • Personalize themselves based on user context and preferences
  • Generate layouts and components on the fly
  • Adapt tone, complexity, and layout based on who’s using it
  • Respond to voice, images, gestures, not just clicks and keyboards

It’s not science fiction anymore. We are reinventing how we interact with interfaces to make them as user-centric as possible.

But how?

Meet Ephemeral Interfaces

Ephemeral Interfaces are those in which interaction surfaces aren’t fixed entry points anymore, but temporary structures that materialize around intent and context, dissolving once their task is complete and leaving behind the distilled learnings of the exchange.

Generative UI scheme

Imagine this: instead of opening a fixed app layout, you tell your device what you want to do. You say, or even think:

I want to summarize this document and compare it to yesterday’s.

The UI builds itself — pulls the right data, shows you a chart, buttons, filters, and generates a summary. The interface is dynamic — generated based on your intent, context, and past behavior. This may be where we're heading: from User Interfaces to User-Generated Interfaces.

Built not by devs, but by models. Shaped not by layouts, but by goals and intents.
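
To make this more concrete, here is a minimal sketch of what a generative-UI loop could look like, assuming the model emits a structured spec and the client only knows how to render a fixed set of primitives. The UISpec shape, generateUISpec, and the hardcoded response are illustrative assumptions, not a real SDK or API.

```typescript
// Illustrative only: the UISpec shape and generateUISpec() are hypothetical.
// In a real app, a model call would produce the spec from the user's intent.

type UIComponent =
  | { kind: "summary"; text: string }
  | { kind: "chart"; label: string; series: number[] }
  | { kind: "button"; label: string; action: string };

interface UISpec {
  title: string;
  components: UIComponent[];
}

// Stub for the model call: hardcoded here so the sketch is self-contained.
async function generateUISpec(intent: string): Promise<UISpec> {
  return {
    title: `Results for: ${intent}`,
    components: [
      { kind: "summary", text: "Today's draft is shorter and drops two sections." },
      { kind: "chart", label: "Word count, yesterday vs today", series: [1240, 980] },
      { kind: "button", label: "Export comparison", action: "export" },
    ],
  };
}

// The client never sees a "screen", only primitives it already knows how to draw.
function render(spec: UISpec): string {
  const parts = spec.components.map((c) => {
    switch (c.kind) {
      case "summary":
        return `[summary] ${c.text}`;
      case "chart":
        return `[chart: ${c.label}] ${c.series.join(" vs ")}`;
      case "button":
        return `[button] ${c.label}`;
    }
  });
  return [spec.title, ...parts].join("\n");
}

generateUISpec("summarize this document and compare it to yesterday's").then((spec) =>
  console.log(render(spec))
);
```

The interesting part is the division of labor: the model decides what to show for a given intent, while the client remains a dumb renderer of vetted primitives.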

What changes for us?

If you're a developer or a designer, this might sound terrifying.
Have we just prompted our way out of jobs?

Not really.

What is disappearing is the part of the job that was never the point to begin with:

  • fighting CSS edge cases
  • fixing browser inconsistencies
  • wiring the same form validation for the hundredth time
  • chasing pixel-perfection across breakpoints
  • debugging states that only break on one device at 2 AM (o_O)

That work doesn’t vanish completely — but it stops being the bottleneck.

Ephemeral Development: a lot goes into the bin; "worth it" line is moving

The future still needs:

  • Designers who think in terms of flows and systems, not just pixels
  • Engineers who build systems and tools that AI can pull from
  • Design engineers who mediate between the two, ensure good UX and consistency, and develop primitives

But the center of gravity shifts. This is where it gets interesting. Because when you remove the constant friction of implementation, you get something back: time and cognitive space. Time to explore more directions instead of committing early; to prototype ideas instantly instead of negotiating feasibility; to iterate on interaction, not just layout; to focus on meaning, not mechanics.

Instead of asking:

Can we build this?

We start asking:

What should this feel like?

Instead of spending hours aligning elements, fixing spacing bugs, handling loading states, we can focus on crafting better flows, defining clearer user intent, designing systems that adapt intelligently. Your job becomes less about drawing the final screen…

…and more about defining:

  • rules (how UI behaves under different contexts)
  • constraints (what good looks like)
  • primitives (the building blocks AI can use)
  • tokens (design language at scale)
  • training signals (what the system should optimize for)

What engineers do: "worth it" line is moving
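
As one hedged sketch of what that could mean in practice, here is a toy "possibility space": tokens, a whitelist of primitives, and rules that any generated layout must satisfy before it ships. Every name and the validation logic are invented for illustration; this is not an existing schema or tool.

```typescript
// Illustrative only: token names, primitives, and rules are made up, not a real schema.

const tokens = {
  spacing: { sm: 8, md: 16, lg: 24 },          // design language at scale
  color: { bg: "#ffffff", accent: "#0a84ff" },
} as const;

// The building blocks the AI is allowed to compose.
type Primitive = "card" | "list" | "chart" | "form" | "button";

interface Context {
  screen: "phone" | "desktop";
  expertise: "novice" | "expert";
}

// Constraints: "what good looks like", expressed as checks a generated layout must pass.
interface Rule {
  appliesTo: (ctx: Context) => boolean;
  maxPrimitives: number;
}

const rules: Rule[] = [
  { appliesTo: (ctx) => ctx.expertise === "novice", maxPrimitives: 5 },
  { appliesTo: (ctx) => ctx.screen === "phone", maxPrimitives: 8 },
];

// A generated layout is accepted only if every active rule holds.
function validate(layout: Primitive[], ctx: Context): boolean {
  return rules.filter((r) => r.appliesTo(ctx)).every((r) => layout.length <= r.maxPrimitives);
}

console.log(tokens.spacing.md); // 16
console.log(validate(["card", "chart", "button"], { screen: "phone", expertise: "novice" })); // true
```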

In other words, you stop designing screens. You start designing possibility spaces. And creativity actually expands. You can explore ideas that were previously “too expensive” to build. Weird interactions. Niche flows. Hyper-personalized experiences. Things that would never make it into a roadmap — because now they don’t need to.

I believe that in this world, we won’t stop designing. But instead of hardcoding every edge case, we’ll teach systems how to do it themselves. Less pixel-pushing. More systems thinking. We’ll define creativity, primitives, constraints, tokens, and training data — and let the AI handle the last mile.

We're not ready yet

However, we’re not there yet.

AI-generated interfaces are clumsy. Accessibility is often an afterthought. AI models lack taste and can hallucinate, surprising users in bad ways. If you generate images with something like Gemini, you wait a couple of minutes until it's done. Latency is probably the biggest problem here: it's too early to expect AI to generate UI on the fly in a way users will enjoy. But one day we will have enough computing power, and mature enough frameworks, to build something like this.

For me, it actually means that we’re in that phase again. Early. Weird. Full of possibility.

What I’m excited about

Codex Windows XP: just a beautiful concept

People don’t use interfaces the way we expect. They use them the way they understand them. Watch someone older on Android. Instead of going “home” with ●, they’ll hammer the back button ◄ until the app disappears. Not because it’s wrong — because it matches their mental model: go back = undo everything. The system fights them, so they brute-force it.

Now look at younger users. One stream isn’t enough. Video + subtitles + gameplay in the corner + notifications popping in. It’s not a distraction — it’s baseline stimulation.

UIs that adjust complexity based on your skill level

Most apps are frozen at one level of complexity. Beginners get overwhelmed. Power users get slowed down.

We already see hints of adaptive complexity:

  • Figma hides advanced controls until you need them, but still exposes everything if you know where to look
  • Notion starts as a blank page, then unfolds into databases, relations, and formulas

But this is still manual discovery. What’s missing is progression:

  • first-time use — guided, constrained, almost “dumb”
  • repeated patterns — shortcuts appear
  • expert usage — UI gets out of the way entirely

Same tool. Different surface.
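
A minimal sketch of how that progression could be driven, assuming the product tracks a couple of usage signals. Tier names, thresholds, and the exposed controls are invented for illustration.

```typescript
// Illustrative only: tier names, thresholds, and controls are invented.

type Tier = "guided" | "shortcuts" | "expert";

interface UsageSignals {
  sessions: number;         // how often the user has come back
  advancedActions: number;  // e.g. shortcuts used, bulk edits performed
}

function tierFor(u: UsageSignals): Tier {
  if (u.sessions < 3) return "guided";            // first-time use: constrained, almost "dumb"
  if (u.advancedActions < 10) return "shortcuts"; // repeated patterns: shortcuts appear
  return "expert";                                // expert usage: the UI gets out of the way
}

// Same tool, different surface: the tier only changes what is exposed.
function visibleControls(tier: Tier): string[] {
  const base = ["create", "search"];
  if (tier === "guided") return base;
  if (tier === "shortcuts") return [...base, "bulk-edit", "keyboard-shortcuts"];
  return [...base, "bulk-edit", "keyboard-shortcuts", "scripting", "raw-json"];
}

console.log(visibleControls(tierFor({ sessions: 12, advancedActions: 25 })));
// ["create", "search", "bulk-edit", "keyboard-shortcuts", "scripting", "raw-json"]
```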

Custom interfaces that adapt to your screen, intent, and mood

Responsive design solved layout. Not behavior. We still ship the same interface everywhere, just rearranged. But intent matters more than screen size. Think:

  • late night, tired — fewer options, larger targets, less noise
  • focused work — dense, information-rich, keyboard-first
  • quick check — minimal, glanceable, ephemeral

Some products are getting close:

  • TikTok optimizes for passive consumption vs active creation with completely different flows
  • Spotify shifts between discovery, background listening, and active control depending on context

But it’s still coarse. Mostly mode switches. The next step is continuous adaptation — interfaces that reshape themselves moment to moment.
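
One way to picture continuous adaptation is as a function from context signals to surface parameters. The sketch below assumes hypothetical signals (time of day, motion, session length) and an invented mapping; none of this is a real API.

```typescript
// Illustrative only: the signals and the mapping are assumptions, not a real API.

interface ContextSignals {
  hourOfDay: number;            // 0–23
  motion: "still" | "walking";
  sessionSeconds: number;       // how long the user has been in the app
}

interface SurfaceParams {
  density: "low" | "medium" | "high";
  targetSizePx: number;
  showSecondaryActions: boolean;
}

function surfaceFor(ctx: ContextSignals): SurfaceParams {
  const lateNight = ctx.hourOfDay >= 22 || ctx.hourOfDay < 6;
  const quickCheck = ctx.sessionSeconds < 30 || ctx.motion === "walking";

  if (lateNight) return { density: "low", targetSizePx: 56, showSecondaryActions: false };
  if (quickCheck) return { density: "medium", targetSizePx: 48, showSecondaryActions: false };
  return { density: "high", targetSizePx: 40, showSecondaryActions: true }; // focused work
}

console.log(surfaceFor({ hourOfDay: 23, motion: "still", sessionSeconds: 300 }));
// { density: "low", targetSizePx: 56, showSecondaryActions: false }
```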

New interaction primitives

Keyboard, mouse, touch — these were stable for decades. Now the input layer is breaking apart again.

  • Voice — Siri, Google Assistant
  • Vision — Apple Vision Pro tracks where you look
  • Gesture — hands as controllers, no physical input
  • Context — location, time, activity shaping behavior
  • Thought (?) — early experiments with brain-computer interfaces

None of these is perfect yet. Voice is unreliable. Vision is awkward. Gesture is tiring. But together, they point to something else: interfaces that don’t wait for explicit input. They infer.

Put it together:

  • a beginner gets a simplified UI
  • an expert gets speed and density
  • the system reshapes itself based on context
  • the UI can be generated on demand
  • input is no longer limited to taps and clicks

Same product. Different reality per user. That is a new generation of responsive and adaptive design.

Conclusions

The interface might no longer be a fixed thing we ship. It’s becoming a living layer between humans and systems — one that learns, adapts, and helps.

We’ll still need creative design.
We’ll still need clean code.
But the job is shifting.

We're not just builders anymore. We should learn how to effectively orchestrate AI in a way that is both productive and user-friendly.

Develop. Shape the tools. Let the UI dance.

  1. Ephemeral Interfaces — speculative notes by Julian Fleck about the future of interfaces.
  2. Vercel AI SDK – Generative User Interfaces. Toolset enabling AI-native apps through Generative UI — LLM‑driven streaming UI components delivered via RSC.
  3. Web Design Museum – Web Design History. A curated digital museum of iconic web design from 1991–2006 through thousands of archived site screenshots.
  4. Cheng, R., Barik, T., Leung, A., Hohman, F., & Nichols, J. (2024, September). BISCUIT: Scaffolding LLM-generated code with ephemeral UIs in computational notebooks. In 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 13-23). IEEE.
  5. Agrawal, K. (2024). The Next Big AI-UX Trend — It’s Not Conversational UI. — Identifies “Ephemeral Interfaces” as one of four key AI UX trends, indicating industry awareness of the term.
  6. Bernhard, R. (2024). The Ephemeral Interface: How AI Agents Are Redefining Our Digital Lives. — Introduces “disposable apps” as “transient yet hyper-intentional tools” created by AI agents on demand (ephemeral apps/interfaces).
  7. Kobetz, R. (2023). Decoding The Future: The Evolution Of Intelligent Interfaces. — Discusses future interfaces that are “compiled in real-time, based on context… UI that appears when needed and hidden when not,” calling them intelligent, contextual, and ephemeral.