Google has released version 0.9 of the A2UI protocol, a framework-agnostic standard for generative user interfaces. The standard lets agents build UI elements on the fly from an application's existing components in React, Flutter, Lit, or Angular, so instead of being limited to text chat, an agent can assemble the widgets and control panels a task requires.

The technical core of release 0.9 is the new Agent SDK, currently available for Python, with Go and Kotlin versions coming soon. According to the documentation at A2UI.org, the protocol now supports client-server data syncing and improved error handling. For developers, this means the agent can decide at runtime which interface element is relevant, drawing on a shared library of renderers.
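The core idea — the agent emits a declarative description of a UI element, and the client renders it with its own component library — can be sketched roughly as follows. This is a minimal illustration, not the actual A2UI wire format: the `CATALOG` set, the `choose_component` function, and the message shape are all assumptions made for the example.

```python
import json

# Hypothetical catalog of renderers the client application has exposed.
# In the A2UI model, the client supplies the components; the agent only
# emits declarative descriptions that reference them.
CATALOG = {"slider", "date_picker", "card", "chart"}

def choose_component(intent: str) -> dict:
    """Pick a catalog component for a user intent and emit a declarative
    UI message. Purely illustrative; not the real A2UI schema."""
    mapping = {
        "pick_date": ("date_picker", {"label": "Choose a date"}),
        "set_budget": ("slider", {"min": 0, "max": 500, "label": "Budget"}),
    }
    component, props = mapping.get(intent, ("card", {"text": "Fallback"}))
    # Never ask the client for a widget it cannot render.
    assert component in CATALOG
    return {"component": component, "props": props}

print(json.dumps(choose_component("set_budget")))
```

The key design property this illustrates is that the agent never ships UI code — it selects from what the host application already renders, which is why the protocol can stay framework-agnostic.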

The ecosystem expansion includes integrations with Vercel's json-renderer and Oracle's Agent Spec, along with compatibility with AG2 and A2A 1.0. Early case studies from Rebel App Studio (Personal Health Companion) and Very Good Ventures (Life Goal Simulator) show dynamic UIs built by pulling from an application's existing components across web, mobile, and other platforms.

The business logic here is a shift from text to action through visualization. Teams planning AI features should audit their component libraries for compatibility with the Agent SDK: instead of designing rigid user paths, developers will curate flexible sets of elements that the AI composes depending on context.
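In practice, "auditing a component library" amounts to exposing an inventory of renderable components that agents can query. A minimal sketch of such a registry, with all class and method names assumed rather than taken from the Agent SDK:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentSpec:
    """Describes one component the client exposes to agents.
    Field names are illustrative, not an A2UI registration API."""
    name: str
    props: dict = field(default_factory=dict)

class ComponentRegistry:
    """Hypothetical inventory an application publishes to its agent."""

    def __init__(self) -> None:
        self._components: dict[str, ComponentSpec] = {}

    def register(self, spec: ComponentSpec) -> None:
        self._components[spec.name] = spec

    def supports(self, name: str) -> bool:
        return name in self._components

registry = ComponentRegistry()
registry.register(ComponentSpec("chart", {"series": "list"}))
registry.register(ComponentSpec("slider", {"min": "int", "max": "int"}))
print(registry.supports("chart"))  # a component the app can render
print(registry.supports("modal"))  # one it cannot
```

The audit the article recommends is then a matter of checking which of your existing components can be described this way and registered for agent use.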

Tags: AI Agents, Generative AI, Digital Transformation, Google DeepMind