Google has released the Android Studio Otter 3 Feature Drop as a stable update, introducing a broad set of changes centered on AI-assisted development, testing workflows, and build tooling. The update reflects a shift toward greater flexibility in how developers integrate large language models (LLMs) and agent-based tools into the Android development process.
A key addition in this release is support for “bring your own model,” allowing developers to choose which LLM powers AI features inside Android Studio. Developers can now connect remote models from third-party providers or run supported local models, depending on their privacy, cost, or infrastructure requirements. Google’s Gemini remains the default option, but the IDE no longer enforces a single-model approach.
The update also expands Agent Mode, Android Studio’s semi-autonomous assistant. Agent Mode can now deploy apps to connected devices, inspect on-screen UI states, capture screenshots, review Logcat output, and interact directly with running applications. These capabilities are intended to support iterative debugging and validation workflows that involve repeated build-and-run cycles.
To improve oversight of AI-driven code changes, Android Studio now includes a centralized changes drawer. This view allows developers to review, accept, or revert modifications made by the agent across multiple files, with diff-level visibility inside the editor. The IDE also introduces support for multiple concurrent conversation threads, enabling developers to separate tasks and manage AI context more precisely.
The Otter 3 Feature Drop adds a new testing capability called Journeys, which allows developers to define end-to-end UI tests using natural language. These instructions are translated into on-device interactions performed by Gemini, including visual reasoning based on what is rendered on screen. Journeys run as Gradle tasks and provide step-by-step execution details, screenshots, and reasoning logs to aid debugging.
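As a rough illustration of the idea (the file layout and element names below are assumptions for illustration, not Google's documented schema), a journey pairs a test description with natural-language steps that the agent interprets against the rendered UI:

```xml
<!-- Hypothetical sketch of a journey definition; element names are
     illustrative, not the official format. -->
<journey>
  <description>Verify that a user can sign in with valid credentials</description>
  <step>Launch the app and tap the "Sign in" button</step>
  <step>Enter "test@example.com" in the email field and tap "Continue"</step>
  <step>Check that the home screen greeting is visible</step>
</journey>
```

Because the steps are interpreted by Gemini rather than compiled against view IDs, tests written this way can tolerate minor layout changes that would break a traditional recorded script.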
Android Studio can now also connect to remote Model Context Protocol (MCP) servers, allowing Agent Mode to access external tools such as design files or documentation systems. This integration is designed to reduce context switching by making third-party resources directly available within the IDE.
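MCP client configurations across tools generally follow a common JSON shape in which each named server entry points at a command or URL. The sketch below shows a remote-server entry in that style; the file name, its location, and the server URL are assumptions for illustration, not details taken from Android Studio's documentation:

```json
{
  "mcpServers": {
    "design-docs": {
      "url": "https://mcp.example.com/sse"
    }
  }
}
```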
On the UI development side, AI features have been embedded into the Compose Preview workflow. Developers can generate UI code from screenshots, iteratively match layouts to reference images, make visual changes using natural language, and run automated checks for accessibility and UI quality issues. The same agent-driven tooling can also generate missing Compose previews and diagnose preview rendering errors.
Beyond AI-driven features, the release includes updates to core developer tooling. Logcat now automatically retraces obfuscated stack traces from R8-optimized builds, removing the need for manual retracing steps. In addition, a new Fused Library plugin allows multiple Android library modules to be packaged and published as a single AAR, simplifying dependency management and distribution.
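A fused library is typically configured as its own Gradle module whose dependencies name the library modules to bundle. A minimal sketch, based on the Android Gradle Plugin's Fused Library plugin DSL (the module names and namespace are placeholders, and the exact DSL may vary across AGP versions):

```kotlin
// build.gradle.kts of the fused library module.
plugins {
    id("com.android.fusedlibrary")
}

androidFusedLibrary {
    // Namespace and minSdk for the merged artifact (placeholder values).
    namespace = "com.example.mysdk"
    minSdk = 24
}

dependencies {
    // Library modules merged into the single published AAR.
    include(project(":core"))
    include(project(":network"))
}
```

Publishing the fused module then produces one AAR that consumers depend on directly, instead of resolving each internal module as a separate artifact.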