Android expands agentic AI tools with AppFunctions and On-Device Automation Framework

Google is rolling out new developer capabilities designed to embed AI agents more directly into the Android app ecosystem, shifting how users interact with apps on their devices.

The initiative introduces early-stage tools aimed at enabling AI systems, including Google Gemini, to execute tasks across third-party Android apps. The changes reflect a broader transition from traditional app navigation toward agent-driven interactions, where users issue natural language requests and AI systems coordinate actions behind the scenes.

AppFunctions: Structured Access to App Capabilities

At the center of the update is Android AppFunctions, a new framework that allows developers to expose specific app features and data directly to AI agents. Through a Jetpack library and supporting platform APIs, developers can define structured, self-describing functions that AI assistants can discover and execute locally on the device.

Unlike cloud-based integrations, AppFunctions operates on-device, enabling AI agents to interpret user intent and trigger predefined app capabilities without routing data through external servers. The approach mirrors backend-style function declarations but applies them within the Android environment.
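
Google has not spelled out the full API surface in this announcement, but the declaration model it describes maps to annotated, schema-described functions inside the app. The Kotlin sketch below is illustrative only: the @AppFunction and @AppFunctionSerializable annotations and the AppFunctionContext parameter follow the pattern of the androidx.appfunctions Jetpack library, while the gallery-style function and its types are hypothetical stand-ins rather than a confirmed interface.

```kotlin
import androidx.appfunctions.AppFunction
import androidx.appfunctions.AppFunctionContext
import androidx.appfunctions.AppFunctionSerializable

// Hypothetical return type for a gallery-style app function.
// @AppFunctionSerializable marks it as a structured value the
// agent can pass along to a follow-up action (e.g. sharing).
@AppFunctionSerializable
class PhotoResult(
    val contentUri: String,   // URI the agent can hand to another app
    val takenAtMillis: Long   // capture time, for "last weekend" style queries
)

class GalleryFunctions {

    // A self-describing capability the assistant can discover on-device.
    // The agent maps a request like "find my beach photos from July"
    // onto this function's schema, fills in the parameters, and runs it
    // locally; no photo data is routed through external servers.
    @AppFunction
    fun findPhotos(
        appFunctionContext: AppFunctionContext,
        searchTerms: String,
        maxResults: Int
    ): List<PhotoResult> {
        // App-specific lookup against the local photo index (stubbed here).
        return emptyList()
    }
}
```

In this model, the declared schema is what gets indexed for discovery, so the agent reasons over typed parameters rather than scraping screen contents.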

An early example of the framework in use is the integration between Gemini and Samsung Gallery on the Samsung Galaxy S26 series. Users can request specific photos via voice or text, and Gemini retrieves images directly from the Gallery app without requiring users to open it manually. Retrieved content can then be used in subsequent actions, such as sharing images through messaging apps.

The integration is currently available on Galaxy S26 devices and is expected to expand to additional Samsung devices running One UI 8.5 or later. Google says similar structured integrations are already being used for categories including calendar management, note-taking, and task organization across devices from multiple manufacturers.

UI Automation for Broader App Access

In parallel with structured integrations, Google is testing a UI automation framework designed to allow AI agents to interact with apps that do not yet support AppFunctions.

The automation system enables AI assistants to carry out multi-step workflows inside installed apps by simulating user interactions. Developers are not required to modify their apps to participate, making it a lower-effort path to AI compatibility.
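
Google has not detailed how the automation layer drives app UIs, and participating apps do not need to add code. As a rough conceptual analogue only, Android's existing AccessibilityService APIs already let a privileged service read another app's view tree and act on it; the sketch below uses those public APIs to show what simulating a user interaction can look like at the platform level, not how Gemini's framework is actually built.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Conceptual analogue: a service that inspects the foreground app's UI
// tree and performs a tap, the kind of primitive a delegated
// "place this order" workflow would chain into multiple steps.
class AgentAutomationService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // React to screen changes in the target app, e.g. a new page loading,
        // before deciding on the next action in the workflow.
    }

    override fun onInterrupt() {
        // Required callback; invoked when the system interrupts the service.
    }

    // Find a clickable element by its visible label and tap it.
    private fun tapButtonLabeled(label: String): Boolean {
        val root: AccessibilityNodeInfo = rootInActiveWindow ?: return false
        val match = root.findAccessibilityNodeInfosByText(label)
            .firstOrNull { it.isClickable }
            ?: return false
        return match.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }
}
```

This is purely to ground the idea of "simulating user interactions"; whether Gemini's framework relies on accessibility infrastructure or a new system service has not been disclosed.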

An early preview of this capability is launching in beta on the Galaxy S26 series and select Pixel 10 devices. Through the Gemini app, users can initiate delegated tasks—such as placing complex food delivery orders, arranging multi-stop rides, or reordering groceries—by long-pressing the power button.

The initial rollout will support a limited set of apps in food delivery, grocery, and rideshare categories in the United States and South Korea.

Google states that user oversight is built into the automation flow. Notifications and a “live view” interface allow users to monitor task progress, intervene at any point, or manually complete an action. For sensitive actions such as purchases, Gemini prompts users for confirmation before finalizing the transaction.

Roadmap Toward Android 17

The company plans to expand both AppFunctions and UI automation capabilities with the release of Android 17. Current efforts focus on working with a small group of developers to refine user experience and security safeguards before broader availability.

The move positions Android toward a model in which AI agents operate as intermediaries between users and apps, reducing the need for direct app navigation while increasing reliance on structured integrations and system-level automation.

Further details on developer access and ecosystem expansion are expected later this year.

Written by Maya Robertson
