Posted by Terence Zhang – Developer Relations Engineer
At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you are at the forefront of this exciting shift. By embracing generative AI (Gen AI), you can craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.
Gemini models are powering new generative AI apps both over the cloud and directly on-device. You can now build with Gen AI using our most capable models over the cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools – Gemini in Android Studio supercharges your developer productivity.
Let’s walk through the major announcements for AI on Android from this year’s I/O sessions in more detail!
#1: Build AI apps leveraging cloud-based Gemini models
To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, leverage the Gemini API directly in your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).
If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
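As a minimal sketch of what a Gemini API call looks like with the Google AI client SDK for Android, the Kotlin snippet below sends a prompt and reads back the generated text. The model name, prompt, and API key handling are illustrative placeholders, not a prescribed setup:

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Sketch only: assumes a Gradle-generated BuildConfig field holds the API key.
suspend fun summarizeNote(noteText: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",          // illustrative model choice
        apiKey = BuildConfig.GEMINI_API_KEY      // assumed build-time secret, not hardcoded
    )
    // generateContent is a suspend function, so call it from a coroutine.
    val response = model.generateContent("Summarize the following note:\n$noteText")
    return response.text
}
```

For a production app, the same generateContent-style call can instead go through the Vertex AI for Firebase SDK, which keeps the API key out of the client.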
We’re also launching the first Gemini API Developer Competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?
#2: Use Gemini Nano for on-device Gen AI
While cloud-based models are highly capable, on-device inference enables offline use, low-latency responses, and ensures that data won’t leave the device.
At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like TalkBack, helping people who are blind or have low vision interact with their devices through touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.
We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on binary size while centralizing the runtime, delivery, and critical safety aspects for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.
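To give a feel for what calling Gemini Nano through AICore looks like, here is a rough sketch assuming the experimental Google AI Edge SDK surface (the `com.google.ai.edge.aicore` package, its config builder, and the parameter names are assumptions and may differ from the released API):

```kotlin
import android.content.Context
import com.google.ai.edge.aicore.GenerativeModel
import com.google.ai.edge.aicore.generationConfig

// Assumed experimental API shape; treat names and defaults as placeholders.
suspend fun draftReply(appContext: Context, message: String): String? {
    val config = generationConfig {
        context = appContext        // AICore is configured with an application context
        temperature = 0.2f
        topK = 16
        maxOutputTokens = 256
    }
    val model = GenerativeModel(config)
    // Inference runs on-device via the AICore system service; no network call is made.
    val response = model.generateContent("Suggest a short reply to: $message")
    return response.text
}
```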
Gemini Nano is already transforming key Google apps, including Messages and Recorder, to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we’re actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.
Adobe is one of these trailblazers, and they are exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.
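As a rough illustration of that hybrid pattern (not Adobe’s actual implementation), a small router can send lightweight tasks to an on-device model and open-ended queries to a cloud-hosted one. The interface and task names below are hypothetical stand-ins:

```kotlin
// Hypothetical sketch of on-device vs. cloud routing; types are illustrative.
interface TextGenerator {
    suspend fun generate(prompt: String): String?
}

enum class Task { SUMMARIZE, SUGGEST_QUESTIONS, ANSWER_QUERY }

class GenAiRouter(
    private val onDevice: TextGenerator, // e.g. backed by Gemini Nano via AICore
    private val cloud: TextGenerator     // e.g. backed by Gemini 1.5 via the Gemini API
) {
    suspend fun run(task: Task, prompt: String): String? = when (task) {
        // Lightweight tasks stay on-device: they work offline and avoid per-call cost.
        Task.SUMMARIZE, Task.SUGGEST_QUESTIONS -> onDevice.generate(prompt)
        // Open-ended queries go to the larger cloud-hosted model.
        Task.ANSWER_QUERY -> cloud.generate(prompt)
    }
}
```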
This is just the beginning – later this year, we’ll be investing heavily to enable, and aim to launch with, many more developers.
To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.
#3: Use Gemini in Android Studio to help you be more productive
Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.
At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to attach images as part of your AI queries, enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.
You can read more about the updates here, and make sure to check out What’s new in Android development tools.