Android Developers Blog
@android-developers.googleblog.com.web.brid.gy
News and insights on the Android platform, developer tools, and events.

[bridged from https://android-developers.googleblog.com/ on the web: https://fed.brid.gy/web/android-developers.googleblog.com ]
Fully Optimized: Wrapping up Performance Spotlight Week
_Posted by Ben Weiss, Senior Developer Relations Engineer and Sara Hamilton, Product Manager_

We spent the past week diving deep into best practices and guidance that help make Android apps faster, smaller, and more stable. From the foundational powers of the R8 optimizer and Profile Guided Optimization, to performance improvements with Jetpack Compose, to a new guide on levelling up your app's performance, we've covered the low-effort, high-impact tools you need to build a performant app. This post serves as your index and roadmap to revisit these resources whenever you need to optimize. Here are the five key takeaways from our journey together.

# Use the R8 optimizer to speed up your app

The single most impactful, low-effort change you can make is fully enabling the R8 optimizer. It doesn't just reduce app size; it performs deep, whole-program optimizations to fundamentally rewrite your code for efficiency. Revisit your Keep Rules and get R8 back into your engineering tasks. Our newly updated and expanded documentation on the R8 optimizer is here to help.

Reddit observed a 40% faster cold startup and 30% fewer ANR errors after enabling R8 full mode. You can read the full case study on our blog.

Engineers at Disney+ invest in app performance and are optimizing the app's user experience. Sometimes even seemingly small changes can make a huge impact. While inspecting their R8 configuration, the team found that the -dontoptimize flag was being used. After enabling optimizations by removing this flag, the Disney+ team saw significant improvements in their app's performance.

So next time someone asks you what you could do to improve app performance, just link them to this post.

Read more in our Day 1 blog: Use R8 to shrink, optimize, and fast-track your app

# Guiding you to better performance

Baseline Profiles effectively remove the need for Just in Time compilation, improving startup speed, scrolling, animation and overall rendering performance. Startup Profiles make app startup even more lightweight by bringing an intelligent order to your app's classes.dex files. And to learn more about just how important Baseline Profiles are for app performance, read Meta's engineering blog where they shared how Baseline Profiles improved various critical performance metrics by up to 40% across their apps.

We continue to make Jetpack Compose more performant for you in Jetpack Compose 1.10. Features like pausable composition and the customizable cache window are crucial for maintaining zero scroll jank when dealing with complex list items. Take a look at the latest episode of #TheAndroidShow where we explain this in more detail.

Read more in Wednesday's blog: Deeper Performance Considerations

# Measuring performance can be as easy as 1, 2, 3

You can't manage what you don't measure. Our Performance Leveling Guide breaks down your measurement journey into five steps, starting with easily available data and building up to advanced local tooling. Starting at level 1, we'll teach you how to use readily available data from Android vitals, which provides you with field data on ANRs, crashes, and excessive battery usage. We'll also teach you how to level up. For example, we'll demonstrate how to reach level 3 with local performance testing using Jetpack Macrobenchmark and the new UiAutomator 2.4 API to accurately measure and verify any change in your app's performance.
Read more in Thursday's blog.

# Debugging performance just got an upgrade

Advanced optimization shouldn't mean unreadable crash reports. New features are designed to help you confidently debug R8 and background work.

Automatic Logcat retrace: Starting in Android Studio Narwhal, stack traces can automatically be de-obfuscated in the Logcat window. This way you can immediately see and debug any crashes in a production-ready build.

Narrow Keep Rules: On Tuesday we demystified the Keep Rules needed to fix runtime crashes, emphasizing writing specific, member-level rules over overly broad wildcards. And because it's an important topic, we made you a video as well. And with the new lint check for wide Keep Rules, the Android Studio Otter 3 Feature Drop has you covered here as well. We also released new guidance on testing and troubleshooting your R8 configuration to help you get the configuration right with confidence.

Read more in Tuesday's blog: Configure and troubleshoot R8 Keep Rules

Background work: We shared guidance on debugging common scenarios you may encounter when scheduling tasks with WorkManager. Background Task Inspector gives you a visual representation and graph view of WorkManager tasks, helping you debug why scheduled work is delayed or has failed. And our refreshed Background Work documentation landing page highlights task-specific APIs that are optimized for particular use cases, helping you achieve more reliable execution.

Read more in Wednesday's blog: Background work performance considerations

# Performance optimization is an ongoing journey

If you successfully took our challenge to enable R8 full mode this week, your next step is to integrate performance into your product roadmap using the App Performance Score. This standardized framework helps you find the highest-leverage action items for continuous improvement.

We capped off the week with the #AskAndroid Live Q&A session, where engineers answered your toughest questions on R8, Profile Guided Optimization, and more. If you missed it, look for the replay!

Thank you for joining us! Now, get building and keep that momentum going.
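If you're picking up the R8 challenge from this recap, here is a minimal sketch of the release configuration in the Kotlin DSL. Note that R8 full mode is already the default on recent Android Gradle Plugin versions; the gradle.properties flag shown at the end only matters on older AGP releases.

// build.gradle.kts (app module)
android {
    buildTypes {
        release {
            isMinifyEnabled = true       // enables R8 shrinking, obfuscation and optimization
            isShrinkResources = true     // removes unused resources after code shrinking
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}

// gradle.properties, only needed on older AGP versions where full mode is not yet the default
android.enableR8.fullMode=true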
android-developers.googleblog.com
November 22, 2025 at 2:01 PM
Leveling Guide for your Performance Journey
_Posted by Alice Yuan - Senior Developer Relations Engineer_

Welcome to day 4 of Performance Spotlight Week. Now that you've learned about some of the awesome tools and best practices we've introduced recently, such as the R8 optimizer and Profile Guided Optimization with Baseline Profiles and Startup Profiles, you might be wondering where to start your performance improvement journey.

We've come up with a step-by-step performance leveling guide to meet your mobile development team where you are—whether you're an app with a single developer looking to get started with performance, or you have an entire team dedicated to improving Android performance.

The performance leveling guide features 5 levels. We'll start with level 1, which introduces performance tooling with minimal adoption effort, and go up to level 5, ideal for apps that have the resourcing to maintain a bespoke performance framework. Feel free to jump to the level that resonates most with you:

* Level 1: Use Play Console provided field monitoring
* Level 2: Follow the App Performance Score action items
* Level 3: Leverage local performance test frameworks
* Level 4: Use trace analysis tools like Perfetto
* Level 5: Build your own performance tracking framework

# Level 1: Use Play Console provided field monitoring

We recommend first leveraging Android vitals within the Play Console for viewing automatically collected field monitoring data, giving you insights about your application with minimal effort. Android vitals is Google's initiative to automatically collect and surface this field data for you. Here's how we deliver this data:

1. Collect Data: When a user opts in, their Android device automatically logs key performance and stability events from all apps, including yours.
2. Aggregate Data: Google Play collects and anonymizes this data from your app's users.
3. Surface Insights: The data is presented to you in the Android vitals dashboard within your Google Play Console.

The Android vitals dashboard tracks many metrics, but a few are designated as Core Vitals. These are the most important because they can affect your app's visibility and ranking on the Google Play Store.

## The Core Vitals

Google Play's core technical quality metrics: to maximize visibility on Google Play, keep your app below the bad behavior thresholds for these metrics.

Metric | Description
--- | ---
User-perceived crash rate | The percentage of daily active users who experienced at least one crash that is likely to have been noticeable
User-perceived ANR rate | The percentage of daily active users who experienced at least one ANR that is likely to have been noticeable
Excessive battery usage | The percentage of watch face sessions where battery usage exceeds 4.44% per hour
New: Excessive partial wake locks | The percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours

The core vitals include user-perceived crash rate, ANR rate, excessive battery usage and the newly introduced metric on excessive partial wake locks.

## User-Perceived ANR Rate

You can use the Android vitals ANR dashboard to see stack traces of issues that occur in the field and get insights and recommendations on how to fix them. You can drill down into a specific ANR that occurred to see the stack trace as well as insights on what might be causing the issue. Also, check out our ANR guidance to help you diagnose and fix the common scenarios where ANRs might occur.
## User-Perceived Crash Rate

Use the Android vitals crash dashboard to further debug crashes and view a sample of stack traces that occur within your app. Our documentation also has guidance around troubleshooting specific crashes. For example, the Troubleshoot foreground services guide discusses ways to identify and fix common scenarios where crashes occur.

## Excessive Battery Usage

To decrease watch face sessions with excessive battery usage on Wear OS, check out the Wear guide on how to improve and conserve battery.

## [New] Excessive Partial Wake Locks

We recently announced that apps that exceed the excessive partial wake locks threshold may see additional treatment starting on March 1st, 2026. For mobile devices, the Android vitals metric applies to non-exempted wake locks acquired while the screen is off and the app is in the background or running a foreground service. Android vitals considers partial wake lock usage excessive if wake locks are held for at least two hours within a 24-hour period and it affects more than 5% of your app's sessions, averaged over 28 days. To debug and fix excessive wake lock issues, check out our technical blog post.

Consult our Android vitals documentation and continue your journey to better leverage Android vitals.

# Level 2: Follow the App Performance Score action items

Next, move on to using the App Performance Score to find the high-leverage action items to level up your app performance. The Android App Performance Score is a standardized framework to measure your app's technical performance. It gives you a score between 0 and 100, where a lower number indicates more room for improvement. To get easy wins, start with the Static Performance Score. These are often configuration changes or tooling updates that provide significant performance boosts.

## Step 1: Perform the Static Assessment

The static assessment evaluates your project's configuration and tooling adoption. These are often the quickest ways to improve performance. Navigate to the Static Score section of the scoreboard page and do the following:

1. Assess your Android Gradle Plugin (AGP) version.
2. Adopt R8 minification incrementally or, ideally, use R8 in full mode to minify and optimize the app code.
3. Adopt Baseline Profiles, which improve code execution speed from the first launch, providing performance enhancements for every new app install and every app update.
4. Adopt Startup Profiles to improve dex layout. Startup Profiles are used by the build system to further optimize the classes and methods they contain by improving the layout of code in your APK's DEX files.
5. Upgrade to the newest version of Jetpack Compose.

## Step 2: Perform the Dynamic Assessment

Once you have applied the static easy wins, use the dynamic assessment to validate the improvements on a real device. You can first do this manually with a physical device and a stopwatch. Navigate to the Dynamic Score section of the scoreboard page and do the following:

1. Set up your test environment with a physical device. Consider using a lower-end device to exaggerate performance issues, making them easier to spot.
2. Measure startup time from the launcher. Cold start your app from the launcher icon and measure the time until it is interactive.
3. Measure app startup time from a notification, with the goal of reducing notification startup time to below a couple of seconds.
4. Measure rendering performance by scrolling through your core screens and animations.
Once you've completed these steps, you will receive a score between 0 and 100 for the static and dynamic assessments, giving you an understanding of your app's performance and where to focus.

# Level 3: Leverage local performance test frameworks

Once you've started to assess dynamic performance, you may find it too tedious to measure performance manually. Consider automating your performance testing using performance test frameworks such as Macrobenchmark and UiAutomator.

## Macrobenchmark 💚 UiAutomator

Think of Macrobenchmark and UiAutomator as two tools that work together:

Macrobenchmark is the measurement tool. It's like a stopwatch and a frame-rate counter that runs outside your app. It is responsible for starting your app, recording metrics (like startup time or dropped frames), and stopping the app.

UiAutomator is the robot user. The library lets you write code to interact with the device's screen. It can find an icon, tap a button, scroll a list and more.

### How to write a test

When you write a test, you wrap your UiAutomator code inside a Macrobenchmark block.

1. Define the test: Use the @MacrobenchmarkRule.
2. Start measuring: Call benchmarkRule.measureRepeated.
3. Drive the UI: Inside that block, use UiAutomator code to launch your app, find UI elements, and interact with them.

Here's an example code snippet of what it looks like to test a Compose list for scrolling jank.

benchmarkRule.measureRepeated(
    // ...
    metrics = listOf(
        FrameTimingMetric(),
    ),
    startupMode = StartupMode.COLD,
    iterations = 10,
) {
    // 1. Launch the app's main activity
    startApp()

    // 2. Find the list using its resource ID and scroll down
    onElement { viewIdResourceName == "$packageName.my_list" }
        .fling(Direction.DOWN)
}

4. Review the results: Each test run provides you with precisely measured information to give you the best data on your app's performance.

timeToInitialDisplayMs  min 1894.4,  median 2847.4,  max 3355.6
frameOverrunMs          P50 -3.2,  P90 6.2,  P95 10.4,  P99 119.5

### Common use cases

Macrobenchmark provides several core metrics out of the box. StartupTimingMetric allows you to accurately measure app startup. FrameTimingMetric enables you to understand an app's rendering performance during the test. We have a detailed and complete guide to using Macrobenchmark and UiAutomator alongside code samples available for you to continue learning.

# Level 4: Use trace analysis tools like Perfetto

Trace analysis tools like Perfetto are used when you need to see beyond your own application code. Unlike standard debuggers or profilers that only see your process, Perfetto captures the entire device state—kernel scheduling, CPU frequency, other processes, and system services—giving you complete context for performance issues. Check our Performance Debugging YouTube playlist for video instructions on performance debugging using system traces, Android Studio Profiler and Perfetto.

## How to use Perfetto to debug performance

The general workflow for debugging performance using trace analysis tools is to record, load and analyze the trace.

### Step 1: Record a trace

You can record a system trace using several methods:

* Recording a trace manually on the device, directly from the developer options
* Using the Android Studio CPU Profiler
* Using the Perfetto UI

### Step 2: Load the trace

Once you have the trace file, you need to load it into the analysis tool.

1. Open Chrome and navigate to ui.perfetto.dev.
2. Drag and drop your .perfetto-trace (or .pftrace) file directly into the browser window.
3. The UI will process the file and display the timeline.

### Step 3: Analyze the trace

You can use the Perfetto UI or Android Studio Profiler to investigate performance issues. Check out this episode of the MAD Skills series on Performance, where our performance engineer Carmen Jackson discusses the Perfetto trace viewer.

## Scenarios for inspecting system traces using Perfetto

Perfetto is an expert tool and can provide information about everything that happened on the Android device while a trace was captured. This is particularly helpful when you cannot identify the root cause of a slowdown using standard logs or basic profilers.

### Debugging Jank (Dropped Frames)

If your app stutters while scrolling, Perfetto can show you exactly why a specific frame missed its deadline. If it's due to the app, you might see your main thread running for a long duration doing heavy parsing; this indicates scenarios where you should move the work into asynchronous processing. If it's due to the system, you might see your main thread ready to run, but the CPU kernel scheduler gave priority to a different system service, leaving your app waiting (CPU contention). This indicates scenarios where you may need to optimize usage of platform APIs.

### Analyzing Slow App Startup

Startup is complex, involving system init, process forking, and resource loading. Perfetto visualizes this timeline precisely. You can see if you are waiting on Binder calls (inter-process communication). If your onCreate waits a long time for a response from the system PackageManager, Perfetto will show that blocked state clearly. You can also see if your app is doing more work than necessary during app startup. For example, if you are creating and laying out more views than the app needs to show, you can see these operations in the trace.

### Investigating Battery Drain & CPU Usage

Because Perfetto sees the whole system, it's perfect for finding invisible power drains. You can identify which processes are holding wake locks, preventing the device from sleeping, under the "Device State" tracks. Learn more in our wake locks blog post. Also, use Perfetto to see if your background jobs are running too frequently or waking up the CPU unnecessarily.

# Level 5: Build your own performance tracking framework

The final level is for apps that have teams with the resourcing to maintain a performance tracking framework. Building a custom performance tracking framework on Android involves leveraging several system APIs to capture data throughout the application lifecycle, from startup to exit, and during specific high-load scenarios. By using ApplicationStartInfo, ProfilingManager, and ApplicationExitInfo, you can create a robust telemetry system that reports on how your app started, detailed info on what it did while running, and why it died.

## ApplicationStartInfo: Tracking how the app started

Available from Android 15 (API 35), ApplicationStartInfo provides detailed metrics about app startup in the field. The data includes whether it was a cold, warm, or hot start, and the duration of different startup phases. This helps you develop a baseline startup metric using production data, letting you further optimize scenarios that might be hard to reproduce locally. You can use these metrics to run A/B tests optimizing the startup flow. The goal is to accurately record launch metrics without manually instrumenting every initialization phase.
You can query this data lazily some time after application launch.

## ProfilingManager: Capturing why it was slow

ProfilingManager (API 35) allows your app to programmatically trigger system traces on user devices. This is powerful for catching transient performance issues in the wild that you can't reproduce locally. The goal is to automatically record a trace when a specific, highly critical user journey is detected as running slowly or experiencing performance issues. You can register a listener that triggers when specific conditions are met, or trigger it manually when you detect a performance issue such as jank, excessive memory, or battery drain. Check our documentation on how to capture a profile, retrieve and analyze profiling data and use debug commands.

## ApplicationExitInfo: Tracking why the app died

ApplicationExitInfo (API 30) tells you why your previous process died. This is crucial for finding native crashes, ANRs, or system kills due to excessive memory usage (OOM). You'll also be able to get a detailed tombstone trace by using the getTraceInputStream API. The goal of the API is to understand stability issues that don't trigger standard Java crash reporters (like low memory kills). You should call this API on the next app launch.

# Next Steps

Improving Android performance is a step-by-step journey. We're so excited to see how you level up your performance using these tools!

## Tune in tomorrow for Ask Android

You have shrunk your app with R8, optimized your runtime with Profile Guided Optimization, and measured your app's performance. Join us tomorrow for the live Ask Android session. Ask your questions now using #AskAndroid and get them answered by the experts.
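To illustrate the ApplicationExitInfo approach described above, here is a minimal sketch that reads the previous process's exit reasons on the next launch. The ActivityManager calls are the platform APIs the post refers to; the logging destination and the choice of which reasons to report are placeholders to adapt to your own telemetry.

import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context
import android.util.Log

// Call early on the next app launch (API 30+).
fun reportLastExitReasons(context: Context) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    // 0 = no pid filter, 5 = at most five recent records for this package.
    val exits: List<ApplicationExitInfo> = am.getHistoricalProcessExitReasons(context.packageName, 0, 5)
    for (exit in exits) {
        when (exit.reason) {
            ApplicationExitInfo.REASON_ANR,
            ApplicationExitInfo.REASON_CRASH_NATIVE,
            ApplicationExitInfo.REASON_LOW_MEMORY ->
                // Forward to your own telemetry pipeline (placeholder).
                Log.w("ExitTelemetry", "Previous process died: reason=${exit.reason}, ${exit.description}")
            else -> { /* ignore normal exits */ }
        }
    }
}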
android-developers.googleblog.com
November 21, 2025 at 5:54 AM
Jetpack Navigation 3 is stable
_Posted by Don Turner - Developer Relations Engineer_

Jetpack Navigation 3 version 1.0 is stable 🎉. Go ahead and use it in your production apps today. JetBrains are already using it in their KotlinConf app.

Navigation 3 is a new navigation library built from the ground up to embrace Jetpack Compose state. It gives you full control over your back stack, helps you retain navigation state, and allows you to easily create adaptive layouts (like list-detail). There's even a cross-platform version from JetBrains.

Why a new library?

The original Jetpack Navigation library (now Nav2) was designed 7 years ago and, while it serves its original goals well and has been improved iteratively, the way apps are now built has fundamentally changed. Reactive programming with a declarative UI is now the norm. Nav3 embraces this approach. For example, NavDisplay (the Nav3 UI component that displays your screens) simply observes a list of keys (each one representing a screen) backed by Compose state and updates its UI when that list changes.

Figure 1. NavDisplay observes changes to a list backed by Compose state.

Nav2 can also make it difficult to have a single source of truth for your navigation state because it has its own internal state. With Nav3, you supply your own state, which gives you complete control.

Lastly, you asked for more flexibility and customizability. Rather than having a single, monolithic API, Nav3 provides smaller, decoupled APIs (or "building blocks") that can be combined together to create complex functionality. Nav3 itself uses these building blocks to provide sensible defaults for well-defined navigation use cases. This approach allows you to:

* Customize screen animations at both a global and individual level
* Display multiple panes at the same time, and create flexible layouts using the Scenes API
* Easily replace Nav3 components with your own implementations if you want custom behavior

Read more about its design and features in the launch blog.

Migrating from Navigation 2

If you're already using Nav2, specifically Navigation Compose, you should consider migrating to Nav3. To assist you with this, there is a migration guide. The key steps are:

1. Add the Navigation 3 dependencies.
2. Update your navigation routes to implement NavKey. Your routes don't have to implement this interface to use Nav3, but if they do, you can take advantage of Nav3's rememberNavBackStack function to create a persistent back stack.
3. Create classes to hold and modify your navigation state - this is where your back stacks are held.
4. Replace NavController with these classes.
5. Move your destinations from NavHost's NavGraph into an entryProvider.
6. Replace NavHost with NavDisplay.

Experimenting with AI agent migration

You may want to experiment with using an AI agent to read the migration guide and perform the steps on your project. To try this with Gemini in Android Studio's Agent Mode:

* Save this markdown version of the guide into your project.
* Paste this prompt to the agent (but don't hit enter): "Migrate this project to Navigation 3 using ".
* Type @migration-guide.md - this will supply the guide as context to the agent.

As always, make sure you carefully review the changes made by the AI agent - it can make mistakes! We'd love to hear how you or your agent performed, please send your feedback here.

Tasty navigation recipes for common scenarios

For common but nuanced use cases, we have a recipes repository.
This shows how to combine the Nav3 APIs in a particular way, allowing you to choose or modify the recipe to your particular needs. If a recipe turns out to be popular, we'll consider "graduating" the non-nuanced parts of it into the core Nav3 library or add-on libraries. Figure 2. Useful code recipes can graduate into a library. There are currently 19 recipes, including for: * Multiple back stacks * Modularization and dependency injection * Passing navigation arguments to ViewModels (including using Koin) * Returning results from screens by events and by shared state We're currently working on a deeplinks recipe, plus a Koin integration, and have plenty of others planned. An engineer from JetBrains has also published a Compose Multiplatform version of the recipes. If you have a common use case that you'd like to see a recipe for, please file a recipe request. Summary To get started with Nav3, check out the docs and the recipes. Plus, keep an eye out for a whole week of technical content including: * A deep dive video on the API covering modularization, animations and adaptive layouts. * A live Ask Me Anything (AMA) with the engineers who built Nav3. Nav3 Spotlight Week starts Dec 1st 2025. As always, if you find any issues, please file them here.
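To show roughly what the end state of a migration looks like, here is a minimal sketch of a two-screen app using the Nav3 building blocks mentioned above (NavKey, rememberNavBackStack, entryProvider, NavDisplay). Parameter names and the onBack signature may differ slightly between versions, and HomeScreen and DetailScreen are placeholder composables, so treat this as an outline to check against the docs and recipes rather than copy-paste code.

import androidx.compose.runtime.Composable
import androidx.navigation3.runtime.NavKey
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.runtime.rememberNavBackStack
import androidx.navigation3.ui.NavDisplay
import kotlinx.serialization.Serializable

@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNav() {
    // The back stack is plain Compose state that you own and can modify directly.
    val backStack = rememberNavBackStack(Home)

    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> {
                HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) })
            }
            entry<Detail> { key ->
                DetailScreen(id = key.id)
            }
        }
    )
}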
android-developers.googleblog.com
November 19, 2025 at 9:13 PM
Stronger threat detection, simpler integration: Protect your growth with the Play Integrity API
_Posted by Dom Elliott – Group Product Manager, Google Play and Eric Lynch - Senior Product Manager, Android Security_ In the mobile ecosystem, abuse can threaten your revenue, growth, and user trust. To help developers thrive, Google Play offers a resilient threat detection service, Play Integrity API. Play Integrity API helps you verify that interactions and server requests are genuine—coming from your unmodified app on a certified Android device, installed by Google Play. The impact is significant: apps using Play integrity features see 80% lower unauthorized usage on average compared to other apps. Today, leaders across diverse categories—including Uber, TikTok, Stripe, Kabam, Wooga, Radar.com, Zimperium, Paytm, and Remini—use it to help safeguard their businesses. We’re continuing to improve the Play Integrity API, making it easier to integrate, more resilient against sophisticated attacks, and better at recovering users who don’t meet integrity standards or encounter errors with new Play in-app remediation prompts. ### Detect threats to your business The Play Integrity API offers verdicts designed to detect specific threats that impact your bottom line during critical interactions. * Unauthorized access: The accountDetails verdict helps you determine whether the user installed or paid for your app or game on Google Play. * Code tampering: The appIntegrity verdict helps you determine whether you're interacting with your unmodified binary that Google Play recognizes. * Risky devices and emulated environments: The deviceIntegrity verdict helps you determine whether your app is running on a genuine Play Protect certified Android device or a genuine instance of Google Play Games for PC. * Unpatched devices: For devices running Android 13 and higher, MEETS_STRONG_INTEGRITY response in the deviceIntegrity verdict helps you determine if a device has applied recent security updates. You can also opt in to deviceAttributes to include the attested Android SDK version in the response. * Risky access by other apps: The appAccessRiskVerdict helps you determine whether apps are running that could be used to capture the screen, display overlays, or control the device (for example, by misusing the accessibility permission). This verdict automatically excludes apps that serve genuine accessibility purposes. * Known malware: The playProtectVerdict helps you determine whether Google Play Protect is turned on and whether it has found risky or dangerous apps installed on the device. * Hyperactivity: The recentDeviceActivity level helps you determine whether a device has made an anomalously high volume of integrity token requests recently, which could indicate automated traffic and could be a sign of attack. * Repeat abuse and reused devices: deviceRecall (beta) helps you determine whether you're interacting with a device that you've previously flagged, even if your app was reinstalled or the device was reset. With device recall, you can customize the repeat actions you want to track. The API can be used across Android form factors including phones, tablets, foldables, Android Auto, Android TV, Android XR, ChromeOS, Wear OS, and on Google Play Games for PC. ### Make the most of Play Integrity API Apps and games have found success with the Play Integrity API by following the security considerations and taking a phased approach to their anti-abuse strategy. Step 1: Decide what you want to protect: Decide what actions and server requests in your apps and games are important to verify and protect. 
For example, you could perform integrity checks when a user is launching the app, signing in, joining a multiplayer game, generating AI content, or transferring money.

Step 2: Collect integrity verdict responses: Perform integrity checks at important moments to start collecting verdict data, without enforcement initially. That way you can analyze the responses for your install base and see how they correlate with your existing abuse signals and historical abuse data.

Step 3: Decide on your enforcement strategy: Decide on your enforcement strategy based on your analysis of the responses and what you are trying to protect. For example, you could challenge risky traffic at important moments to protect sensitive functionality. The API offers a range of responses so you can implement a tiered enforcement strategy based on the trust level you give to each combination of responses.

Step 4: Gradually roll out enforcement and support your users: Gradually roll out enforcement. Have a retry strategy for when verdicts have issues or are unavailable, and be prepared to support good users who have issues. The new Play in-app remediation prompts, described below, make it easier than ever to get users with issues back to a good state.

### NEW: Let Play recover users with issues automatically

Deciding how to respond to different integrity signals can be complex; you need to handle various integrity responses and API error codes (like network issues or outdated Play services). We're simplifying this with new Play in-app remediation prompts. You can show a Google Play prompt to your users to automatically fix a wide range of issues directly within your app. This reduces integration complexity, ensures a consistent user interface, and helps get more users back to a good state.

GET_INTEGRITY automatically _detects the issue (in this example, a network error) and resolves it._

You can trigger the GET_INTEGRITY dialog, available in Play Integrity API library version 1.5.0+, after a range of issues to automatically guide the user through the necessary fixes, including:

* Unauthorized access: GET_INTEGRITY guides the user back to a Play licensed response in accountDetails.
* Code tampering: GET_INTEGRITY guides the user back to a Play recognized response in appIntegrity.
* Device integrity issues: GET_INTEGRITY guides the user on how to get back to the MEETS_DEVICE_INTEGRITY state in deviceIntegrity.
* Remediable error codes: GET_INTEGRITY resolves remediable API errors, such as prompting the user to fix network connectivity or update Google Play services.

We also offer specialized dialogs including GET_STRONG_INTEGRITY (which works like GET_INTEGRITY while also getting the user back to the MEETS_STRONG_INTEGRITY state with no known malware issues in the playProtectVerdict), GET_LICENSED (which gets the user back to a Play licensed and Play recognized state), and CLOSE_UNKNOWN_ACCESS_RISK and CLOSE_ALL_ACCESS_RISK (which prompt the user to close potentially risky apps).

### Choose modern integrity solutions

In addition to Play Integrity API, Google offers several other features to consider as part of your overall anti-abuse strategy. Both Play Integrity API and Play's automatic protection offer user experience and developer benefits for safeguarding app distribution. We encourage existing apps to migrate to these modern integrity solutions instead of using the legacy Play licensing library.
Automatic protection: Prevent unauthorized access with Google Play’s automatic protection and ensure users continue getting your official app updates. Turn it on and Google Play will automatically add an installer check to your app’s code, with no developer integration work required. If your protected app is redistributed or shared through another channel, then the user will be prompted to get your app from Google Play. Eligible Play developers also have access to Play’s advanced anti-tamper protection, which uses obfuscation and runtime checks to make it harder and costlier for attackers to modify and redistribute protected apps. Android platform key attestation: Play Integrity API is the recommended way to benefit from hardware-backed Android platform key attestation. Play Integrity API takes care of the underlying implementation across the device ecosystem, Play automatically mitigates key-related issues and outages, and you can use the API to detect other threats. Developers who directly implement key attestation instead of relying on Play Integrity API should prepare for the upcoming Android Platform root certificate rotation in February 2026 to avoid disruption (developers using Play Integrity API do not need to take any action). Firebase App Check: Developers using Firebase can use Firebase App Check to receive an app and device integrity verdict powered by Play Integrity API on certified Android devices, along with responses from other platform attestation providers. To detect all other threats and use other Play features, integrate Play Integrity API directly. reCAPTCHA Enterprise: Enterprise customers looking for a complete fraud and bot management solution can purchase reCAPTCHA Enterprise for mobile. reCAPTCHA Enterprise uses some of Play Integrity API’s anti-abuse signals, and combines them with reCAPTCHA signals out of the box. ### Safeguard your business today With a strong foundation in hardware-backed security and new automated remediation dialogs simplifying integration, the Play Integrity API is an essential tool for protecting your growth. Get started with the Play Integrity API documentation.
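To make the phased approach above concrete, here is a minimal sketch of a classic integrity token request using the Play Integrity client library. The nonce generation and server-side verification are placeholders, and the exact builder and method names should be checked against the Play Integrity API documentation.

import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityVerdict(context: Context, nonceFromServer: String) {
    val integrityManager = IntegrityManagerFactory.create(context.applicationContext)

    val request = IntegrityTokenRequest.builder()
        .setNonce(nonceFromServer) // nonce generated by your server for this action
        .build()

    integrityManager.requestIntegrityToken(request)
        .addOnSuccessListener { response ->
            // Send the token to your server; the server asks Google Play to decrypt and
            // verify the verdict before allowing the protected action.
            sendTokenToServer(response.token())
        }
        .addOnFailureListener { e ->
            // Apply your retry / fallback strategy, or surface a remediation prompt.
        }
}

// Placeholder for your own backend call.
fun sendTokenToServer(token: String) { /* ... */ }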
android-developers.googleblog.com
November 19, 2025 at 9:13 PM
Deeper Performance Considerations
_Posted by Ben Weiss - Senior Developer Relations Engineer, Breana Tate - Developer Relations Engineer, Jossi Wolf - Software Engineer on Compose_

Compose yourselves and let us guide you through more background on performance. Welcome to day 3 of Performance Spotlight Week. Today we're continuing to share details and guidance on important areas of app performance. We're covering Profile Guided Optimization, Jetpack Compose performance improvements and considerations on working behind the scenes. Let's dive right in.

# Profile Guided Optimization

Baseline Profiles and Startup Profiles are foundational to improving an Android app's startup and runtime performance. They are part of a group of performance optimizations called Profile Guided Optimization.

When an app is packaged, the d8 dexer takes classes and methods and populates your app's classes.dex files. When a user opens the app, these dex files are loaded, one after the other, until the app can start. By providing a Startup Profile you let d8 know which classes and methods to pack into the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.

Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.

# Trello and Baseline Profiles

We asked engineers on the Trello app how Baseline Profiles affected their app's performance. After applying Baseline Profiles to their main user journey, Trello saw a significant 25% reduction in app startup time.

Trello was able to improve their app's startup time by 25% by using Baseline Profiles.

# Baseline Profiles at Meta

Also, engineers at Meta recently published an article on how they are accelerating their Android apps with Baseline Profiles. Across Meta's apps the teams have seen various critical metrics improve by up to 40% after applying Baseline Profiles.

Technical improvements like these help you improve user satisfaction and business success as well. Sharing this with your product owners, CTOs and decision makers can also help speed up your app's performance.

# Get started with Baseline Profiles

To generate either a Baseline or Startup Profile, you write a macrobenchmark test that exercises the app. During the test, profile data is collected which will be used during app compilation. The tests are written using the new UiAutomator API, which we'll cover tomorrow. Writing a benchmark like this is straightforward and you can see the full sample on GitHub.

@Test
fun profileGenerator() {
    rule.collect(
        packageName = TARGET_PACKAGE,
        maxIterations = 15,
        stableIterations = 3,
        includeInStartupProfile = true
    ) {
        uiAutomator {
            startApp(TARGET_PACKAGE)
        }
    }
}

## Considerations

Start by writing a macrobenchmark test that generates a Baseline Profile and a Startup Profile for the path most traveled by your users. This means the main entry point your users take into your app, which is usually after they have logged in. Then continue to write more test cases to capture a more complete picture; this applies only to Baseline Profiles. You do not need to cover everything with a Baseline Profile. Stick to the most used paths and measure performance in the field. More on that in tomorrow's post.
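To wire the generated profiles into a build, the app module typically applies the Baseline Profile Gradle plugin and depends on the profile-generator module. A minimal sketch is below; the :baselineprofile module name and the version numbers are assumptions based on the typical wizard-generated setup, so check them against your own project.

// app/build.gradle.kts, consuming module (names and versions are illustrative)
plugins {
    id("com.android.application")
    id("androidx.baselineprofile")
}

dependencies {
    // Needed so the generated profile is installed and compiled on user devices.
    implementation("androidx.profileinstaller:profileinstaller:1.4.1")
    // The com.android.test module that contains the profileGenerator test shown above.
    "baselineProfile"(project(":baselineprofile"))
}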
## Get started with Profile Guided Optimization

To learn how Baseline Profiles work under the hood, watch the Android Developers Summit video on the topic, and check out the Android Build Time episode on Profile Guided Optimization for another in-depth look. We also have extensive guidance on Baseline Profiles and Startup Profiles available for further reading.

# Jetpack Compose performance improvements

The UI framework for Android has seen the performance investment of the engineering team pay off. From version 1.9 of Jetpack Compose, scroll jank has dropped to 0.2% in an internal long-scrolling benchmark test. These improvements were made possible by several features packed into the most recent releases.

## Customizable cache window

By default, lazy layouts only compose one item ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize the amount of items to retain through a fraction of the viewport or a dp size. This helps your app perform more work upfront and, with pausable composition enabled, use the time available between frames more efficiently.

To start using customizable cache windows, instantiate a LazyLayoutCacheWindow and pass it to your lazy list or lazy grid. Measure your app's performance using different cache window sizes, for example 50% of the viewport. The optimal value will depend on your content's structure and item size.

val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
val state = rememberLazyListState(cacheWindow = dpCacheWindow)
LazyColumn(state = state) {
    // column contents
}

## Pausable composition

This feature allows compositions to be paused, and their work split up over several frames. The APIs landed in 1.9 and it is now used by default in 1.10 in lazy layout prefetch. You should see the most benefit with complex items that have longer composition times.

## More Compose performance optimizations

In versions 1.9 and 1.10 of Compose the team also made several optimizations that are a bit less obvious. Several APIs that use coroutines under the hood have been improved. For example, when using Draggable and Clickable, developers should see faster reaction times and improved allocation counts. Optimizations in layout rectangle tracking have improved the performance of Modifiers like onVisibilityChanged() and onLayoutRectChanged(). This speeds up the layout phase, even when not explicitly using these APIs. Another performance improvement is using cached values when observing positions via onPlaced().

## Prefetch text in the background

Starting with version 1.9, Compose adds the ability to prefetch text on a background thread. This enables you to pre-warm caches for faster text layout and is relevant for app rendering performance. During layout, text has to be passed into the Android framework, where a word cache is populated. By default this runs on the UI thread. Offloading prefetching and populating the word cache onto a background thread can speed up layout, especially for longer texts. To prefetch on a background thread, you can pass a custom executor to any composable that uses BasicText under the hood by providing a LocalBackgroundTextMeasurementExecutor through a CompositionLocalProvider, like so.
val defaultTextMeasurementExecutor = Executors.newSingleThreadExecutor()

CompositionLocalProvider(
    LocalBackgroundTextMeasurementExecutor provides defaultTextMeasurementExecutor
) {
    BasicText("Some text that should be measured on a background thread!")
}

Depending on the text, this can provide a performance boost to your text rendering. To make sure that it improves your app's rendering performance, benchmark and compare the results.

# Background work performance considerations

Background Work is an essential part of many apps. You may be using libraries like WorkManager or JobScheduler to perform tasks like:

* Periodically uploading analytical events
* Syncing data between a backend service and a database
* Processing media (e.g. resizing or compressing images)

A key challenge while executing these tasks is balancing performance and power efficiency. WorkManager allows you to achieve this balance. It's designed to be power-efficient, and allows work to be deferred to an optimal execution window influenced by a number of factors, including constraints you specify or constraints imposed by the system.

WorkManager is not a one-size-fits-all solution, though. Android also has a number of power-optimized APIs that are designed specifically with certain common Core User Journeys (CUJs) in mind. Reference the Background Work landing page for a list of just a few of these, including updating a widget and getting location in the background.

## Local Debugging tools for Background Work: Common Scenarios

To debug Background Work and understand why a task may have been delayed or failed, you need visibility into how the system has scheduled your tasks. To help with this, WorkManager has several related tools to help you debug locally and optimize performance (some of these work for JobScheduler as well). Here are some common scenarios you might encounter when using WorkManager, and an explanation of tools you can use to debug them.

## Debugging why scheduled work is not executing

Scheduled work being delayed or not executing at all can be due to a number of factors, including specified constraints not being met or constraints having been imposed by the system. The first step in investigating why scheduled work is not running is to confirm the work was successfully scheduled. After confirming the scheduling status, determine whether there are any unmet constraints or preconditions preventing the work from executing. There are several tools for debugging this scenario.

### Background Task Inspector

The Background Task Inspector is a powerful tool integrated directly into Android Studio. It provides a visual representation of all WorkManager tasks and their associated states (Running, Enqueued, Failed, Succeeded). To debug why scheduled work is not executing with the Background Task Inspector, consult the listed work status(es). An 'Enqueued' status indicates your work was scheduled, but is still waiting to run.

Benefits: Aside from providing an easy way to view all tasks, this tool is especially useful if you have chained work. The Background Task Inspector offers a graph view that can visualize whether a previous task's failure may have impacted the execution of the following task.

Background Task Inspector list view

Background Task Inspector graph view

### adb shell dumpsys jobscheduler

This command returns a list of all active JobScheduler jobs (which includes WorkManager Workers) along with specified constraints and system-imposed constraints. It also returns job history.
Use this if you want a different way to view your scheduled work and associated constraints. For WorkManager versions earlier than 2.10.0, adb shell dumpsys jobscheduler will return a list of Workers with this name:

[package name]/androidx.work.impl.background.systemjob.SystemJobService

If your app has multiple workers, updating to WorkManager 2.10.0 will allow you to see Worker names and easily distinguish between workers:

#WorkerName#@[package name]/androidx.work.impl.background.systemjob.SystemJobService

Benefits: This command is useful for understanding whether there were any system-imposed constraints, which you cannot determine with the Background Task Inspector. For example, this will return your app's standby bucket, which can affect the window in which scheduled work completes.

### Enable debug logging

You can enable custom logging to see verbose WorkManager logs, which have "WM-" attached to their tags.

Benefits: This gives you visibility into when work is scheduled, when constraints are fulfilled, and into lifecycle events, and you can consult these logs while developing your app.

### WorkInfo.StopReason

If you notice unpredictable performance with a specific worker, you can programmatically observe the reason your worker was stopped on the previous run attempt with WorkInfo.getStopReason. It's a good practice to configure your app to observe WorkInfo using getWorkInfoByIdFlow to identify whether your work is being affected by background restrictions, constraints, frequent timeouts, or even being stopped by the user. A sketch of this is shown at the end of this post.

Benefits: You can use WorkInfo.StopReason to collect field data about your workers' performance.

## Debugging WorkManager-attributed high wake lock duration flagged by Android vitals

Android vitals features an excessive partial wake locks metric, which highlights wake locks contributing to battery drain. You may be surprised to know that WorkManager acquires wake locks to execute tasks, and if the wake locks exceed the threshold set by Google Play, this can impact your app's visibility. How can you debug why there is so much wake lock duration attributed to your work? You can use the following tools.

### Android vitals dashboard

First confirm in the Android vitals excessive wake lock dashboard that the high wake lock duration is from WorkManager and not an alarm or other wake lock. You can use the Identify wake locks created by other APIs documentation to understand which wake locks are held due to WorkManager.

### Perfetto

Perfetto is a tool for analyzing system traces. When using it for debugging WorkManager specifically, you can view the "Device State" section to see when your work started, how long it ran, and how it contributes to power consumption. Under the "Device State: Jobs" track, you can see any workers that have been executed and their associated wake locks.

Device State section in Perfetto, showing CleanupWorker and BlurWorker execution.

### Resources

Consult the Debug WorkManager page for an overview of the available debugging methods for other scenarios you might encounter. And to try some of these methods hands-on and learn more about debugging WorkManager, check out the Advanced WorkManager and Testing codelab.

# Next steps

Today we moved beyond code shrinking and explored how the Android Runtime and Jetpack Compose actually render your app. Whether it's pre-compiling critical paths with Baseline Profiles or smoothing out scroll states with the new Compose 1.9 and 1.10 features, these tools focus on the feel of your app.
And we dove deep into best practices on debugging background work.

## Ask Android

On Friday we're hosting a live AMA on performance. Ask your questions now using #AskAndroid and get them answered by the experts.

## The challenge

We challenged you on Monday to enable R8. Today, we are asking you to generate one Baseline Profile for your app. With Android Studio Otter, the Baseline Profile Generator module wizard makes this easier than ever. Pick your most critical user journey—even if it's just your app startup and login—and generate a profile. Once you have it, run a Macrobenchmark to compare CompilationMode.None vs. CompilationMode.Partial. Share your startup time improvements on social media using #optimizationEnabled.

## Tune in tomorrow

You have shrunk your app with R8 and optimized your runtime with Profile Guided Optimization. But how do you prove these wins to your stakeholders? And how do you catch regressions before they hit production? Join us tomorrow for Day 4: The Performance Leveling Guide, where we will map out exactly how to measure your success, from field data in Play Vitals to deep local tracing with Perfetto.
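If you're taking today's challenge, a minimal sketch of a startup benchmark that compares compilation modes is below. It uses the Macrobenchmark APIs referenced this week; the package name and iteration count are placeholders.

import androidx.benchmark.macro.CompilationMode
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    // Run once with no profile and once with the Baseline Profile applied, then compare.
    @Test fun startupNoProfile() = startup(CompilationMode.None())
    @Test fun startupWithBaselineProfile() = startup(CompilationMode.Partial())

    private fun startup(compilationMode: CompilationMode) = benchmarkRule.measureRepeated(
        packageName = "com.example.app", // placeholder
        metrics = listOf(StartupTimingMetric()),
        compilationMode = compilationMode,
        startupMode = StartupMode.COLD,
        iterations = 10,
    ) {
        startActivityAndWait()
    }
}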
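And here is the WorkInfo.StopReason sketch referenced in the background work section above: observing a worker's WorkInfo via getWorkInfoByIdFlow and recording why its last attempt stopped. SyncWorker and the logging call are placeholders for your own worker and field telemetry.

import android.content.Context
import android.util.Log
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkInfo
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters
import kotlinx.coroutines.flow.collect

class SyncWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result = Result.success() // placeholder work
}

suspend fun enqueueAndObserve(context: Context) {
    val request = OneTimeWorkRequestBuilder<SyncWorker>().build()
    val workManager = WorkManager.getInstance(context)
    workManager.enqueue(request)

    workManager.getWorkInfoByIdFlow(request.id).collect { info ->
        // stopReason reports why the previous run attempt stopped (e.g. timeout,
        // background restrictions); it is only meaningful after a stop has occurred.
        info?.let { Log.d("WorkTelemetry", "state=${it.state} stopReason=${it.stopReason}") }
    }
}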
android-developers.googleblog.com
November 19, 2025 at 9:13 PM
How Uber is reducing manual logins by 4 million per year with the Restore Credentials API
_Posted by Niharika Arora - Senior Developer Relations Engineer at Google, Thomás Oliveira Horta - Android Engineer at Uber_

Uber is the world's largest ridesharing company, getting millions of people from here to there while also supporting food delivery, healthcare transportation, and freight logistics. Simplicity of access is crucial to its success; when users switch to a new device, they expect a seamless transition without needing to log back into the Uber app or go through SMS-based one-time password authentication. This frequent device turnover presents a challenge, as well as an opportunity for strong user retention.

To maintain user continuity, Uber's engineers turned to the Restore Credentials feature, an essential tool for a time when 40% of people in the United States replace their smartphone every year. Following an assessment of user demand and code prototyping, they introduced Restore Credentials support in the Uber rider app. To validate that restoring credentials helps remove friction for re-logins, the Uber team ran a successful A/B experiment for a five-week period. The integration led to a reduction in manual logins that, when projected across Uber's massive user base, is estimated to eliminate 4 million manual logins annually.

Eliminating login friction with Restore Credentials

The Restore Credentials API eliminates the multi-step manual sign-in process on new devices. There were past attempts at account restoration on new devices using solutions like regular data backup and BlockStore, though both solutions required sharing authentication tokens directly, from source device to destination device. Since token information is highly sensitive, these solutions are only used to some extent, to pre-fill login fields on the destination device and reduce some friction during the sign-in flows. Passkeys are also used to provide a secure and fast login method, but their user-initiated nature limits their impact on seamless device transitions.

"Some users don't use the Uber app on a daily basis, but they expect it will just work when they need it," said Thomás Oliveira Horta, an Android engineer at Uber. "Finding out you're logged out just as you open the app to request a ride on your new Android phone can be an unpleasant, off-putting experience."

With Restore Credentials, the engineers were able to bridge this gap. The API generates a unique token on the old device, which is seamlessly and silently moved to the new device when the user restores their app data during the standard onboarding process. This process leverages the Android OS's native backup and restore mechanism, ensuring the safe transfer of the restore key along with the app's data. The streamlined approach guarantees a simple and safe account transfer, meeting Uber's security requirements without any additional user input or development overhead.

Note: Restore keys and passkeys use the same underlying server implementation. However, when you save them in your database, you must differentiate between them. This distinction is crucial because user-created passkeys can be managed directly by the user, while restore keys are system-managed and hidden from the user interface.

"With the adoption of Restore Credentials on Uber's rider app, we started seeing consistent usage," Thomás said. "An average of 10,000 unique daily users have signed in with Restore Credentials in the current rollout stage, and they've enjoyed a seamless experience when opening the app for the first time on a new device.
We expect that number to double once we expand the rollout to our whole user base."

**Implementation Considerations**

"Integration was pretty easy with minor adjustments on the Android side by following the sample code and documentation," Thomás said. "Our app already used Credential Manager for passkeys, and the backend required just a couple of small tweaks. Therefore, we simply needed to update the Credential Manager dependency to its latest version to get access to the new Restore Credentials API. We created a restore key via the same passkey creation flow and when our app is launched on a new device, the app proactively checks for this key by attempting a silent passkey retrieval. If the restore key is found, it is immediately utilized to automatically sign the user in, bypassing any manual login."

Throughout the development process, Uber's engineers navigated a few challenges during implementation—from choosing the right entry point to managing the credential lifecycle on the backend.

Choosing the Restore Credentials entry point

The engineers carefully weighed the tradeoffs between a perfectly seamless user experience and implementation simplicity when selecting which Restore Credentials entry point to use for recovery. Ultimately, they prioritized a solution that offered an ideal balance. "This can take place during app launch or in the background during device restoration and setup, using BackupAgent," Thomás said. "The background login entry point is more seamless for the user, but it presented challenges with background operations and required usage of the BackupAgent API, which would have led to increased complexity in a codebase as large as Uber's." They decided to implement the feature during the first app launch, which was significantly faster than the manual login.

Addressing server-side challenges

A few server-side challenges arose during integration with the backend WebAuthn APIs, as their design assumed user verification would always be required, and that all credentials would be listed in a user's account settings; neither of these assumptions worked for the non-user-managed Restore Credential keys. The Uber team resolved this by making minor changes to the WebAuthn services, creating new credential types to distinguish passkeys from Restore Credentials and process them appropriately.

Managing the Restore Credentials lifecycle

Uber's engineers faced several challenges in managing the credential keys on the backend, with specialized support from backend engineer Ryan O'Laughlin:

* Preventing orphaned keys: A significant challenge was defining a strategy for deleting registered public keys to prevent them from becoming "orphaned." For example, uninstalling the app deletes the local credential, but because this action doesn't signal the backend, it leaves an unused key on the server.
* Balancing key lifespan: Keys needed a "time to live" that was long enough to handle edge cases. For example, if a user goes through a backup and restore, then manually logs out from the old device, the key is deleted from that old device. However, the key must remain valid on the server so the new device can still use it.
* Supporting multiple devices: Since a user might have multiple devices (and could initiate a backup and restore from any of them), the backend needed to support multiple Restore Credentials per user (one for each device).
Uber’s engineers addressed these challenges by establishing rules for server-side key deletion based on new credential registration and credential usage. The feature went from design to delivery in a rapid two-month development and testing process. Afterward, a five-week A/B experiment validated the feature with users, ran smoothly, and yielded clear results.

**Preventing user drop-off with Restore Credentials**

By eliminating manual logins on new devices, Uber retained users who might have otherwise abandoned the sign-in flow on a new device. This boost in customer ease was reflected in a wide array of improvements, and though they may seem slight at a glance, the impact is massive at the scale of Uber’s user base:

* 3.4% decrease in manual logins (SMS OTP, passwords, social login).
* 1.2% reduction in expenses for logins requiring SMS OTP.
* 0.575% increase in Uber’s access rate (% of devices that successfully reached the app home screen).
* 0.614% rise in devices with completed trips.

Today, Restore Credentials is well on its way to becoming a standard part of Uber’s rider app, with over 95% of users in the trial group registered.

Caption: During new device setup, users can restore app data and credentials from a backup. After the user selects Uber for restoration and the background process finishes, the app automatically signs the user in on the new device's first launch.

**The invisible yet massive impact of Restore Credentials**

In the coming months, Uber plans to expand the integration of Restore Credentials. Projecting from the trial’s results, they estimate the change will eliminate 4 million manual logins annually. By simplifying app access and removing a key pain point, they are actively building a more satisfied and loyal customer base, one ride at a time.

"Integrating Google's Restore Credentials allowed us to deliver the seamless 'it just works' experience our users expect on a new device," said Matt Mueller, Lead Product Manager (Core Identity) at Uber. "This directly translated to a measurable increase in revenue, proving that reducing login friction is key to user engagement and retention."

Ready to enhance your app's login experience? Learn how to facilitate a seamless login experience when switching devices with Restore Credentials and read more in the blog post. In the latest canary of Android Studio Otter you can validate your integration, as new features help mock the backup and restore mechanisms. If you are new to Credential Manager, you can refer to our official documentation, codelab and samples for help with integration.
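For teams evaluating a similar integration, here is a minimal, hedged sketch of the flow Uber describes, built on the androidx.credentials Credential Manager API. The request JSON comes from your WebAuthn backend, the function names are placeholders, and exact class and property names may vary slightly between library versions.

import android.content.Context
import androidx.credentials.CreateRestoreCredentialRequest
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetRestoreCredentialOption
import androidx.credentials.RestoreCredential

// Called after a successful sign-in: creates a restore key that is backed up
// and transferred along with the app's data.
suspend fun registerRestoreKey(context: Context, registrationJson: String) {
    val credentialManager = CredentialManager.create(context)
    credentialManager.createCredential(
        context,
        CreateRestoreCredentialRequest(registrationJson)
    )
}

// Called on first launch on a (possibly new) device: silently checks whether a
// restore key was transferred and, if so, returns the response for backend verification.
suspend fun trySilentSignIn(context: Context, authenticationJson: String): String? {
    val credentialManager = CredentialManager.create(context)
    return try {
        val result = credentialManager.getCredential(
            context,
            GetCredentialRequest(listOf(GetRestoreCredentialOption(authenticationJson)))
        )
        (result.credential as? RestoreCredential)?.authenticationResponseJson
    } catch (e: Exception) {
        // No restore key on this device: fall back to the regular login flow.
        null
    }
}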
android-developers.googleblog.com
November 19, 2025 at 9:13 PM
Configure and troubleshoot R8 Keep Rules
_Posted by Ajesh R Pai - Developer Relations Engineer & Ben Weiss - Senior Developer Relations Engineer_

In modern Android development, shipping a small, fast, and secure application is a fundamental user expectation. The Android build system's primary tool for achieving this is the R8 optimizer, the compiler that handles dead code and resource removal for shrinking, code renaming or minification, and app optimization. Enabling R8 is a critical step in preparing an app for release, but it requires developers to provide guidance in the form of "Keep Rules." After reading this article, check out the Performance Spotlight Week video on enabling, debugging and troubleshooting the R8 optimizer on YouTube.

# Why Keep Rules are needed

The need to write Keep Rules stems from a core conflict: R8 is a static analysis tool, but Android apps often rely on dynamic execution patterns like reflection or calls in and out of native code using the JNI (Java Native Interface). R8 builds a graph of used code by analyzing direct calls. When code is accessed dynamically, R8's static analysis cannot predict that, so it identifies the code as unused and removes it, leading to runtime crashes.

A keep rule is an explicit instruction to the R8 compiler, stating: "This specific class, method, or field is an entry point that will be accessed dynamically at runtime. You must keep it, even if you cannot find a direct reference to it." See the official guide for more details on Keep Rules.

# Where to write Keep Rules

Custom Keep Rules for an application are written in a text file. By convention, this file is named proguard-rules.pro and is located in the root of the app or library module. This file is then specified in the release build type of your module's build.gradle.kts file.

release {
    isShrinkResources = true
    isMinifyEnabled = true
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "proguard-rules.pro",
    )
}

### Use the correct default file

The getDefaultProguardFile method imports a default set of rules provided by the Android SDK. When using the wrong file your app might not be optimized. Make sure to use proguard-android-optimize.txt. This file provides the default Keep Rules for standard Android components and enables R8's code optimizations. The outdated proguard-android.txt only provides the Keep Rules but does not enable R8's optimizations. Because this is a serious performance problem, we are starting to warn developers about using the wrong file in Android Studio Narwhal 3 Feature Drop. And starting with the Android Gradle Plugin version 9.0 we're no longer supporting the outdated proguard-android.txt file. So make sure you upgrade to the optimized version.

# How to write Keep Rules

A keep rule consists of three main parts:

1. An option like -keep or -keepclassmembers
2. Optional modifiers like allowshrinking
3. A class specification that defines the code to match

For the complete syntax and examples, refer to the guidance to add Keep Rules.

# Keep Rule anti-patterns

It's important to know the best practices, but also the anti-patterns. These anti-patterns often arise from misunderstandings or troubleshooting shortcuts and can be catastrophic for a production build's performance.

## Global options

These flags are global toggles that should never be used in a release build. They are only for temporary debugging to isolate a problem. Using -dontoptimize effectively disables R8's performance optimizations, leading to a slower app.
When using -dontobfuscate you disable all renaming, and using -dontshrink turns off dead code removal. Both of these global rules increase app size. Avoid using these global flags in a production environment wherever possible for a more performant app user experience.

## Overly broad keep rules

The easiest way to nullify R8's benefits is to write overly broad Keep Rules. Keep rules like the one below instruct the R8 optimizer to not shrink, not obfuscate, and not optimize any class in this package or any of its sub-packages. This completely removes R8's benefits for that entire package. Try to write narrow and specific Keep Rules instead.

-keep class com.example.package.** { *; } // WIDE KEEP RULES CAUSE PROBLEMS

## The inversion operator (!)

The inversion operator (!) seems like a powerful way to exclude a package from a rule. But it's not that simple. Take this example:

-keep class !com.example.my_package.** { *; } // USE WITH CAUTION

You might think that this rule means "do not keep classes in com.example.my_package." But it actually means "keep every class, method and property in the entire application that is not in com.example.my_package." If that came as a surprise to you, it's best to check for any negations in your R8 configuration.

## Redundant rules for Android components

Another common mistake is to manually add Keep Rules for your app's Activities, Services, or BroadcastReceivers. This is unnecessary. The default proguard-android-optimize.txt file already includes the relevant rules for these standard Android components to work out of the box. Many libraries also bring their own Keep Rules, so you should not have to write your own rules for these. In case there is a problem with Keep Rules from a library you're using, it is best to reach out to the library author to see what the problem is.

# Keep Rule best practices

Now that you know what not to do, let's talk about best practices.

## Write narrow Keep Rules

Good Keep Rules should be as narrow and specific as possible. They should preserve only what is necessary, allowing R8 to optimize everything else.

Rule | Quality
---|---
-keep class com.example.** { *; } | Low: Keeps an entire package and its subpackages
-keep class com.example.MyClass { *; } | Low: Keeps an entire class, which is likely still too wide
-keepclassmembers class com.example.MyClass { private java.lang.String secretMessage; public void onNativeEvent(java.lang.String); } | High: Only relevant methods and properties from a specific class are kept

## Use common ancestors

Instead of writing separate Keep Rules for multiple different data models, write one rule that targets a common base class or interface. The rule below tells R8 to keep the fields of any class that implements this interface and is highly scalable.

# Keep all fields of any class that implements SerializableModel
-keepclassmembers class * implements com.example.models.SerializableModel {
    <fields>;
}

## Use Annotations to target multiple classes

Create a custom annotation (e.g., @Serialize) and use it to "tag" classes that need their fields preserved. This is another clean, declarative, and highly scalable pattern. You can create Keep Rules for already existing annotations from frameworks you're using as well.

# Keep all fields of any class annotated with @Serialize
-keepclassmembers class * {
    @com.example.annotations.Serialize <fields>;
}

# Choose the right Keep Option

The Keep Option is the most critical part of the rule. Choosing the wrong one can needlessly disable optimization.
Keep Option | What It Does
---|---
-keep | Prevents the class and members mentioned in the declaration from being removed or renamed.
-keepclassmembers | Prevents the specified members from being removed or renamed, but allows the class itself to be removed or renamed. The members are only kept on classes that are not otherwise removed.
-keepclasseswithmembers | A combination: Keeps the class and its members, but only if all the specified members are present.

You can find more about Keep Options in our documentation.

## Allow optimization with Modifiers

Modifiers like allowshrinking and allowobfuscation relax a broad -keep rule, giving optimization power back to R8. For example, if a legacy library forces you to use -keep on an entire class, you might be able to reclaim some optimization by allowing shrinking and obfuscation:

# Keep this class, but allow R8 to remove it if it's unused and allow R8 to rename it.
-keep,allowshrinking,allowobfuscation class com.example.LegacyClass

## Add global options for additional optimization

Beyond Keep Rules, you can add global flags to your R8 configuration file to encourage even more optimization. -repackageclasses is a powerful option that instructs R8 to move all obfuscated classes into a single package. This saves significant space in the DEX file by removing redundant package name strings. -allowaccessmodification allows R8 to widen access (e.g., private to public) to enable more aggressive inlining. This is now enabled by default when using proguard-android-optimize.txt.

Warning: Library authors must never add these global optimization flags to their consumer rules, as they would be forcibly applied to the entire app. And to make it even more clear, in version 9.0 of the Android Gradle Plugin we're going to start ignoring global optimization flags from libraries altogether.

# Best practices for libraries

Every Android app relies on libraries one way or another. So let's talk about best practices for libraries.

## For library developers

If your library uses reflection or JNI, it is your responsibility to provide the necessary Keep Rules to its consumers. These rules are placed in a consumer-rules.pro file, which is then automatically bundled inside the library's AAR file.

android {
    defaultConfig {
        consumerProguardFiles("consumer-rules.pro")
    }
    ...
}

## For library consumers

### Filter out problematic Keep Rules

If you must use a library that includes problematic Keep Rules, you can filter them out in your build.gradle.kts file starting with AGP 9.0. This tells R8 to ignore the rules coming from a specific dependency.

release {
    optimization.keepRules {
        // Ignore all consumer rules from this specific library
        it.ignoreFrom("com.somelibrary:somelibrary")
    }
}

### The best Keep Rule is no Keep Rule

The ultimate R8 configuration strategy is to remove the need to write Keep Rules altogether. For many apps this can be achieved by choosing modern libraries that favor code generation over reflection. With code generation, the optimizer can more easily determine what code is actually used at runtime and what code can be removed. Avoiding dynamic reflection also means there are no "hidden" entry points, and therefore no Keep Rules are needed. When choosing a new library, always prefer a solution that uses code generation over reflection. For more information, check out our guidance on how to choose libraries wisely.
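To make the reflection scenario that motivates all of this concrete, here is a small, hypothetical sketch. The class, package, and rule are made up for illustration; the point is that only the dynamically accessed class needs a narrow rule, while everything else stays fully optimizable.

package com.example.plugins

// Hypothetical plugin that is only ever instantiated via reflection, so R8's
// static analysis sees no direct reference to it and would otherwise remove it.
class CrashReporterPlugin : Runnable {
    override fun run() {
        // wire up crash reporting
    }
}

// Somewhere in startup code, the class is loaded by name rather than referenced directly:
fun installCrashReporter() {
    val plugin = Class.forName("com.example.plugins.CrashReporterPlugin")
        .getDeclaredConstructor()
        .newInstance() as Runnable
    plugin.run()
}

// Matching narrow rule in proguard-rules.pro: keeps only this class and its
// no-argument constructor instead of a package-wide -keep.
// -keep class com.example.plugins.CrashReporterPlugin { <init>(); }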
# Debugging and troubleshooting your R8 configuration

When R8 removes code it should have kept, or your APK is larger than expected, use these tools to diagnose the problem.

## Find duplicate and global Keep Rules

Because R8 merges rules from dozens of sources, it can be hard to know what the "final" ruleset is. Adding this flag to your proguard-rules.pro file generates a complete report:

# Outputs the final, merged set of rules to the specified file
-printconfiguration build/outputs/logs/configuration.txt

You can search this file to find redundant rules or trace a problematic rule (like -dontoptimize) back to the specific library that included it.

## Ask R8: Why are you keeping this?

If a class you expected to be removed is still in your app, R8 can tell you why. Just add this rule:

# Asks R8 to explain why it's keeping a specific class
-whyareyoukeeping class com.example.MyUnusedClass

During the build, R8 will print the exact chain of references that caused it to keep that class, allowing you to trace the reference and adjust your rules. For a full guide, check out the troubleshoot R8 section.

# Next steps

R8 is a powerful tool for enhancing Android app performance. Its effectiveness depends on a correct understanding of its operation as a static analysis engine. By writing specific, member-level rules, leveraging ancestors and annotations, and carefully choosing the right keep options, you can preserve exactly what is necessary. The most advanced practice is to eliminate the need for rules entirely by choosing modern, codegen-based libraries over their reflection-based predecessors.

As you follow along with Performance Spotlight Week, make sure to check out today's Spotlight Week video on YouTube and continue with our R8 challenge. Use #optimizationEnabled for any questions on enabling or troubleshooting R8. We're here to help.

It's time to see the benefits for yourself. We challenge you to enable R8 full mode for your app today.

1. Follow our developer guides to get started: Enable app optimization.
2. Check if you still use proguard-android.txt and replace it with proguard-android-optimize.txt.
3. Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.

We're confident you'll see a meaningful improvement in your app's performance. While you're at it, use the social tag #AskAndroid to bring your questions. Throughout the week our experts are monitoring and answering your questions.

Stay tuned for tomorrow, where we'll talk about Profile Guided Optimization with Baseline and Startup Profiles, share how Compose rendering performance improved over the past releases and share performance considerations for background work.
android-developers.googleblog.com
November 18, 2025 at 9:11 PM
Gemini 3 is now available for AI assistance in Android Studio
_Posted by Tor Norbye - Senior Director of Engineering_

The Gemini 3 Pro model, released today and engineered for better coding and agentic experiences, is now available for AI assistance in the latest version of Android Studio Otter. Android Studio is the best place for professional Android developers to use Gemini 3 for superior performance in Agent Mode, streamlined development workflows, and advanced problem solving capabilities. With agentic AI assistance to help you with boilerplate and complex development tasks, Android Studio helps you focus on what you do best—creating high quality apps for your users.

To get started with Gemini 3 Pro for Android development, download or update to the latest version of Android Studio Otter. For developers using Gemini in Android Studio at no cost (Default Model), we are rolling out limited access to Gemini 3 with a 1 million token context window. For higher usage rate limits and longer sessions with Agent Mode, use a Gemini API key to leverage Gemini 3 in Android Studio for the highest tier of AI capability.

Caption: Adding a Gemini API key in Android Studio

This week we’re rolling out Gemini 3 access for organizations, starting with users who have Gemini Code Assist Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console, and you’ll need to sign up for the waitlist.

Try Gemini 3 Pro in Android Studio, and let us and the Android developer community know what you think. You can follow us across LinkedIn, Blog, YouTube, and X. We can't wait to see what you build!
android-developers.googleblog.com
November 18, 2025 at 9:11 PM
How Reddit used the R8 optimizer for high impact performance improvements
_Posted by Ben Weiss - Senior Developer Relations Engineer_

In today's world of mobile applications, a seamless user experience is not just a feature—it's a necessity. Slow load times, unresponsive interfaces, and instability can be significant barriers to user engagement and retention. During their work with the Android Developer Relations team, the engineering team at Reddit used the App Performance Score to evaluate their app. After assessing their performance, they identified significant improvement potential and decided to take the steps to enable the full power of R8, the Android app optimizer. This focused initiative led to remarkable improvements in startup times, reductions in slow or frozen frames and ANRs, and an overall increase in Play Store ratings. This case study breaks down how Reddit achieved these impressive results.

# How the R8 Optimizer helped Reddit

The R8 Optimizer is a foundational tool for performance optimization on Android. It takes various steps to improve app performance. Let's take a quick look at the most impactful ones.

* Tree shaking is the most important step to reduce an app's size. Here, unused code from app dependencies and the app itself is removed.
* Method inlining replaces method calls with the actual code, making the app more performant.
* Class merging and other strategies are applied to make the code more compact. At this point it's not about human readability of source code any more, but about making compiled code run fast. So abstractions, such as interfaces or class hierarchies, don't matter here and will be removed.
* Identifier minification changes the names of classes, fields, and methods to shorter, meaningless names. So instead of MyDataModel you might end up with a class called a.
* Resource shrinking removes unused resources such as xml files and drawables to further reduce app size.

Caption: Main stages of R8 Optimization

# From hard data to user satisfaction: Identifying success in production

Reddit saw improved performance results immediately after a new version of the app was rolled out to users. By using Android Vitals and Crashlytics, Reddit was able to capture performance metrics on real devices with actual users, allowing them to compare the new release against previous versions.

Caption: How R8 improved Reddit's app performance

The team observed a 40% faster cold startup, a 30% reduction in "Application Not Responding" (ANR) errors, a 25% improvement in frame rendering, and a 14% reduction in app size. These enhancements are crucial for user satisfaction. A faster startup means less waiting and quicker access to content. Fewer ANRs lead to a more stable and reliable app, reducing user frustration. Smoother frame rendering removes UI jank, making scrolling and animations feel fluid and responsive. This positive technical impact was also clearly visible in user sentiment.

User satisfaction indicators of the optimization's success were directly visible on the Google Play Store. Following the rollout of the R8-optimized version, the team saw a dramatic and positive shift in user sentiment and engagement.

Drew Heavner: "Enabling R8's full potential took less than 2 weeks"

Most impressively, this was accomplished with a focused effort. Drew Heavner, the Staff Software Engineer at Reddit who worked on this initiative, noted that implementing the changes to enable R8's full potential took less than two weeks.
# Confirming the gains: A deep dive with macrobenchmarks

After observing the significant real-world improvements, Reddit's engineering team and the Android Developer Relations team at Google conducted detailed benchmarks to scientifically confirm the gains and experiment with further optimizations. For this analysis, Reddit engineering provided two versions of their app: one without optimizations and another that applied R8 and two more foundational performance optimization tools: Baseline Profiles and Startup Profiles.

Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.

When an app is packaged, the d8 dexer takes classes and methods and constructs your app's classes.dex files. When a user opens the app, these dex files are loaded, one after the other, until the app can start. By providing a Startup Profile you let d8 know which classes and methods to pack into the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.

Jetpack Macrobenchmark was the core tool for this phase, allowing for precise measurement of user interactions in a controlled environment. To simulate a typical user journey, they used the UIAutomator API to create a test that opened the app, scrolled down three times, and then scrolled back up. In the end, all that was needed to write the benchmark was this:

uiAutomator {
    startApp(REDDIT)
    repeat(3) {
        onView { isScrollable }.fling(Direction.DOWN)
    }
    repeat(3) {
        onView { isScrollable }.fling(Direction.UP)
    }
}

The benchmark data confirmed the field observations and provided deeper insights. The fully optimized app started 55% faster and users could begin to browse 18% sooner. The optimized app also showed a two-thirds reduction in Just in Time (JIT) compilation occurrences and a one-third decrease in JIT compilation time. Frame rendering improved, resulting in 19% more frames being rendered over the benchmarked user journey. Finally, the app's size was reduced by over a third.

Caption: Reddit's overall performance improvements

You can measure the JIT compilation time with a custom Macrobenchmark trace section metric like this:

val jitCompilationMetric = TraceSectionMetric("JIT Compiling %", label = "JIT compilation")

# Enabling the technology behind the transformation: R8

To enable R8 in full mode, you configure your app/build.gradle.kts file by setting isMinifyEnabled and isShrinkResources to true in the release build type.

android {
    ...
    buildTypes {
        release {
            isMinifyEnabled = true
            isShrinkResources = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "keep-rules.pro",
            )
        }
    }
}

This step has to be followed by holistic end-to-end testing, as performance optimizations can lead to unwanted behavior, which you want to catch before your users do. As shown earlier in this article, R8 performs extensive optimizations in order to maximize your performance benefits. R8 makes substantial modifications to the code, including renaming, moving, and removing classes, fields and methods. If you observe that these modifications cause errors, you need to specify which parts of the code R8 shouldn't modify by declaring those in keep rules.
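For context, here is a hedged sketch of how the journey and metrics above are typically wired into a full Macrobenchmark test. The package name and iteration count are placeholders, the scroll is expressed with the classic UiAutomator API rather than the DSL shown above, and the TraceSectionMetric is the one from the snippet above.

import androidx.benchmark.macro.FrameTimingMetric
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.TraceSectionMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.uiautomator.By
import androidx.test.uiautomator.Direction
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class FeedScrollBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun scrollFeed() = benchmarkRule.measureRepeated(
        packageName = "com.example.app", // placeholder: use your app's package name
        metrics = listOf(
            FrameTimingMetric(),
            TraceSectionMetric("JIT Compiling %", label = "JIT compilation")
        ),
        iterations = 10,
        startupMode = StartupMode.COLD
    ) {
        startActivityAndWait()
        // Equivalent of the scroll journey above: fling the first scrollable view down, then up.
        repeat(3) { device.findObject(By.scrollable(true)).fling(Direction.DOWN) }
        repeat(3) { device.findObject(By.scrollable(true)).fling(Direction.UP) }
    }
}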
# Follow Reddit's example in your app Reddit's success with R8 serves as a powerful case study for any development team looking to make a significant, low-effort impact on their app's performance. The direct correlation between the technical improvements and the subsequent rise in user satisfaction underscores the value of performance optimization. By following the blueprint laid out in this case study—using tools like the App Performance Score to identify opportunities, enabling R8's full optimization potential, monitoring real-world data, and using benchmarks to confirm and deepen understanding—other developers can achieve similar gains. To get started with R8 in your own app, refer to the freshly updated official documentation and guidance on enabling, configuring and troubleshooting the R8 optimizer.
android-developers.googleblog.com
November 17, 2025 at 9:11 PM
Use R8 to shrink, optimize, and fast-track your app
_Posted by Ben Weiss - Senior Developer Relations Engineer_

Welcome to day one of Android Performance Spotlight Week! We're kicking things off with the single most impactful, low-effort change you can make to improve your app's performance: enabling the R8 optimizer in full mode. You probably already know R8 as a tool to shrink your app's size by removing unused code and resources. But its real power, the one it's really g-R8 at, is as an optimizer. When you enable full mode and allow optimizations, R8 performs deep, whole-program optimizations, rewriting your code to be fundamentally more efficient. This isn't just a minor tweak. After reading this article, check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.

# How R8 makes your app more performant

Let's shine a spotlight on the largest steps that the R8 optimizer takes to improve app performance.

Tree shaking is the most important step to reduce app size. During this phase the R8 optimizer removes unused code from libraries that your app depends on as well as dead code from your own codebase.

Method inlining replaces a method call with the actual code, which improves runtime performance.

Class merging and other strategies are applied to make the code more compact. All your beautiful abstractions, such as interfaces and class hierarchies, don't matter at this point and are likely to be removed.

Code minification is used to change the names of classes, fields, and methods to shorter, meaningless ones. So instead of MyDataModel you might end up with a class called a. This is what causes the most confusion when reading stack traces from an R8 optimized app. (Note that we have improved this in AGP 9.0!)

Resource shrinking further reduces an app's size by removing unused resources such as xml files and drawables.

By applying these steps the R8 optimizer improves app startup times, enables smoother UI rendering with fewer slow and frozen frames, and improves overall on-device resource usage.

# Case Study: Reddit's performance improvements with R8

As one example of the performance improvements that R8 can bring, let's take a look at Reddit. After enabling R8 in full mode, the Reddit for Android app saw significant performance improvements in various areas.

Caption: How R8 improved Reddit's app performance

The team observed a 40% faster cold startup, a 30% reduction in "Application Not Responding" (ANR) errors, a 25% improvement in frame rendering, and a 14% reduction in app size. These enhancements are crucial for user satisfaction. A faster startup means less waiting and quicker access to content. Fewer ANRs lead to a more stable and reliable app, reducing user frustration. Smoother frame rendering removes UI jank, making scrolling and animations feel fluid and responsive. This positive technical impact was also clearly visible in user sentiment. You can read more about their improvements on our blog.

# Non-technical side effects of using R8

During our work with partners we have seen that these technical improvements have a direct impact on user satisfaction and can be reflected in user retention, engagement and session length. User stickiness, which can be measured with daily, weekly or monthly active users, has also been positively affected by technical performance improvements. And we've seen app ratings on the Play Store rise in correlation with R8 adoption.
Sharing this with your product owners, CTOs and decision makers can help get performance work prioritized for your app. So let's call it what it is: deliberate performance optimization is a virtue.

# Guiding you to a more performant app

We heard that our developer guidance for R8 needed to be improved. So we went to work. The developer guidance for the R8 optimizer is now much more actionable and provides comprehensive guidance to enable and debug R8. The documentation guides you on the high-level strategy for adoption, emphasizing the importance of choosing optimization-friendly libraries and, crucially, adopting R8's features incrementally to ensure stability. This phased approach allows you to safely unlock the benefits of R8 while providing you with guidance on difficult-to-debug issues.

We have significantly expanded our guidance on Keep Rules, which are the primary mechanism for controlling the R8 optimizer. We now provide a section on what Keep Rules are, how to apply them, and best practices for writing and maintaining them. We also provide practical and actionable use cases and examples, helping you understand how to correctly prevent R8 from removing code that is needed at runtime, such as code accessed via reflection or through the JNI native interface.

The documentation now also covers essential follow-up steps and advanced scenarios. We added a section on testing and troubleshooting, so you can verify the performance gains and debug any potential issues that arise. The advanced configurations section explains how to target specific build variants, customize which resources are kept or removed, and offers special optimization instructions for library authors, ensuring you can provide an optimized and R8-friendly package for other developers to use.

# Enable the R8 optimizer's full potential

The R8 optimizer has defaulted to "full mode" since version 8.0 of the Android Gradle Plugin. If your project has been developed over many years, it might still include a legacy flag to disable it. Check your gradle.properties file for this line and remove it.

android.enableR8.fullMode=false // delete this line to enable R8's full potential

Now check whether you have enabled R8 in your app's build.gradle.kts file for the release variant. It's enabled by setting isMinifyEnabled and isShrinkResources to true. You can also pass default and custom configuration files at this step.

release {
    isMinifyEnabled = true
    isShrinkResources = true
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "keep-rules.pro"
    )
}

# Case Study: Disney+ performance improvements

Engineers at Disney+ invest in app performance and are optimizing the app's user experience. Sometimes even seemingly small changes can make a huge impact. While inspecting their R8 configuration, the team found that the -dontoptimize flag was being used. It was brought in by a default configuration file, which is still used in many apps today.

After replacing proguard-android.txt with proguard-android-optimize.txt, the Disney+ team saw significant improvements in their app's performance. After a new version of the app containing this change was rolled out to users, Disney+ saw 30% faster app startup and 25% fewer user-perceived ANRs.

Today many apps still use the proguard-android.txt file, which contains the -dontoptimize flag. And that's where our tooling improvements come in.
# Tooling support

Starting with Android Studio Narwhal 3 Feature Drop, you will see a lint warning when using proguard-android.txt. And from AGP 9.0 onwards we are entirely dropping support for the file. This means you will have to migrate to proguard-android-optimize.txt.

We've also invested in new Android Studio features to make debugging R8-optimized code easier than ever. Starting in AGP 9.0 you can automatically de-obfuscate stack traces within Android Studio's logcat for R8-processed builds, helping you pinpoint the exact line of code causing an issue, even in a fully optimized app. This will be covered in more depth in tomorrow's blog post on this Android Performance Spotlight Week.

# Next Steps

Check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.

## 📣 Take the Performance Challenge!

It's time to see the benefits for yourself. We challenge you to enable R8 full mode for your app today.

1. Follow our developer guides to get started: Enable app optimization.
2. Check if you still use proguard-android.txt and replace it with proguard-android-optimize.txt.
3. Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after (see the startup benchmark sketch at the end of this post).

We're confident you'll see a meaningful improvement in your app's performance. Use #optimizationEnabled for any questions on enabling or troubleshooting R8. We're here to help.

## Bring your questions for the Ask Android session on Friday

Use the social tag #AskAndroid to bring any performance questions. Throughout the week we are monitoring your questions and will answer several in the Ask Android session on performance on Friday, November 21. Stay tuned for tomorrow, where we'll dive even deeper into debugging and troubleshooting. But for now, get started with R8 and get your app on the fast track.
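To close out the challenge, here is a minimal, hedged sketch of a before/after startup measurement with Jetpack Macrobenchmark. Run the same test against a build without R8 and one with full mode enabled; the package name and iteration count are placeholders.

import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app", // placeholder: your app's package name
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        // Measures time to initial and full display for a cold start.
        pressHome()
        startActivityAndWait()
    }
}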
android-developers.googleblog.com
November 17, 2025 at 9:11 PM
Get your app on the fast track with Android Performance Spotlight Week!
_Posted by Ben Weiss - Senior Developer Relations Engineer, Performance Paladin_ When working on new features, app performance often takes a back seat. However, while it's not always top of mind for developers, users can see exactly where your app's performance lags behind. When that new feature takes a long time to load or is slow to render, your users can become frustrated. And unhappy users are more likely to abandon the feature you spent so much time on. App performance is a core part of user experience and app quality, and recent studies and research shows that it's highly correlated with increased user satisfaction, higher retention, and better review scores. And we're here to help… Welcome to Android Performance Spotlight Week! All week long, we're providing you with low-effort, high-impact tools and guidance to get your app on the fast track to better performance. We help you lay the foundation and then dive deeper into helping your app become a better version of itself. The R8 optimizer and Profile Guided Optimizations are foundational tools to improve overall app performance. And that's why we just released significant improvements to Android Studio tooling for performance and with the Android Gradle Plugin 9.0 we're introducing new APIs to make it easier for you to do the right thing when configuring the R8 Android app optimizer. Jetpack Compose version 1.10, which is now in beta, ships with several features that improve app rendering performance. In addition to these updates, we're bringing you a refresher on improving app health and performance monitoring. Some of our partners are going to tell their performance improvement stories as well. Stay tuned to the blog all week as we'll be updating this post with a digest of all the content released. We're excited to share these updates and help you improve your app's performance. Here's a closer look at what we'll be covering: ## Monday: Deliberate performance optimization with R8 November 17, 2025 We're kicking off with a deep dive into the R8 optimizer. It's not just about shrinking your app's size, it's about gaining a fundamental understanding of how the R8 optimizer can improve performance in your app and why you should use it right away. We just published the largest overhaul of new technical guidance to date. The guides cover how to enable, configure and troubleshoot the R8 optimizer. On Monday you'll also see case studies from top partners showing the real-world gains they achieved. [ ](https://www.youtube.com/watch?v=QqO2jZ-NZko) [ ](https://www.youtube.com/watch?v=QqO2jZ-NZko) Read the blog post and developer guide. ## Tuesday: Debugging and troubleshooting R8 November 18, 2025 We tackle the "Why does my app crash after enabling R8?" question head-on. We know advanced optimization can sometimes reveal edge cases, so we're focusing on debugging and troubleshooting R8 related issues. We'll show you how to use new features in Android Studio to de-obfuscate stack traces, identify common configuration problems, and implement best practices to get the most out of R8. We want you to feel confident, not just hopeful, when you flip the switch. Content coming on November 18, 2025 ## Wednesday: Deeper performance considerations November 19, 2025 Mid-week, we explore high-impact performance offerings beyond the R8 optimizer. We'll show you how to supercharge your app's startup and interactions using Profile Guided Optimization with Baseline Profiles and Startup Profiles. They are ready and proven to deliver another massive boost. 
We also have exciting news on Jetpack Compose rendering performance improvements. Plus, we'll share how to optimize your app's health by managing background work effectively. Content coming on November 19, 2025 ## Thursday: Measure and improve November 20, 2025 It's not an improvement if you can't prove it. Thursday is dedicated to performance measurement. We'll share our complete guide, starting from local measurement and debugging with tools like Jetpack Macrobenchmark and the new UiAutomator API to capture jank and startup times, all the way to monitoring your app in the wild. You'll learn about Play Vitals and other new APIs to understand your real user performance and quantify your success. Content coming on November 20, 2025 ## Friday: Ask Android Live November 21, 2025 We cap off the week with an in-depth, live conversation. This is your chance to talk directly with the engineers and Developer Relations team who build and use these tools every day. We'll have a panel of experts from the R8 and other performance teams ready to answer your toughest questions live. Get your questions ready! Content coming on November 21, 2025 * * * ### 📣 Take the Performance Challenge! We're not just sharing guidance. We're challenging you to put it into action! Here's our challenge for you this week: Enable R8 full mode for your app. 1. Follow our developer guides to get started: Enable app optimization. 2. Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by using or adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after. We're confident you'll see a meaningful improvement in your app's performance. While you're at it, use the social tags #AskAndroid to bring your questions. Throughout the week our experts are monitoring and answering your questions.
android-developers.googleblog.com
November 17, 2025 at 9:11 PM
Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture
_Posted by Scott Nien, Software Engineer_

The CameraX team is thrilled to announce the release of version 1.5! This latest update focuses on bringing professional-grade capabilities to your fingertips while making the camera session easier to configure than ever before. For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices. On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions. Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let's dive into the details of these exciting new features.

## Powerful Video Recording: High-Speed and Feature Combinations

CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.

### Slow Motion & High Frame Rate Video

One of our most anticipated features, slow-motion video, is now available. You can now capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video. Implementing this is straightforward if you're familiar with the VideoCapture API.

1. Check for High-Speed Support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query if the device supports this feature.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)
if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}

2. Configure and Bind the Use Case: Use the returned capabilities (which contain the supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Set isSlowMotionEnabled to true to record slow-motion videos; otherwise it will record high-frame-rate videos. The final step is to use the regular Recorder.prepareRecording().start() to begin recording the video.

val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
    .getSupportedQualities(DynamicRange.SDR).first()
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(quality))
    .build()
val videoCapture = VideoCapture.withOutput(recorder)
val frameRateRange = cameraInfo.getSupportedFrameRateRanges(
    HighSpeedVideoSessionConfig(videoCapture, preview)
).first()
val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture,
    preview,
    frameRateRange = frameRateRange,
    // Set true for slow-motion playback, or false for high-frame-rate
    isSlowMotionEnabled = true
)
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, sessionConfig)

// Start recording slow motion videos.
val recording = recorder.prepareRecording(context, outputOption)
    .start(executor, {})

Compatibility and Limitations: High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a bad user experience.
Currently, this feature is supported on the rear cameras of almost all Pixel devices and select models from other manufacturers. Check the blog post for more details.

### Combine Features with Confidence: The Feature Group API

CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Based on Android 15's feature combination query API, you can now confidently enable multiple features together, guaranteeing a stable camera session. The Feature Group currently supports: HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on the Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom.

The Feature Group API enables two essential use cases:

Use Case 1: Prioritizing the Best Quality

If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}
processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)

In this example, CameraX tries to enable features in this order:

1. HDR + 60 FPS + Preview Stabilization
2. HDR + 60 FPS
3. HDR + Preview Stabilization
4. HDR
5. 60 FPS + Preview Stabilization
6. 60 FPS
7. Preview Stabilization
8. None

Use Case 2: Building a User-Facing Settings UI

You can now accurately reflect which feature combinations are supported in your app's settings UI, disabling toggles for unsupported options. To determine whether to gray out a toggle, use the following code to check for feature combination support. Initially, query the status of every individual feature. Once a feature is enabled, re-query the remaining features together with the enabled features to see if their toggles must now be grayed out due to compatibility constraints.

fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)
    if (!isSupported) {
        // disable the toggle for featureToCheck
    }
}

Please refer to the Feature Group blog post for more information.

### More Video Enhancements

* Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings), you can now set the CameraEffect that is applied to the final composition result.
* Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and allow the user to unmute later using Recording.mute(boolean muted).
* Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low storage situations and inform the user.
* Low Light Boost: On supported devices (like the Pixel 10 series), you can enable CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.

## Professional-Grade Image Capture

CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.

### Unleash Creative Control with DNG (RAW) Capture

For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing the DNG file alone, or capturing simultaneous JPEG and DNG outputs. See the sample code below for how to capture JPEG and DNG files simultaneously.

val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)
val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats.contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()
// ... bind imageCapture to lifecycle ...

// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */
imageCapture.takePicture(
    outputOptionRaw,
    outputOptionJpeg,
    executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }
        override fun onError(exception: ImageCaptureException) {}
    }
)

### Ultra HDR for Camera Extensions

Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and Samsung S24/S25 series.

// Support Ultra HDR when an Extension is enabled.
val extensionsEnabledCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)
val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector))
val imageCapture = ImageCapture.Builder().apply {
    if (imageCapabilities.supportedOutputFormats
            .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
        setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
    }
}.build()

## Core API and Usability Enhancements

### A New Way to Configure: SessionConfig

As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:

1. No More Manual unbind() Calls: CameraX APIs are lifecycle-aware and will implicitly "unbind" your use cases when the activity or other LifecycleOwner is destroyed. Previously, though, updating use cases or switching cameras still required you to call unbind() or unbindAll() before rebinding. Now with CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.
2. Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees the specified frame rate range will be applied upon successful configuration.
To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig). By passing the full SessionConfig, CameraX can accurately determine the supported ranges based on stream configurations. ### Camera-Compose is Now Stable We know how much you enjoy Jetpack Compose, and we're excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like moveableContentOf and Pager, as well as resolving a preview stretching issue. We will continue to add more features to camera-compose in future releases. ### ImageAnalysis and CameraControl Improvements * Torch Strength Adjustment: Gain fine-grained control over the device's torch with new APIs. You can query the maximum supported strength using CameraInfo.getMaxTorchStrengthLevel() and then set the desired level with CameraControl.setTorchStrengthLevel(). * NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis, simplifying integration with other libraries and APIs. This is enabled by invoking ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21). ## Get Started Today Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can't wait to see what you build. To use CameraX 1.5,  please add the following dependencies to your libs.versions.toml. (We recommend using 1.5.1 which contains many critical bug fixes and concurrent camera improvements.) [versions] camerax = "1.5.1" [libraries] .. androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" } androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" } androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" } androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" } androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" } androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" } And then add these to your module build.gradle.kts dependencies: dependencies { .. implementation(libs.androidx.camera.core) implementation(libs.androidx.camera.lifecycle) implementation(libs.androidx.camera.camera2) implementation(libs.androidx.camera.view) // for PreviewView implementation(libs.androidx.camera.compose) // for compose UI implementation(libs.androidx.camera.extensions) // For Extensions } Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report: * CameraX developers discussion group * File a bug
android-developers.googleblog.com
November 13, 2025 at 9:10 PM
#WeArePlay: Meet the game creators who entertain, inspire and spark imagination
_Posted by Robbie McLachlan, Developer Marketing_ _ _ In our latest #WeArePlay stories, we meet the game creators who entertain, inspire and spark imagination in players around the world on Google Play. From delivering action-packed 3D kart racing to creating a calming, lofi world for plant lovers - here are a few of our favourites: _Ralf and Matt, co-founders of Vector Unit_ _San Rafael (CA), U.S._ With over 557 million downloads, Ralf and Matt’s game, Beach Buggy Racing, brings the joy of classic, action-packed kart racing to gamers worldwide. After meeting at a California game company back in the late ’90s, Matt and Ralf went on to work at major studios. Years later, they reunited to form Vector Unit, a new company where they could finally have full creative freedom. They channeled their passion for classic kart-racers into Beach Buggy Racing, a vibrant 3D title that brought a console-quality feel to phones. The fan reception was immense, with players celebrating by baking cakes and dressing up for in-game events. Today, the team keeps Beach Buggy Racing 2 updated with global collaborations and is already working on a new prototype, all to fulfill their mission: sparking joy. _Camilla, founder of Clover-Fi Games_ _Batangas, Philippines_ Camilla’s game, Window Garden, lets players slow down by decorating and caring for digital plants. While living with her mother during the pandemic, tech graduate Camilla made the leap from software engineer to self-taught game developer. Her mom’s indoor plants sparked an idea: Window Garden. She created the lofi idle game to encourage players to slow down. In the game, players water flowers and fruits, and decorate cozy spaces in their own style. With over 1 million downloads to date, this simple loop has become a calming daily ritual since its launch. The game's success earned it a “Best of 2024” award from Google Play, and Camilla now hopes to expand her studio and collaborate with other creatives. _Rodrigo, founder of Kolb Apps_ _Curitiba, Brazil_ Rodrigo's game, Real Drum, puts a complete, realistic-sounding virtual drum set in your pocket, making it easy for anyone to play. Rodrigo started coding at just 12 years old, creating software for his family's businesses. This technical skill later combined with his hobby as an amateur musician. While pursuing a career in programming, he noticed a clear gap: there were no high-quality percussion apps. He united his two passions, technology and rhythm, to create Real Drum. The result is a realistic, easy-to-use virtual set that has amassed over 437 million downloads, letting people around the world play drums and cymbals without the noise. His game has made learning music accessible to many and inspired new artists. Now, Rodrigo's team plans to launch new apps for children to continue nurturing musical creativity. Discover other inspiring app and game founders featured in #WeArePlay.
android-developers.googleblog.com
November 13, 2025 at 9:11 PM
Android developer verification: Early access starts now as we continue to build with your feedback
_Posted by Matthew Forsythe Director - Product Management, Android App Safety_ We recently announced new developer verification requirements, which serve as an additional layer of defense in our ongoing effort to keep Android users safe. We know that security works best when it accounts for the diverse ways people use our tools. This is why we announced this change early: to gather input and ensure our solutions are balanced. We appreciate the community's engagement and have heard the early feedback – specifically from students and hobbyists who need an accessible path to learn, and from power users who are more comfortable with security risks. We are making changes to address the needs of both groups. To understand how these updates fit into our broader mission, it is important to first look at the specific threats we are tackling. Why verification is important Keeping users safe on Android is our top priority. Combating scams and digital fraud is not new for us — it has been a central focus of our work for years. From Scam Detection in Google Messages to Google Play Protect and real-time alerts for scam calls, we have consistently acted to keep our ecosystem safe. However, online scams and malware campaigns are becoming more aggressive. At the global scale of Android, this translates to real harm for people around the world – especially in rapidly digitizing regions where many are coming online for the first time. Technical safeguards are critical, but they cannot solve for every scenario where a user is manipulated. Scammers use high-pressure social engineering tactics to trick users into bypassing the very warnings designed to protect them. For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app — actually malware — intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account. While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale. We have already seen how effective this is on Google Play, and we are now applying those lessons to the broader Android ecosystem to ensure there is a real, accountable identity behind the software you install. Supporting students and hobbyists We heard from developers who were concerned about the barrier to entry when building apps intended only for a small group, like family or friends. We are using your input to shape a dedicated account type for students and hobbyists. This will allow you to distribute your creations to a limited number of devices without going through the full verification requirements. Empowering experienced users While security is crucial, we’ve also heard from developers and power users who have a higher risk tolerance and want the ability to download unverified apps. 
Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer. It will also include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands. We are gathering early feedback on the design of this feature now and will share more details in the coming months. Getting started with early access Today, we’re excited to start inviting developers to the early access for developer verification in Android Developer Console for developers that distribute exclusively outside of Play, and will share invites to the Play Console experience soon for Play developers. We are looking forward to your questions and feedback on streamlining the experience for all developers. Watch our video below for a walkthrough of the new Android Developer Console experience and see our guides for more details and FAQs. We are committed to working with you to keep the ecosystem safe while getting this right.
android-developers.googleblog.com
November 13, 2025 at 9:11 PM
Raising the bar on battery performance: excessive partial wake locks metric is now out of beta
_Posted by Karan Jhavar - Product Manager, Android Frameworks, Dan Brown - Product Manager, Google Play, and Eric Brenner - Software Engineer, Google Play_

A great user experience is built on a foundation of strong technical performance. We are committed to helping you create stable, responsive, and efficient apps that users love. Excessive battery drain is top of mind for your users, and together, we are taking significant steps to help you build more power-efficient apps. Earlier this year, we introduced a new beta metric in Android vitals, excessive partial wake locks, to help you identify and address sources of battery drain. This initial beta metric was co-developed in close collaboration with Samsung, combining their deep, real-world insights into how battery consumption affects user experience with Android's platform data. We want to thank you for providing invaluable feedback during the beta period. Powered by your input and our continued collaboration with Samsung, we have further refined the algorithm to be even more accurate and representative. We are excited to announce that this refined metric is now generally available as a new core vitals metric to all developers in Android vitals. We have defined a bad behavior threshold for excessive wake locks. Starting March 1, 2026, if your title does not meet this quality threshold, we may exclude the title from prominent discovery surfaces such as recommendations. In some cases, we may display a warning on your store listing to indicate to users that your app may cause excessive battery drain.

GOOGLE PLAY'S CORE TECHNICAL QUALITY METRICS

To maximize visibility on Google Play, keep your app below the bad behavior thresholds for these metrics.

| Metric | Definition |
| --- | --- |
| User-perceived crash rate | The percentage of daily active users who experienced at least one crash that is likely to have been noticeable |
| User-perceived ANR rate | The percentage of daily active users who experienced at least one ANR that is likely to have been noticeable |
| Excessive battery usage | The percentage of watch face sessions where battery usage exceeds 4.44% per hour |
| New: Excessive partial wake locks | The percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours |

_Excessive partial wake locks now joins the technical quality bars that Play expects all titles to maintain for a great user experience_

This is the first in a series of new metrics designed to provide deeper insight into your app's resource utilization, enabling you to improve the experience for your users across the entire Android ecosystem.

### 1. Aligning our definition of excessive wake locks with user expectations

Apps can hold wake locks to prevent the user's device from entering sleep mode, letting them perform background work while the screen is off. We consider a user session excessive if the app holds more than 2 cumulative hours of non-exempt wake locks in a 24 hour period. These excessive sessions are a heavy contributor to battery drain. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback or user-initiated data transfer. The bad behavior threshold is crossed when 5% of an app’s user sessions over the last 28 days are excessive. If your app exceeds this threshold, you will be alerted directly on your Android vitals overview page. You can find more information about our definition on the Android Developer pages.
_Android vitals will alert you to excessive wake lock issues and provide a table mapping wake lock tags to P90/P99 durations to help you identify the source by wake lock name._

To help you understand your app’s partial wake lock usage, we are enhancing the excessive partial wake locks page in Android vitals with a new wake lock names table. This table breaks down wake lock sessions by their specific tag names and durations, allowing you to easily identify long wake locks and then reproduce and debug them in your local development environment, such as Android Studio. You should investigate any wake locks with P90 or P99 durations above 60 minutes.

### 2. Excessive wake locks and their impact on Google Play visibility

If your title exceeds the bad behavior threshold for excessive wake locks, it may be ineligible for some discovery surfaces where users find new apps and games. In some cases, we may also show a warning on your store listing to inform users that your app may cause their device's battery to drain faster.

_Users may see a warning on your store listing if your app exceeds the bad behavior threshold. Note: The exact text and design are subject to change._

We know making technical changes to your app’s code and how it works can be time-consuming, so we are making the metric available for you to diagnose and fix potential issues now, with time before the store visibility changes begin on March 1, 2026.

### 3. What to do next

We encourage you to take the following steps to ensure your app delivers a great experience for users:

1. Visit Android vitals: Review your app's performance on the new excessive partial wake locks metric. The metric is now visible to all developers whose apps have wake lock sessions.
2. Discover excessive partial wake locks: Use the new wake lock names table to identify excessive partial wake locks.
3. Consult the documentation: For detailed guidance on best practices and fixing common issues, please check out our technical blog post, technical video and updated developer documentation on wake locks.

Thank you for your continued partnership in building high-quality, performant experiences that users can rely on every day.
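If the new metric flags your app, the first things to check are wake locks acquired without a timeout or released only on the happy path. The snippet below is a minimal sketch, not from the post: it acquires a clearly tagged partial wake lock with a timeout and releases it in a finally block. The tag is a hypothetical example (tags like this are what surface in the wake lock names table), and for deferrable background work WorkManager is usually a better fit than holding a wake lock yourself.

```kotlin
import android.content.Context
import android.os.PowerManager

// Minimal sketch: hold a partial wake lock only for as long as the work requires.
// The "myapp:upload" tag is hypothetical; it is the name that would appear in the
// Android vitals wake lock names table.
fun uploadWithWakeLock(context: Context, upload: () -> Unit) {
    val powerManager = context.getSystemService(PowerManager::class.java)
    val wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:upload")
    // Always pass a timeout so a bug cannot keep the CPU awake indefinitely.
    wakeLock.acquire(10 * 60 * 1000L /* 10 minutes */)
    try {
        upload()
    } finally {
        if (wakeLock.isHeld) wakeLock.release()
    }
}
```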
android-developers.googleblog.com
November 11, 2025 at 9:11 PM
#WeArePlay: Meet the people making apps & games to improve your health
_Posted by Robbie McLachlan - Developer Marketing_ In our latest #WeArePlay stories, we meet the founders building apps and games that are making health and wellness fun and easy for everyone on Google Play. From getting heavy sleepers jumping into their mornings, to turning mental wellness into an immersive adventure game. Here are a few of our favorites: _Jay, founder of Delightroom  _ _Seoul, South Korea  _ With over 90 million downloads, Jay‘s app Alarmy helps heavy sleepers to get moving with smart, challenge-based alarms. While studying computer science, Jay’s biggest challenge wasn’t debugging code, it was waking up for his morning classes. This struggle sparked an idea: what if there were an app that could help anyone get out of bed? Jay built a basic version and showcased it at a tech event, where it quickly drew attention. That prototype evolved into Alarmy, an app that uses creative missions, like solving math problems, doing squats, or snapping a photo, to get people moving so they fully wake up. Now available in over 30 languages and 170+ countries, Jay and his team are expanding beyond alarms, adding sleep tracking and wellness features to help even more people start their day right. _Ellie and Hazel_ _, co-founders of Mind Monsters Games   _ _Cambridge, UK_ Ellie and Hazel’s game, Betwixt, makes mental wellness more fun by using an interactive story to reduce anxiety. While working in London’s tech scene and later writing about psychology, Ellie noticed a pattern: many people turned to video games to ease stress but struggled to engage with traditional meditation. That’s when she came up with the idea to combine the two. While curating a book on mental health, she met Hazel—a therapist, former world champion boxer, and game lover and together they created Betwixt, an interactive fantasy adventure that guides players on a journey of self-discovery. By blending storytelling with evidence-based techniques, the game helps reduce anxiety and promote well-being. Now, with three new projects in development, Ellie and Hazel strive to turn play into a mental health tool. _Kevin and Robin, co-founders of MapMyFitness   _ _Boulder (CO), U.S.  _ Kevin and Robin’s app, MapMyFitness, helps a global community of runners and cyclists map their routes and track their training. Growing up across the Middle East, the Philippines, and Africa, Kevin developed a fascination with maps. In San Diego, while training for his second marathon, he built a simple MapMyRun website to map his routes. When other runners joined, former professional cyclist Robin reached out with a vision to also help cyclists discover and share maps. Together they founded MapMyFitness in 2007 and launched MapMyRide soon after, blending Kevin’s technical expertise and Robin's athletic know-how. Today, the MapMy suite powers millions of walkers, runners, and riders with adaptive training plans, guided workouts, live safety tracking, and community challenges—all in support of their mission to “get everybody outside". Discover more #WeArePlay stories from founders across the globe.
android-developers.googleblog.com
November 6, 2025 at 9:11 PM
Health Connect Jetpack v1.1.0 is now available!
_Posted by Brenda Shaw, Health & Home Partner Engineering Technical Writer_

Health Connect is Android’s on-device platform designed to simplify connectivity between health and fitness apps, allowing developers to build richer experiences with secure, centralized data. Today, we’re thrilled to announce major updates that empower you to create more intelligent, connected, and nuanced applications: the stable release of the Health Connect Jetpack library 1.1.0 and expanded device type support.

## Health Connect Jetpack Library 1.1.0 is Now Stable

We are excited to announce that the Health Connect Jetpack library has reached its 1.1.0 stable release. This milestone provides you with the confidence and reliability needed to build production-ready health and fitness experiences at scale. Since its inception, Health Connect has grown into a robust platform supporting over 50 different data types across activity, sleep, nutrition, medical records, and body measurements. The journey to this stable release has been marked by significant advancements driven by developer feedback. Throughout the alpha and beta phases, we introduced critical features like background reads for continuous data monitoring, historical data sync to provide users with a comprehensive long-term view of their health, and support for critical new data types like Personal Health Records, Exercise Routes, Training Plans, and Skin Temperature. This stable release encapsulates all of these enhancements, offering a powerful and dependable foundation for your applications.

## Expanded Device Type Support

Accurate data representation is key to building trust and delivering precise insights. To that end, we have significantly expanded the list of supported device types in Health Connect. This will be available in 1.2.0-alpha02. When data is written to the platform, specifying the source device provides crucial metadata that helps data readers understand its context and quality. The newly supported device types include:

* Consumer Medical Device: For over-the-counter medical hardware like Continuous Glucose Monitors (CGMs) and Blood Pressure Cuffs.
* Glasses: For smart glasses and other head-mounted optical devices.
* Hearables: For earbuds, headphones, and hearing aids with sensing capabilities.
* Fitness Machine: For stationary equipment like treadmills and indoor cycles, as well as outdoor equipment like bicycles.

This expansion ensures data is represented more accurately, allowing you to build more nuanced experiences based on the specific hardware used to record it.

## What's Next?

We encourage all developers to upgrade to the stable 1.1.0 Health Connect Jetpack library to take full advantage of these new features and improvements.

* Learn more in the official documentation and release notes.
* Provide feedback and report issues on our public issue tracker.

We are committed to the continued growth of the Health Connect platform. We can’t wait to see the incredible experiences you build!
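For orientation, here is a minimal sketch of reading recent step data with the stable Jetpack library. It is not taken from the announcement; the time range, permission set, and aggregation are illustrative, and the client would typically be obtained with HealthConnectClient.getOrCreate(context) after checking availability.

```kotlin
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.permission.HealthPermission
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Permission the app would request through the Health Connect permission flow.
val readStepsPermission = setOf(HealthPermission.getReadPermission(StepsRecord::class))

// Read the last 24 hours of step records and sum the counts (illustrative only).
suspend fun readStepsLast24h(client: HealthConnectClient): Long {
    val now = Instant.now()
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(now.minus(24, ChronoUnit.HOURS), now)
        )
    )
    return response.records.sumOf { it.count }
}
```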
android-developers.googleblog.com
November 3, 2025 at 8:13 PM
ML Kit’s Prompt API: Unlock Custom On-Device Gemini Nano Experiences
_Posted by Caren Chang, Developer Relations Engineer, Chengji Yan, Software Engineer, and Penny Li, Software Engineer_

AI is making it easier to create personalized app experiences that transform content into the right format for users. We previously enabled developers to integrate with Gemini Nano through ML Kit GenAI APIs tailored for specific use cases like summarization and image description. Today marks a major milestone for Android's on-device generative AI. We're announcing the Alpha release of the ML Kit GenAI Prompt API. This API allows you to send natural language and multimodal requests to Gemini Nano, addressing the demand for more control and flexibility when building with generative models. Partners like Kakao are already building with Prompt API, creating unique experiences with real-world impact. You can experiment with Prompt API's powerful features today with minimal code.

**Move beyond pre-built to custom on-device GenAI**

Prompt API moves beyond pre-built functionality to support custom, app-specific GenAI use cases, allowing you to create unique features with complex data transformation. Prompt API uses Gemini Nano on-device to process data locally, enabling offline capability and improved user privacy.

**Key use cases for Prompt API**

Prompt API allows for highly customized GenAI use cases. Here are some recommended examples:

* Image understanding: Analyzing photos for classification (e.g., creating a draft social media post or identifying tags such as "pets," "food," or "travel").
* Intelligent document scanning: Using a traditional ML model to extract text from a receipt, and then categorizing each item with Prompt API.
* Transforming data for the UI: Analyzing long-form content to create a short, engaging notification title.
* Content prompting: Suggesting topics for new journal entries based on a user’s preference for themes.
* Content analysis: Classifying customer reviews into a positive, neutral, or negative category.
* Information extraction: Extracting important details about an upcoming event from an email thread.

**Implementation**

Prompt API lets you create custom prompts and set optional generation parameters with just a few lines of code:

    Generation.getClient().generateContent(
        generateContentRequest(
            ImagePart(bitmapImage),
            TextPart("Categorize this image as one of the following: car, motorcycle, bike, scooter, other. Return only the category as the response."),
        ) {
            // Optional parameters
            temperature = 0.2f
            topK = 10
            candidateCount = 1
            maxOutputTokens = 10
        },
    )

For more detailed examples of implementing Prompt API, check out the official documentation and sample on GitHub.

**Gemini Nano, performance, and prototyping**

Prompt API currently performs best on the Pixel 10 device series, which runs the latest version of Gemini Nano (nano-v3). This version of Gemini Nano is built on the same architecture as Gemma 3n, the model we first shared with the open model community at I/O. The shared foundation between Gemma 3n and nano-v3 enables developers to more easily prototype features. For those without a Pixel 10 device, you can start experimenting with prompts today by prototyping with Gemma 3n locally or accessing it online through Google AI Studio. For the full list of devices that support GenAI APIs, refer to our device support documentation.

**Learn more**

Start implementing Prompt API in your Android apps today with guidance from our official documentation and the sample on GitHub.
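As a quick illustration of the text-only "content analysis" use case from the list above, here is a small sketch that reuses the Generation, generateContentRequest, and TextPart names from the snippet above. The helper function and parameter values are our own additions, and the exact shape of the returned response should be taken from the official documentation rather than this sketch.

```kotlin
// Sketch of the "content analysis" use case: classify a customer review on-device.
// Reuses the Prompt API calls shown above; the helper function and parameter values
// are illustrative, and the returned response object should be read as described in
// the official Prompt API documentation.
fun classifyReview(review: String) =
    Generation.getClient().generateContent(
        generateContentRequest(
            TextPart(
                "Classify the following customer review as positive, neutral, or negative. " +
                    "Return only the category.\n\nReview: $review"
            )
        ) {
            temperature = 0.1f  // keep the label deterministic
            maxOutputTokens = 5
        }
    )
```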
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
Kakao Mobility uses Gemini Nano on-device to reduce costs and boost call conversion by 45%
_Posted by Sa-ryong Kang and Caren Chang, Developer Relations Engineers_ _ _ Kakao Mobility is South Korea's leading mobility business, offering a range of transportation and delivery services, including taxi-hailing, navigation, bike and scooter-sharing, parking, and parcel delivery, through its Kakao T app. The team at Kakao Mobility utilized Gemini Nano via ML Kit’s GenAI Prompt API to offer parking assistance for its bike-sharing service and an improved address entry experience for its navigation and delivery services. The Kakao T app serves over 30 million total users, and its bike-sharing service is one of its most popular services. But unfortunately, many users were improperly parking the bikes or scooters when not in use. This behavior led to an influx of parking violations and safety concerns, resulting in public complaints, fines, and towing. These issues began to negatively affect public perception of both Kakao Mobility and its bike-sharing services. ** ** “By leveraging the ML Kit’s GenAI Prompt API and Gemini Nano, we were able to quickly implement features that improve social value without compromising user experience. Kakao Mobility will continue to actively adopt on-device AI to provide safer and more convenient mobility services.” — Wisuk Ryu, Head of Client Development Div To address these concerns, the team initially designed an image recognition model to notify users if their bike or scooter was parked correctly according to local laws and safety standards. Running this model through the cloud would have incurred significant server costs. In addition, the users’ uploaded photos contained information about their parking location, so the team wanted to avoid any privacy or security concerns. The team needed to find a more reliable and cost-effective solution. ** ** The team also wanted to improve the entity extraction experience for the parcel delivery service within the Kakao T app. Previously, users were able to easily order parcel delivery on a chat interface, but drivers needed to enter the address into an order form manually to initiate the delivery order—a process which was cumbersome and prone to human error. The team sought to streamline this process, making order forms faster and less frustrating for delivery personnel. ** ** Enhancing the user experience with ML Kit’s GenAI Prompt API ** ** The team tested and compared cloud-based Gemini models against Gemini Nano, accessed via ML Kit’s GenAI Prompt API. “After reviewing privacy, cost, accuracy, and response speed, ML Kit’s GenAI Prompt API was clearly the optimal choice,” said Jinwoo Park, Android application developer at Kakao Mobility. ** ** To address the issue of improperly parked bikes or scooters, the team used Gemini Nano's multimodal capability via the ML Kit GenAI API SDK to detect when a bike or scooter violates local regulations by parking on yellow tactile paving. With a carefully crafted prompt, they were able to evaluate more than 200 labeled images of parking photos while continually refining the inputs. This evaluation, measured through well-known metrics like accuracy, precision, recall, and the F1 score, ensured the feature met production-level quality and reliability standards. Now users can take a photo of their parked bike or scooter, and the app will inform them if it is parked properly, or provide guidance if it is not. The entire process happens in seconds on the device, protecting the user’s location and information. 
** ** ** ** To create a streamlined entity extraction feature, the team again used ML Kit's GenAI Prompt API to process users' delivery orders written in natural language. If they had employed traditional machine learning, it would have required a large learning dataset and special expertise in machine learning. Instead, they could simply start with a prompt like, "Extract the recipient's name, address, and phone number from the message." The team prepared around 200 high-quality evaluation examples, and evaluated their prompt through many rounds of iteration to get the best result. The most effective method employed was a technique called few-shot prompting, and the results were carefully analyzed to ensure the output contained minimal hallucinations. “ML Kit’s Prompt API reduces developer overhead while offering strong security and reliability on-device. It enables rapid prototyping, lowers infrastructure dependency, and incurs no additional cost. There is no reason not to recommend it.” — Jinwoo Park, Android application developer at Kakao Mobility ** ** Delivering big results with ML Kit’s GenAI Prompt API ** ** As a result, the entity extraction feature correctly identifies the necessary details of each order, even when multiple names and addresses are entered. To maximize the feature's reach and provide a robust fallback, the team also implemented a cloud-based path using Gemini Flash. ** ** Implementing ML Kit’s GenAI Prompt API has yielded a significant amount of cost savings for the Kakao Mobility team by shifting to on-device AI. While the bike parking analysis feature has not yet launched, the address entry improvement has already delivered excellent results: * Order completion time for delivery orders has been reduced by 24%. * The conversion rate has increased by 45% for new users and 6% for existing users. * During peak seasons, AI-powered orders increase by over 200%. ** ** “Small business owners in particular have shared very positive feedback, saying the feature has made their work much more efficient and significantly reduced stress,” Wisuk added. ** ** After the image recognition feature for bike and scooter parking launches, the Kakao Mobility team is eager to improve it further. Urban parking environments can be challenging, and the team is exploring ways to filter out unnecessary regions from images. ** ** “ML Kit’s GenAI Prompt API offers high-quality features without additional overhead,” said Jinwoo. “This reduced developer effort, shortened overall development time, and allowed us to focus on prompt tuning for higher-quality results.” ** ** Try ML Kit’s GenAI Prompt API for yourself Build and deploy on-device AI in your app with ML Kit’s GenAI Prompt API to harness the capabilities of Gemini Nano.
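The few-shot prompting technique mentioned above is easy to picture with a small template. This is a hypothetical sketch, not Kakao Mobility's production prompt: a couple of worked examples are prepended to the user's message before it is sent through a Prompt API request like the one shown in the Prompt API announcement.

```kotlin
// Hypothetical few-shot prompt for extracting delivery-order details.
// The examples, names, and field keys are illustrative only.
fun buildExtractionPrompt(userMessage: String): String = """
    Extract the recipient's name, address, and phone number from the message.
    Respond with JSON using the keys "name", "address", and "phone".

    Message: "Please send this to Minji Kim, 12 Teheran-ro, Gangnam-gu, Seoul, 010-1234-5678."
    Output: {"name": "Minji Kim", "address": "12 Teheran-ro, Gangnam-gu, Seoul", "phone": "010-1234-5678"}

    Message: "Deliver to Jiho Park at 45 Haeundae-ro, Busan. Phone 010-9876-5432."
    Output: {"name": "Jiho Park", "address": "45 Haeundae-ro, Busan", "phone": "010-9876-5432"}

    Message: "$userMessage"
    Output:
""".trimIndent()
```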
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
redBus uses Gemini Flash via Firebase AI Logic to boost the length of customer reviews by 57%
_Posted by  __Thomas Ezan, Developer Relations Engineer_ As the world's largest online bus ticketing platform, redBus serves millions of travelers across India, Southeast Asia, and Latin America. The service is predominantly mobile-first, with over 90% of all bookings occurring through its app. However, this presents a significant challenge in gathering helpful feedback from a user base that speaks dozens of different languages. Typing reviews is inconvenient for many users, and a review written in Tamil, for instance, offers little value to a bus operator who only speaks Hindi. To improve the quality and volume of user feedback, developers at redBus used Gemini Flash, a Google AI model providing low latency, to instantly transcribe and translate user voice recordings. To connect this powerful AI to their app without dealing with complex backend work, they used Firebase AI Logic. This new feature removed language barriers and simplified the review process, leading to a significant increase in user engagement and feedback quality. Simplifying user feedback with a voice-first approach The previous in-app review experience on redBus was text-based, which presented some key challenges. At our scale, reliable user reviews are critical: they build trust for travelers and give operators actionable insights. While our existing text-based system served us well, we found that customers often struggled to articulate their full experience, which resulted in our user feedback lacking the necessary detail and volume we needed to deliver maximum value to both travelers and operators. What's more, language barriers limited the usefulness of reviews, as reviews in one language were not helpful for users or bus operators who spoke another. "Our primary motivation was to leverage the expressive power of voice and overcome the language barrier to capture more authentic and detailed user feedback,” said Abhi Muktheeswarar, a senior tech lead in mobile engineering at redBus. The developer team wanted to create a frictionless, voice-first experience, so they designed a new flow where users could simply speak their review in their native language. To encourage adoption, the team implemented a prominent, animated mic button paired with a text mentioning: “Your voice matters, share your review in your own language.” This mention appears in the user’s native language, consistent with their app language settings. Using Gemini Flash, the application processes the user’s voice recording. It first transcribes the speech into text, then translates it into English, and finally analyzes the sentiment to automatically generate a star rating and predict relevant tags based on the review content. It then creates a concise summary and autofills the review form fields with the generated content. Developers chose Firebase AI Logic because it allowed them to build and ship the feature without the help from the backend team, dramatically reducing development time and complexity. “The Firebase AI SDK was a key differentiator because it was the only solution that empowered our frontend team to build and ship the feature independently,” Abhi explained. This approach enabled the team to go from concept to launch in just 30 days. During implementation, the engineers used structured output, enabling the Gemini Flash model to return well-formed JSON responses, including the transcription, translation, sentiment analysis, and star rating, making it easy to then populate the UI. This ensured a seamless user experience. 
Users are then shown both the original transcribed text in their own language and the translated, summarized version in English. Most importantly, the user is given full control to review and edit all AI-generated text and change the star rating before submitting the review. They can even speak again to add more content. **Driving engagement and capturing deeper user insights** The AI-powered voice review feature had a significant positive impact on user engagement. By enabling users to speak in their native language, redBus saw a 57% increase in review length and a notable increase in the overall volume of reviews. The new feature successfully engaged a segment of the user base that was previously hesitant to type a review. Since implementation, user feedback has been overwhelmingly positive: customers appreciate the accuracy of the transcription and translation, and find the AI-generated summaries to be a concise overview of their longer, more detailed reviews. Gemini Flash, although hosted in the cloud, delivered a highly responsive user experience. “A common observation from our partners and stakeholders has been that the level of responsiveness from our new AI feature is so fast and seamless that it feels like the AI is running directly on the device,” said Abhi. “This is a testament to the low latency of the Gemini Flash model, which has been a key factor in its success.” ** An easier way to build with AI** For the redBus team, the project demonstrated how Firebase AI Logic and Gemini Flash empower mobile developers to build features that would otherwise require backend implementation. This reduces dependency on server-side changes and allows developers to iterate quickly and independently. Following the success of the voice review feature, the team at redBus is exploring other use cases for on-device generative AI to further enhance their app. They also plan to use Google AI Studio to test and iterate on prompts moving forward. For Abhi, the lesson is clear: “It’s no longer about complex backend setups,” he said. “It’s about crafting the right prompt to build the next innovative feature that directly enhances the user experience.” **Get started** Learn more about how you can use Gemini and Firebase AI Logic to build generative AI features for your own app.
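To make the structured-output idea concrete, here is a rough sketch assuming the Firebase AI Logic Kotlin SDK (the com.google.firebase:firebase-ai artifact) and a Gemini Flash model. The schema fields mirror the ones described in the post, but the builder names, model identifier, and prompt are assumptions rather than redBus's actual implementation, and the audio recording and transcription input is omitted for brevity.

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.Schema
import com.google.firebase.ai.type.generationConfig

// Rough sketch (not redBus's code): ask a Gemini Flash model for well-formed JSON
// containing the fields the post describes. Builder names are assumptions based on
// the Firebase AI Logic Kotlin SDK; sending the recorded audio itself is omitted.
val reviewModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash", // assumed model identifier
    generationConfig = generationConfig {
        responseMimeType = "application/json"
        responseSchema = Schema.obj(
            mapOf(
                "transcription" to Schema.string(),
                "translation" to Schema.string(),
                "summary" to Schema.string(),
                "sentiment" to Schema.string(),
                "starRating" to Schema.integer(),
            )
        )
    }
)

// Illustrative call: translate, summarize, and rate an already-transcribed review.
suspend fun analyzeTranscribedReview(reviewText: String): String? =
    reviewModel.generateContent(
        "Translate this review to English, summarize it, classify its sentiment, " +
            "and suggest a 1-5 star rating:\n$reviewText"
    ).text
```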
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
New agentic experiences for Android Studio, new AI APIs, the first Android XR device and more, in our Fall episode of The Android Show
_Posted by Matthew McCullough, VP of Product Management, Android Developer_ We’re in an important moment where AI changes everything, from how we work to the expectations that users have for your apps, and our goal on Android is to transform this AI evolution into opportunities for you and your users. Today in our Fall episode of The Android Show, we unpacked a bunch of new updates towards delivering the highest return on investment in building for the Android platform. From new agentic experiences for Gemini in Android Studio to a brand new on-device AI API to the first Android XR device, there’s so much to cover - let’s dive in! ## Build your own custom Gen AI features with the new Prompt API On Android, we offer AI models on-device, or in the cloud.  Today, we’re excited to now give you full flexibility to shape the output of the Gemini Nano model by passing in any prompt you can imagine with the new Prompt API, now in Alpha. For flagship Android devices, Gemini Nano lets you build efficient on-device options where the users’ data never leaves their device. At I/O this May, we launched our on-device GenAI APIs using the Gemini Nano model, making common tasks easier with simple APIs for tasks like summarization, proofreading and image description. Kakao used the Prompt API to transform their parcel delivery service, replacing a slow, manual process where users had to copy and paste details into a form into just a simple message requesting a delivery, and the API automatically extracts all the necessary information. This single feature reduced order completion time by 24% and boosted new user conversion by an incredible 45%. ## Tap into Nano Banana and Imagen using the Firebase SDK When you want to add cutting-edge capabilities across the entire fleet of Android devices, our  cloud-based AI solutions with Firebase AI Logic are a great fit. The excitement for models like Gemini 2.5 Flash Image (a.k.a. Nano Banana) and Imagen have been incredible; now your users can now generate and edit images using Nano Banana, and then for finer control, like selecting and transforming specific parts of an image, users can use the new mask-based editing feature that leverages the Imagen model. See our blog post to learn more. And beyond image generation, you can also use Gemini multimodal capabilities to process text, audio and image input. RedBus, for example, revolutionized their user reviews using Gemini Flash via Firebase AI Logic to make giving feedback easier, more inclusive, and reliable. The old problem? Short, low-quality text reviews. The new solution? Users can now leave reviews using voice input in their native languages. From the audio Gemini Flash is then generating a structured text response enabling longer, richer and more reliable user reviews. It's a win for everyone: travelers, operators, and developers! ## Helping you be more productive, with agentic experiences in Android Studio Helping you be more productive is our goal with Gemini in Android Studio, and why we’re infusing AI across our tooling. Developers like Pocket FM have seen an impressive development time savings of 50%. With the recent launch of Agent Mode, you can describe a complex goal in natural language and (with your permission), the agent plans and executes changes on multiple files across your project. The agent’s answers are now grounded in the most modern development practices, and can even cross-reference our latest documentation in real time. 
We demoed new agentic experiences such as updates to Agent Mode, the ability to upgrade APIs on your behalf, the new project assistant, and we announced you’ll be able to bring any LLM of your choice to power the AI functionality inside Android Studio, giving you more flexibility and choice on how you incorporate AI into your workflow. And for the newest stable features such as Back Up and Sync, make sure to download the latest stable version of Android Studio. ## Elevating AI-assisted Android development, and improving LLMs with an Android benchmark Our goal is to make it easier for Android developers to build great experiences. With more code being written by AI, developers have been asking for models that know more about Android development. We want to help developers be more productive, and that’s why we’re building a new task set for LLMs against a range of common Android development areas. The goal is to provide LLM makers with a benchmark, a north star of high quality Android development, so Android developers have a range of helpful models to choose for AI assistance. To reflect the challenges of Android development, the benchmark is composed of real-world problems sourced from public GitHub Android repositories. Each evaluation attempts to have an LLM recreate a pull request, which are then verified using human authored tests. This allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. We’re finalizing the task set we’ll be testing against LLMs, and will be sharing the results publicly in the coming months. We’re looking forward to seeing how this shapes AI assisted Android development, and the additional flexibility and choice it gives you to build on Android. ## The first Android XR device: Samsung Galaxy XR Last week was the launch of the first in a new wave of Android XR devices: the Galaxy XR, in partnership with Samsung. Android XR devices are built entirely in the Gemini era, creating a major new platform opportunity for your app. And because Android XR is built on top of familiar Android frameworks, when building adaptively, you’re already building for XR. To unlock the full potential of Android XR features, you can use the Jetpack XR SDK. The Calm team provides a perfect example of this in action. They successfully transformed their mobile app into an immersive spatial experience, building their first functional XR menus on the first day and a core XR experience in just two weeks by leveraging their existing Android codebase and the Jetpack XR SDK.  You can read more about Android XR from our Spotlight Week last week. ## Jetpack Navigation 3 is in Beta The new Jetpack Navigation 3 library is now in beta! Instead of having behavior embedded into the library itself, we’re providing ‘how-to recipes’ with good defaults (nav3 recipes on github). Out of the box, it’s fully customizable, has animation support and is adaptive. Nav 3 was built from the ground up with Compose State as a fundamental building block. This means that it fully buys into the declarative programming model - you change the state you own and Nav3 reacts to that new state. On the Compose front, we’ve been working on making it faster and easier for you to build UI, covering the features you told us you needed from Views, while at the same time ensuring that Compose is performant. 
## Accelerate your business success on Google Play With AI speeding up app development, Google Play is streamlining your workflow in Play Console so that your business growth can keep up with your code. The reimagined, goal-oriented app dashboard puts actionable metrics front and center. Plus, new capabilities are making your day-to-day operations faster, smarter, and more efficient: from pre-release testing with deep links validation to AI-powered analytics summaries and app strings localization. These updates are just the beginning. Check out the full list of announcements to get the latest from Play. ## Watch the Fall episode of The Android Show Thank you for tuning into our Fall episode of The Android Show. We're excited to continue building great things together, and this show is an important part of our conversation with you. We'd love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts,  Rebecca Gutteridge and Adetunji Dahunsi, for helping us share the latest updates.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
New tools and programs to accelerate your success on Google Play
_Posted by Paul Feng, VP of Product Management, Google Play_ _ _ _ _ _ _ Last month, we shared new updates showcasingour evolving vision for Google Play: a place where people can discover the content and experiences they love and where you can build and grow sustainable businesses. Our commitment to your success is at the heart of our continued investments. Today, we're excited to introduce a new bundle of tools and programs designed to enhance your productivity and accelerate your growth. From simplifying technical integration and localization, to offering deeper insights and creating powerful new ways to engage your audience these features will help streamline your development lifecycle. Watch our latest updates in The Android Show segment below or continue reading. You can also catch up on our latest Android developments by watching the full show. **Streamline your development and operations with new tools ** We're launching new tools to remove friction from your tedious development tasks by helping you validate deep links and scale to new markets with Gemini-powered AI. Simplify deep link validation with a built-in emulator Troubleshooting deep links can be complex and time-consuming so we’re excited to launch a new, streamlined experience that allows you to instantly validate your deep links directly within Play Console. This means you can use a built-in emulator to test a deep link and immediately see the expected user experience on the spot, just as if someone clicked the URL on a real device. Instantly validate your deep links using the new built-in emulator **Reach a global audience with Gemini-powered localization ** We’re making it easier to bring your app or game to a global audience by simplifying localization. With our latest translation service, we've integrated the power of Gemini into Play Console to offer high-quality translations for your app strings, at no cost. This service automatically translates new app bundles into your selected languages, accelerating your title to new markets. Most importantly, you always remain in full control with the ability to preview the translated app with a built-in emulator and easily edit or disable translations. **Drive growth and engagement with AI-powered insights and You tab ** We're launching new ways to help you reach and retain users, Including AI-powered insights and the new You tab for re-engagement. **Get faster insights with automated chart summaries** To help you spend less time interpreting data and more time acting on key insights, a new Gemini-powered feature on the Statistics page automatically generates descriptions of your charts. These summaries help you quickly understand key trends and events that might be affecting your metrics. For developers who use a screen reader, this feature also provides access to reporting in a way you haven't had before. Get faster insights with new Gemini-powered chart summaries Access objective-related metrics and actionable advice for audience growth Earlier this year, we launched objective-based overview pages in Play Console to consolidate your key metrics, app performance, and actionable steps across essential workflows. With dedicated pages for Test & Release, Monitor & Improve, and Monetize with Play already live, we're excited to announce the full completion of this toolkit. The new Grow users overview page is now available, giving you a comprehensive, tailored view to help you acquire new users and expand your reach. 
_ _Track your key audience growth metrics on the new "Grow users" overview page_ _ **Boost re-engagement with the You tab** Last month, we launched You tab, a brand new, personalized destination on the Play Store. This is where users can discover and re-engage with content from their favorite apps and games with curated rewards, subscriptions, recommendations, and updates all in one place. App developers can take advantage of this personalized destination by integrating with Engage SDK. This integration allows you to help people pick up right where they left off—like resuming a movie or playlist— or get personalized recommendations, all while seamlessly guiding them back into your app. Game developers can use this surface to showcase timely in-game events, content updates, and special offers, making it easy for players to jump right back into the action. Promotional content, YouTube video listings, and Play Points coupons are now open to all game developers for creating a rich presence on the You tab. The availability of these powerful re-engagement tools is part of our broader commitment to game quality through the new Google Play Games Level Up program. Learn more about the program's guidelines here. _Showcase in-game events and offers on the new You tab_ **Optimize your monetization strategy and track performance ** We're launching powerful new ways to configure your one-time products and track the full impact of your Play Points promotions with a new, consolidated reporting page. **Simplify catalog management for one-time products** Earlier this year, we introduced more flexible ways to configure one-time purchases. You can now offer your in-app products as limited-time rentals, and sign up for our early access program to get started with pre-orders. We've also launched a new taxonomy, building on our existing subscription model, to help you manage your catalog more efficiently. This new model unlocks significant flexibility to help you reach a wider audience and cater to different user preferences by letting you offer the same item in multiple ways. For example, you can sell an item in one country and rent it in another—helping Play better surface relevant offerings to users. Explore these new capabilities today in Play Console. _Manage your catalog more efficiently with new ways to configure one-time products_ ** ** Understand the impact and performance of Play Points promotions With Play Points recently opened to all eligible titles, you can now better understand the impact of your promotions. The new Play Points page in Play Console lets you see the total revenue, buyers and acquisitions that all Play Points promotions have generated. This reporting covers both your developer-created offers, as well as new reporting for Google-funded Play Points promotions, which includes direct and post-promotion performance metrics. New reporting for Play Points promotions The features announced today are more than just updates; they are the building blocks of a powerful growth engine for your business. We hope you start exploring these new capabilities today and continue sharing feedback so we can build the tools you need to build a thriving, sustainable business on Google Play.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
How Calm Reimagined Mindfulness for Android XR
_Posted by Stevan Silva , Sr. Product Manager, Android XR_ Calm is a leading mental health and wellness company with over 180 million downloads. When they started their development for Android XR, their core engineering team was able to build their first functional XR orbiter menus on Day 1 and a core experience in just two weeks. This demonstrates that building for XR can be an extension of existing Android development work, not something that has to be started from scratch. As a company dedicated to helping users sleep better, stress less, and live more mindfully, their extensive library has made Calm a trusted source for well-being content on Android. With the introduction of the Android XR platform, the Calm team saw an opportunity to not just optimize their existing Android app, but to truly create the next generation of immersive experiences. We sat down with Kristen Coke, Lead Product Manager, and Jamie Martini, Sr. Manager of Engineering at Calm, to dive into their journey building for Android XR and learn how other developers can follow their lead. Q: What was the vision for the Calm experience on Android XR, and how does it advance your mission? A (Kristen Coke, Lead Product Manager): Our mission is to support everyone on every step of their mental health journey. XR allows us to expand how people engage with our mindfulness content, creating an experience that wasn’t just transportive but transformative. If I had to describe it in one sentence, Calm on Android XR reimagines mindfulness for the world around you, turning any room into a fully immersive, multisensory meditation experience. We wanted to create a version of Calm that couldn’t exist anywhere else, a serene and emotionally intelligent sanctuary that users don't just want to visit, but will return to again and again. Q: For developers who might think building for XR is a massive undertaking, what was your initial approach to bringing your existing Android app over? A (Jamie Martini, Sr. Manager of Engineering): Our main goal was to adapt our Android app for XR and honestly, the process felt easy and seamless. We already use Jetpack Compose extensively for our mobile app, so expanding that expertise into XR was the natural choice. It felt like extending our Android development, not starting from scratch. We were able to reuse a lot of our existing codebase, including our backend, media playback, and other core components, which dramatically cut down on the initial work. The Android XR design guides provided valuable context throughout the process, helping both our design and development teams shape Calm’s mobile-first UX into something natural and intuitive for a spatial experience. Q: You noted the process felt seamless. How quickly was your team able to start building and iterating on the core XR experience? A (Jamie Martini, Sr. Manager of Engineering): We were productive right away, building our first orbiter menus on day one and a core XR Calm experience in about two weeks. The ability to apply our existing Android and Jetpack experience directly to a spatial environment gave us a massive head start, making the time-to-first-feature incredibly fast. Q: Could you tell us about what you built to translate the Calm experience into this new spatial environment? A (Jamie Martini, Sr. Manager of Engineering): We wanted to take full advantage of the immersive canvas to rethink how users engage with our content. Two of the key features we evolved were the Immersive Breathe Bubble and the Immersive Scene Experiences. 
The Breathe Bubble is our beloved breathwork experience, but brought into 3D. It’s a softly pulsing orb that anchors users to their breath with full environmental immersion. And with our Immersive Scene Experiences, users can choose from a curated selection of ambient environments designed to gently wrap around them and fade into their physical environment. This was a fantastic way to take a proven 2D concept (the mobile app’s customizable background scenes) and transform it for the spatial environment. We didn't build new experiences from scratch; we simply evolved core, proven features to take advantage of the immersive canvas. ** ** Q: What were the keys to building a visually compelling experience that feels native to the Android XR platform? ** ** A (Kristen Coke, Lead Product Manager): Building for a human-scale, spatial environment required us to update our creative workflow. ** ** We started with concept art to establish our direction, which we then translated into 3D models using a human-scale reference to ensure natural proportions and comfort for the user. ** ** Then, we consistently tested the assets directly in a headset to fine-tune scale, lighting, and atmosphere. For developers who may not have a physical device, the Android XR emulator is a helpful alternative for testing and debugging. ** ** We quickly realized that in a multisensory environment, restraint was incredibly powerful. We let the existing content (the narration, the audio) amplify the environment, rather than letting the novelty of the 3D space distract from the mindfulness core. ** ** Q: How would you describe the learning curve for other developers interested in building for XR? Do you have any advice? ** ** A (Jamie Martini, Sr. Manager of Engineering) : This project was the first step into immersive platforms for our Android engineering team, and we were pleasantly surprised. The APIs were very easy to learn and use and felt consistent with other Jetpack libraries. ** ** My advice to other developers? Begin by integrating the Jetpack XR APIs into your existing Android app and reusing as much of your existing code as possible. That is the quickest way to get a functional prototype. A (Kristen Coke, Lead Product Manager): Think as big as possible. Android XR gave us a whole new world to build our app within. Teams should ask themselves: What is the biggest, boldest version of your experience that you could possibly build? This is your opportunity to finally put into action what you’ve always wanted to do, because now, you have the platform that can make it real. Building the next generation of spatial experiences The work the Calm team has done showcases how building on the Android XR platform can be a natural extension of your existing Android expertise. By leveraging the Jetpack XR SDKs, Calm quickly evolved their core mobile features into a stunning spatial experience. If you’re ready to get started, you can find all the resources you need at developer.android.com/xr. Head over there to download the latest SDK, explore our documentation, and start building today.
android-developers.googleblog.com
October 31, 2025 at 2:11 AM
Introducing Cahier: A new Android GitHub sample for large screen productivity and creativity
_Posted by Chris Assigbe, Android Developer Relations Engineer_

Ink API is now in beta and is ready to be integrated into your app. This milestone was made possible by valuable developer feedback, leading to continuous improvements in the API's performance, stability, and visual quality. Google apps such as Google Docs, Pixel Studio, Google Photos, Chrome PDF, and YouTube Effect Maker, as well as unique features on Android such as Circle to Search, all use the latest APIs. To mark this milestone, we're excited to announce the launch of Cahier, a comprehensive note-taking app sample optimized for Android devices of all sizes, particularly tablets and foldable phones.

## What is Cahier?

Cahier ("notebook" in French) is a sample app designed to demonstrate how you can build an application that enables users to capture and organize their thoughts by combining text, drawings, and images. The sample can serve as the go-to reference for enhancing user productivity and creativity on large screens. It showcases best practices for building such experiences, accelerating developer understanding and adoption of related powerful APIs and techniques. This post walks you through the core features of Cahier, key APIs, and the architectural decisions that make the sample a great reference for your own apps.

Key features demonstrated in the sample include:

* Versatile note creation: Shows how to implement a flexible content creation system that supports multiple formats within a single note, including text, freeform drawings, and image attachments.
* Creative inking tools: Implements a high performance, low latency drawing experience using the Ink API. The sample provides a practical example of integrating various brushes, a color picker, undo/redo functionality, and an eraser tool.
* Fluid content integration with drag and drop: Demonstrates how to handle both incoming and outgoing content using drag and drop. This includes accepting images dropped from other apps and enabling users to drag content out of your app for seamless sharing.
* Note organization: Mark notes as favorites for quick access. Filter the view to stay organized.
* Offline first architecture: Built with an offline first architecture using Room, ensuring all data is saved locally and the app remains fully functional without an internet connection.
* Powerful multi-window and multi-instance support: Showcases how to support multi-instance, allowing your app to be launched in multiple windows so users can work on different notes side by side, enhancing productivity and creativity on large screens.
* Adaptive UI for all screens: The user interface seamlessly adapts to different screen sizes and orientations using ListDetailPaneScaffold and NavigationSuiteScaffold to provide an optimized user experience on phones, tablets, and foldables.
* Deep system integration: Provides a guide on how to make your app the default note-taking app on Android 14 and higher by responding to system-wide Notes intents, enabling quick content capture from various system entry points.

## Built for productivity and creativity on large screens

For the initial launch, we're centering the announcement on a few core features that make Cahier a key learning resource for both productivity and creativity use cases.

#### A foundation of adaptivity

Cahier is built to be adaptive from the ground up.
The sample utilizes the material3-adaptive library, specifically ListDetailPaneScaffold and NavigationSuiteScaffold, to seamlessly adapt the app layout to various screen sizes and orientations. This is a crucial element for a modern Android app, and Cahier provides a clear example of how to implement it effectively.

_Cahier adaptive UI built with the Material 3 Adaptive library._

### Showcasing key APIs and integrations

The sample is focused on showcasing powerful productivity APIs that you can leverage in your own applications, including:

* **Ink API**
* **Notes role**
* **Multi-instance, multi-window, and desktop windowing**
* **Drag and drop**

## A closer look at key APIs

Let's dive deeper into two of the cornerstone APIs that Cahier integrates to deliver a first-class note-taking experience.

### Creating natural inking experiences with the Ink API

Stylus input transforms large screen devices into digital notebooks and sketchbooks. To help you build fluid and natural inking experiences, we’ve made the Ink API a cornerstone of the sample. Ink API makes it easy to create, render, and manipulate beautiful ink strokes with best-in-class low latency. Ink API offers a modular architecture, so you can tailor it to your app's specific stack and needs. The API modules include:

* Authoring modules (Compose - views): Handle realtime inking input to create smooth strokes with the lowest latency a device can provide. In DrawingSurface, Cahier uses the newly introduced InProgressStrokes composable to handle realtime stylus or touch input. This module is responsible for capturing pointer events and rendering wet ink strokes with the lowest possible latency.
* Strokes module: Represents the ink input and its visual representation. When a user finishes drawing a line, the onStrokesFinished callback provides a finalized/dry Stroke object to the app. This immutable object, representing the completed ink stroke, is then managed in DrawingCanvasViewModel.
* Rendering module: Efficiently displays ink strokes, allowing them to be combined with Jetpack Compose or Android views. To display both existing and newly dried strokes, Cahier uses CanvasStrokeRenderer in DrawingSurface for active drawing and in DrawingDetailPanePreview for showing a static preview of the note. This module efficiently draws the Stroke objects onto a Canvas.
* Brush modules (Compose - views): Provide a declarative way to define the visual style of strokes. Recent updates (since the alpha03 release) include a new dashed line brush, particularly useful for features like lasso selection. DrawingCanvasViewModel holds the state for the currentBrush. A toolbox in DrawingCanvas allows users to select different brush families (like StockBrushes.pressurePen() or StockBrushes.highlighter()) and change colors. The ViewModel updates the Brush object, which is then used by the InProgressStrokes composable for new strokes.
* Geometry modules (Compose - views): Support manipulating and analyzing strokes for features like erasing and selecting. The eraser tool within the toolbox and functionality in DrawingCanvasViewModel rely on the geometry module. When the eraser is active, it creates a MutableParallelogram around the path of the user's gesture. The eraser then checks for intersections between the shape and bounding boxes of existing strokes to determine which strokes to erase, making the eraser feel intuitive and precise.
Beyond these core modules, recent updates have expanded the Ink API's capabilities: * New experimental APIs for custom BrushFamily objects empower developers to create creative and unique brush types, opening up possibilities for tools like pencil and laser pointer brushes. Cahier leverages custom brushes, including the unique music brush showcased below, to illustrate advanced creative possibilities. Rainbow laser created with Ink API's custom brushes. Music brush created with Ink API's custom brushes. * Native Jetpack Compose interoperability modules streamline the integration of inking functionalities directly within your Compose UIs for a more idiomatic and efficient development experience. Ink API offers several advantages that make it the ideal choice for productivity and creativity apps over a custom implementation: * Ease of use: Ink API abstracts away the complexities of graphics and geometry, allowing you to focus on Cahier's core features. * Performance: Built-in low latency support and optimized rendering ensure a smooth and responsive inking experience. * Flexibility: The modular design allows you to pick and choose the components needed, which enables seamless integration of the Ink API into Cahier's architecture. Ink API has already been adopted across many Google apps, including for markup in Google Docs and for Circle to Search, as well as partner apps like Orion Notes and PDF Scanner. "Ink API was our first choice for Circle-to-Search (CtS). Utilizing their extensive documentation, integrating the Ink API was a breeze, allowing us to reach our first working prototype w/in just one week. Ink's custom brush texture and animation support allowed us to quickly iterate on the stroke design." - Jordan Komoda, Software Engineer - Google. ### Becoming the default notes app with notes role Note-taking is a core capability that enhances user productivity on large screen devices. With the notes role feature, users can access your compatible apps from the lock screen or while other apps are running. This feature identifies and sets system-wide default note-taking apps and grants them permission to be launched for capturing content. #### Implementation in Cahier Implementing the notes role involves a few key steps, all demonstrated in the sample (a minimal sketch follows the list): 1. Manifest declaration: First, the app must declare its capability to handle note-taking intents. In AndroidManifest.xml, Cahier includes an <intent-filter> for the android.intent.action.CREATE_NOTE action. This signals to the system that the app is a potential candidate for the notes role. 2. Checking role status: SettingsViewModel uses Android's RoleManager to determine the current status. SettingsViewModel checks whether the notes role is available on the device (isRoleAvailable) and whether Cahier currently holds that role (isRoleHeld). This state is exposed to the UI using Kotlin flows. 3. Requesting the role: In the Settings.kt file, a Button is displayed to the user if the role is available but not held. When clicked, the button calls the requestNotesRole function in the ViewModel. The function creates an intent to open the default app settings screen where the user can select Cahier. The process is managed using the rememberLauncherForActivityResult API, which handles launching the intent and receiving the result. 4. Updating the UI: After the user returns from the settings screen, the ActivityResultLauncher callback triggers a function in the ViewModel to update the role status, ensuring the UI accurately reflects whether the app is now the default.
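The role check and request in steps 2 and 3 map onto Android's RoleManager and the default apps settings screen. The following is a minimal sketch, assuming a device running Android 14 (API 34) or higher where RoleManager.ROLE_NOTES exists; the class and function names mirror the post but are illustrative, not Cahier's exact code.

```kotlin
import android.app.role.RoleManager
import android.content.Context
import android.content.Intent
import android.provider.Settings

// Sketch only: mirrors steps 2 and 3 above with illustrative names.
class NotesRoleHelper(context: Context) {

    private val roleManager = context.getSystemService(RoleManager::class.java)

    // Step 2: is the notes role supported on this device (requires API 34+)?
    fun isRoleAvailable(): Boolean =
        roleManager?.isRoleAvailable(RoleManager.ROLE_NOTES) == true

    // Step 2: does this app currently hold the notes role?
    fun isRoleHeld(): Boolean =
        roleManager?.isRoleHeld(RoleManager.ROLE_NOTES) == true

    // Step 3: send the user to the default apps settings screen, where they can
    // pick this app as the default notes app. Launch this intent with
    // rememberLauncherForActivityResult so the UI can refresh when the user returns.
    fun requestNotesRoleIntent(): Intent =
        Intent(Settings.ACTION_MANAGE_DEFAULT_APPS_SETTINGS)
}
```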
Learn how to integrate the notes role in your app in our create a note-taking app guide. Cahier launched in a floating window as the default note-taking app on a Lenovo tablet. #### A major step forward: Lenovo enables notes role We're thrilled to announce a major step forward for large screen Android productivity: Lenovo has enabled support for the notes role on tablets running Android 15 and higher! With this update, you can now update your note-taking apps to allow users with compatible Lenovo devices to set them as default, granting seamless access from the lock screen and unlocking system-level content capture features. This commitment from a leading OEM demonstrates the growing importance of the notes role in delivering a truly integrated and productive user experience on Android. ### Multi-instance, multi-windowing, and desktop windowing Productivity on a large screen is all about managing information and workflows efficiently. That's why Cahier is built to fully embrace Android's advanced windowing capabilities, providing a flexible workspace that adapts to user needs. The app supports: * Multi-windowing: The fundamental ability to run alongside another app in split-screen or free-form mode. This is essential for tasks like referencing a web page while taking notes in Cahier. * Multi-instance: This is where true multitasking shines. Cahier allows users to open multiple, independent windows of the app simultaneously. Imagine comparing two different notes side by side or referencing a text note in one window while working on a drawing in another. Cahier demonstrates how to manage these separate instances, each with its own state, turning your app into a powerful, multifaceted tool. * Desktop windowing: When connected to an external display, Android desktop mode transforms a tablet or foldable into a workstation. Because Cahier is built with an adaptive UI and supports multi-instance, the app performs beautifully in this environment. Users can open, resize, and position multiple Cahier windows just like on a traditional desktop, enabling complex workflows that were previously out of reach on mobile devices. Cahier running in desktop window mode on Pixel Tablet. Here's how we implemented these features in Cahier: To enable multi-instance, we first needed to signal to the system that the app supports being launched multiple times by adding the PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI property to MainActivity's declaration in AndroidManifest:
<activity
    android:name="com.example.cahier.MainActivity"
    android:exported="true"
    android:label="@string/app_name"
    android:theme="@style/Theme.MyApplication"
    android:showWhenLocked="true"
    android:turnScreenOn="true"
    android:resizeableActivity="true"
    android:launchMode="singleInstancePerTask">
    <property
        android:name="android.window.PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI"
        android:value="true"/>
    ...
</activity>
Next, we implemented the logic to launch a new instance of the app. In CahierHomeScreen.kt, when a user opts to open a note in a new window, we create a new Intent with specific flags that instruct the system on how to handle the new activity launch. The combination of FLAG_ACTIVITY_NEW_TASK, FLAG_ACTIVITY_MULTIPLE_TASK, and FLAG_ACTIVITY_LAUNCH_ADJACENT ensures the note opens in a new, separate window alongside the existing one.
fun openNewWindow(activity: Activity?, note: Note) {
    val intent = Intent(activity, MainActivity::class.java)
    intent.putExtra(AppArgs.NOTE_TYPE_KEY, note.type)
    intent.putExtra(AppArgs.NOTE_ID_KEY, note.id)
    intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_MULTIPLE_TASK or
        Intent.FLAG_ACTIVITY_LAUNCH_ADJACENT
    activity?.startActivity(intent)
}
To support multi-window mode, we needed to signal to the system that the app supports resizability by setting android:resizeableActivity="true" on the Manifest's <activity> or <application> element.
<activity
    android:name="com.example.cahier.MainActivity"
    android:resizeableActivity="true"
    ...>
</activity>
Because the UI is built with the Material 3 adaptive library, it adapts seamlessly in multi-window scenarios such as Android's split-screen mode. To enhance the user experience, we also added support for drag and drop. See below for how we implemented this in Cahier. ### Drag and drop A truly productive or creative app doesn't function in isolation; it interacts seamlessly with the rest of the device's ecosystem. Drag and drop is a cornerstone of this interaction, especially on large screens where users are often working across multiple app windows. Cahier fully embraces this by implementing intuitive drag and drop functionality for both adding and sharing content. * Effortless importing: Users can drag images from other applications—like a web browser, photo gallery, or file manager—and drop them directly onto a note canvas. For this, Cahier uses the dragAndDropTarget modifier to define a drop zone, check for compatible content (like image/*), and process the incoming URI (see the sketch after this list). * Simple sharing: Content inside Cahier is just as easy to share as content from other apps. Users can long-press an image within a text note, or long-press the entire canvas of a drawing note and image composite, and drag it out to another application.
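Here is a minimal sketch of the importing path described above, assuming the Compose Foundation drag-and-drop APIs (the dragAndDropTarget modifier and DragAndDropTarget interface). The NoteDropZone name and onImageDropped callback are illustrative, not Cahier's actual code, and the opt-in annotation may or may not be required depending on your Compose version.

```kotlin
import android.net.Uri
import androidx.compose.foundation.ExperimentalFoundationApi
import androidx.compose.foundation.draganddrop.dragAndDropTarget
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.draganddrop.DragAndDropEvent
import androidx.compose.ui.draganddrop.DragAndDropTarget
import androidx.compose.ui.draganddrop.toAndroidDragEvent

// Sketch only: a drop zone that accepts images dragged in from other apps.
@OptIn(ExperimentalFoundationApi::class) // needed on older Compose Foundation versions
@Composable
fun NoteDropZone(onImageDropped: (Uri) -> Unit) {
    val dropTarget = remember {
        object : DragAndDropTarget {
            override fun onDrop(event: DragAndDropEvent): Boolean {
                // Pull the first item's URI out of the platform drag event.
                val uri = event.toAndroidDragEvent().clipData?.getItemAt(0)?.uri ?: return false
                onImageDropped(uri)
                return true
            }
        }
    }
    Box(
        modifier = Modifier
            .fillMaxSize()
            .dragAndDropTarget(
                // Only accept drags that carry image content.
                shouldStartDragAndDrop = { event ->
                    event.toAndroidDragEvent().clipDescription?.hasMimeType("image/*") == true
                },
                target = dropTarget
            )
    ) {
        // Note canvas content goes here.
    }
}
```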
#### Technical deep dive: Dragging from the drawing canvas Implementing the drag gesture on the drawing canvas presents a unique challenge. In our DrawingSurface, the composables that handle live drawing input (the Ink API's InProgressStrokes) and the Box that detects the long-press gesture to initiate a drag are sibling composables. By default, the Jetpack Compose pointer input system is designed so that just one sibling composable—the first one in declaration order that overlaps the touch location—receives the event. In Cahier's case, we want our drag-and-drop input handling logic to have a chance to run and potentially consume input before the InProgressStrokes composable uses all remaining unconsumed input for drawing and consumes it. If we don't arrange things in the right order, our Box won't detect the long-press gesture to start a drag, or InProgressStrokes won't receive the input to draw. To solve this, we created a custom pointerInputWithSiblingFallthrough modifier, and we put our Box using that modifier before InProgressStrokes in the composable code. This utility is a thin wrapper around the standard pointerInput system but with one critical change: it overrides the sharePointerInputWithSiblings() function to return true. This tells the Compose framework to allow pointer events to pass through to sibling composables, even after being consumed.
internal fun Modifier.pointerInputWithSiblingFallthrough(
    pointerInputEventHandler: PointerInputEventHandler
) = this then PointerInputSiblingFallthroughElement(pointerInputEventHandler)

private class PointerInputSiblingFallthroughModifierNode(
    pointerInputEventHandler: PointerInputEventHandler
) : PointerInputModifierNode, DelegatingNode() {
    var pointerInputEventHandler: PointerInputEventHandler
        get() = delegateNode.pointerInputEventHandler
        set(value) {
            delegateNode.pointerInputEventHandler = value
        }
    val delegateNode = delegate(
        SuspendingPointerInputModifierNode(pointerInputEventHandler)
    )
    override fun onPointerEvent(
        pointerEvent: PointerEvent,
        pass: PointerEventPass,
        bounds: IntSize
    ) {
        delegateNode.onPointerEvent(pointerEvent, pass, bounds)
    }
    override fun onCancelPointerInput() {
        delegateNode.onCancelPointerInput()
    }
    override fun sharePointerInputWithSiblings() = true
}

private data class PointerInputSiblingFallthroughElement(
    val pointerInputEventHandler: PointerInputEventHandler
) : ModifierNodeElement<PointerInputSiblingFallthroughModifierNode>() {
    override fun create() = PointerInputSiblingFallthroughModifierNode(pointerInputEventHandler)
    override fun update(node: PointerInputSiblingFallthroughModifierNode) {
        node.pointerInputEventHandler = pointerInputEventHandler
    }
    override fun InspectorInfo.inspectableProperties() {
        name = "pointerInputWithSiblingFallthrough"
        properties["pointerInputEventHandler"] = pointerInputEventHandler
    }
}
Here's how it's used in DrawingSurface:
Box(
    modifier = Modifier
        .fillMaxSize()
        // Our custom modifier enables this gesture to coexist with the drawing input.
        .pointerInputWithSiblingFallthrough {
            detectDragGesturesAfterLongPress(
                onDragStart = { onStartDrag() },
                onDrag = { _, _ -> /* consume drag events */ },
                onDragEnd = { /* No action needed */ }
            )
        }
)
// The Ink API's composable for live drawing sits here as a sibling.
InProgressStrokes(...)
With this in place, the system correctly detects both the drawing strokes and the long-press drag gesture simultaneously. Once the drag is initiated, we create a shareable content:// URI with FileProvider and pass the URI to the system's drag and drop framework using view.startDragAndDrop(). This solution ensures a robust and intuitive user experience, showcasing how to overcome complex gesture conflicts in layered UIs. ## Built with modern architecture Beyond specific APIs, Cahier demonstrates crucial architectural patterns for building high-quality, adaptive applications. ### The presentation layer: Jetpack Compose and adaptability The presentation layer is built entirely with Jetpack Compose. As mentioned, Cahier adopts the material3-adaptive library for UI adaptability. State management follows a strict Unidirectional Data Flow (UDF) pattern, with ViewModel instances used as data containers that hold note information and UI state.
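As a rough illustration of how such a presentation layer fits together, here is a hedged sketch of a list-detail screen built with ListDetailPaneScaffold from the material3-adaptive library: the list pane and detail pane share one scaffold that shows one or two panes depending on window size. NotesScreen and the plain string content are illustrative, not Cahier's actual composables, and the navigator API shown here follows the material3-adaptive 1.0 shape; details may differ slightly in newer library versions.

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// Sketch only: one pane on phones, two panes side by side on large screens.
@OptIn(ExperimentalMaterial3AdaptiveApi::class)
@Composable
fun NotesScreen(noteTitles: List<String>) { // state would normally come from a ViewModel (UDF)
    val navigator = rememberListDetailPaneScaffoldNavigator<String>()

    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                Column {
                    noteTitles.forEach { title ->
                        Text(
                            text = title,
                            modifier = Modifier
                                .fillMaxWidth()
                                // Event up: clicking a note navigates to the detail pane.
                                .clickable {
                                    navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, title)
                                }
                                .padding(16.dp)
                        )
                    }
                }
            }
        },
        detailPane = {
            AnimatedPane {
                Text(text = navigator.currentDestination?.content ?: "Select a note")
            }
        }
    )
}
```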
### The data layer: Repositories and Room For the data layer, Cahier uses a _NoteRepository_ interface to abstract all data operations. This design choice cleanly allows the app to swap between a local data source (Room) and a potential future remote backend. The data flow for an action like editing a note is straightforward (a minimal sketch follows the list): 1. The Jetpack Compose UI triggers a method in the ViewModel. 2. The ViewModel fetches the note from NoteRepository, handles the logic, and passes the updated note back to the repository. 3. NoteRepository saves the update to a Room database.
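To make this flow concrete, here is a minimal sketch of the repository pattern described above. The NoteDao, the Note entity fields, and the LocalNoteRepository name are illustrative assumptions rather than Cahier's actual schema; the point is the interface that the ViewModels depend on, backed by a Room DAO.

```kotlin
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query
import androidx.room.Upsert
import kotlinx.coroutines.flow.Flow

// Sketch only: an offline-first data layer in the shape described above.
@Entity(tableName = "notes")
data class Note(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val title: String,
    val body: String,
)

@Dao
interface NoteDao {
    @Query("SELECT * FROM notes")
    fun observeNotes(): Flow<List<Note>>

    @Query("SELECT * FROM notes WHERE id = :id")
    suspend fun getNote(id: Long): Note?

    @Upsert
    suspend fun upsert(note: Note)
}

// The abstraction the ViewModels depend on; a remote implementation could be swapped in later.
interface NoteRepository {
    fun observeNotes(): Flow<List<Note>>
    suspend fun getNote(id: Long): Note?
    suspend fun saveNote(note: Note)
}

class LocalNoteRepository(private val dao: NoteDao) : NoteRepository {
    override fun observeNotes(): Flow<List<Note>> = dao.observeNotes()
    override suspend fun getNote(id: Long): Note? = dao.getNote(id)
    override suspend fun saveNote(note: Note) = dao.upsert(note)
}
```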
### Comprehensive input support To be a true productivity powerhouse, an app must handle a variety of input methods flawlessly. Cahier is built to be compliant with large screen input guidelines and supports: * Stylus: Integration with the Ink API, palm rejection, registration for the notes role, stylus input in text fields, and immersive mode. * Keyboard: Support for most common keyboard shortcuts and combinations (like Ctrl+click and Meta+click) and clear indication of keyboard focus. * Mouse and trackpad: Support for right-click and hover states. Support for advanced keyboard, mouse, and trackpad interactions is a key focus for further improvements. ## Get started today We hope Cahier serves as a launchpad for your next great app. We built it to be a comprehensive, open source resource that demonstrates how to combine an adaptive UI, powerful APIs like Ink and notes role, and a modern, adaptive architecture. Ready to dive in? * Explore the code: Head over to our GitHub repository to explore the Cahier codebase and see the design principles in action. * Build your own: Use Cahier as a foundation for your own note-taking, document markup, or creative application. * Contribute: We welcome your contributions! Help us make Cahier an even better resource for the Android developer community. Check out the official developer guides and start building your next-generation productivity and creativity app today. We can't wait to see what you create!
android-developers.googleblog.com
October 30, 2025 at 7:32 AM
High-Speed Capture and Slow-Motion Video with CameraX 1.5
_Posted by Leo Huang, Software Engineer_ Capturing fast-moving action with clarity is a key feature for modern camera apps. This is achieved through high-speed capture—the process of acquiring frames at rates like 120 or 240 fps. This high-fidelity capture can be used for two distinct purposes: creating a high-frame-rate video for detailed, frame-by-frame analysis, or generating a slow-motion video where action unfolds dramatically on screen. Previously, implementing these features with the Camera2 API was a more hands-on process. Now, with the new high-speed API in CameraX 1.5, the entire process is simplified, giving you the flexibility to create either true high-frame-rate videos or ready-to-play slow-motion clips. This post will show you how to master both. For those new to CameraX, you can get up to speed with the CameraX Overview. * * * ## The Principle Behind Slow-Motion The fundamental principle of slow-motion is to capture video at a much higher frame rate than it is played back. For instance, if you record a one-second event at 120 frames per second (fps) and then play that recording back at a standard 30 fps, the video will take four seconds to play. This "stretching" of time is what creates the dramatic slow-motion effect, allowing you to see details that are too fast for the naked eye. To ensure the final output video is smooth and fluid, it should typically be rendered at a minimum of 30 fps. This means that to create a 4x slow-motion video, the original capture frame rate must be at least 120 fps (120 capture fps ÷ 4 = 30 playback fps). Once the high-frame-rate footage is captured, there are two primary ways to achieve the desired outcome: * Player-handled Slow-Motion (High-Frame-Rate Video): The high-speed recording (e.g., 120 fps) is saved directly as a high-frame-rate video file. It is then the video player's responsibility to slow down the playback speed. This gives the user flexibility to toggle between normal and slow-motion playback. * Ready-to-play Slow-Motion (Re-encoded Video): The high-speed video stream is processed and re-encoded into a file with a standard frame rate (e.g., 30 fps). The slow-motion effect is "baked in" by adjusting the frame timestamps. The resulting video will play in slow motion in any standard video player without special handling. While the video plays in slow motion by default, video players can still provide playback speed controls that allow the user to increase the speed and watch the video at its original speed. The CameraX API simplifies this by giving you a unified way to choose which approach you want, as you'll see below. * * * ## The New High-Speed Video API The new CameraX solution is built on two main components: * Recorder#getHighSpeedVideoCapabilities(CameraInfo): This method lets you check if the camera can record in high-speed and, if so, which resolutions (Quality objects) are supported. * HighSpeedVideoSessionConfig: This is a special configuration object that groups your VideoCapture and Preview use cases, telling CameraX to create a unified high-speed camera session. Note that while the VideoCapture stream will operate at the configured high frame rate, the Preview stream will typically be limited to a standard rate of at least 30 FPS by the camera system to ensure a smooth display on the screen. ### Getting Started Before you start, make sure you have added the necessary CameraX dependencies to your app's build.gradle.kts file. You will need the camera-video artifact along with the core CameraX libraries. 
// build.gradle.kts (Module: app)
dependencies {
    val camerax_version = "1.5.1"
    implementation("androidx.camera:camera-core:$camerax_version")
    implementation("androidx.camera:camera-camera2:$camerax_version")
    implementation("androidx.camera:camera-lifecycle:$camerax_version")
    implementation("androidx.camera:camera-video:$camerax_version")
    implementation("androidx.camera:camera-view:$camerax_version")
}
### A Note on Experimental APIs It's important to note that the high-speed recording APIs are currently experimental. This means they are subject to change in future releases. To use them, you must opt in by adding the following annotation to your code:
@kotlin.OptIn(ExperimentalSessionConfig::class, ExperimentalHighSpeedVideo::class)
* * * ## Implementation The implementation for both outcomes starts with the same setup steps. The choice between creating a high-frame-rate video or a slow-motion video comes down to a single setting. ### 1. Set up High-Speed Capture First, regardless of your goal, you need to get the ProcessCameraProvider, check for device capabilities, and create your use cases. The following code block shows the complete setup flow within a suspend function. You can call this function from a coroutine scope, like lifecycleScope.launch.
// Add the OptIn annotation at the top of your function or class
@kotlin.OptIn(ExperimentalSessionConfig::class, ExperimentalHighSpeedVideo::class)
private suspend fun setupCamera() {
    // Asynchronously get the CameraProvider
    val cameraProvider = ProcessCameraProvider.awaitInstance(this)

    // -- CHECK CAPABILITIES --
    val cameraInfo = cameraProvider.getCameraInfo(CameraSelector.DEFAULT_BACK_CAMERA)
    val videoCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)
    if (videoCapabilities == null) {
        // This camera device does not support high-speed video.
        return
    }

    // -- CREATE USE CASES --
    val preview = Preview.Builder().build()

    // You can create a Recorder with default settings.
    // CameraX will automatically select a suitable quality.
    val recorder = Recorder.Builder().build()

    // Alternatively, to use a specific resolution, you can configure the
    // Recorder with a QualitySelector. This is useful if your app has
    // specific resolution requirements or you want to offer user
    // preferences.
    // To use a specific quality, you can uncomment the following lines.
    // Get the list of qualities supported for high-speed video.
    // val supportedQualities = videoCapabilities.getSupportedQualities(DynamicRange.SDR)
    // Build the Recorder using the quality from the supported list.
    // val recorderWithQuality = Recorder.Builder()
    //     .setQualitySelector(QualitySelector.from(supportedQualities.first()))
    //     .build()

    // Create the VideoCapture use case, using either recorder or recorderWithQuality
    val videoCapture = VideoCapture.withOutput(recorder)

    // Now you are ready to configure the session for your desired output...
}
* * * ### 2. Choosing Your Output Now, you decide what kind of video you want to create. This code would run inside the setupCamera() suspend function shown above. #### Option A: Create a High-Frame-Rate Video Choose this option if you want the final file to have a high frame rate (e.g., a 120fps video).
// Create a builder for the high-speed session
val sessionConfigBuilder = HighSpeedVideoSessionConfig.Builder(videoCapture)
    .setPreview(preview)

// Query and apply a supported frame rate.
// Common supported frame rates include 120 and 240 fps.
val supportedFrameRateRanges =
    cameraInfo.getSupportedFrameRateRanges(sessionConfigBuilder.build())
sessionConfigBuilder.setFrameRateRange(supportedFrameRateRanges.first())
#### Option B: Create a Ready-to-play Slow-Motion Video Choose this option if you want a video that plays in slow motion automatically in any standard video player.
// Create a builder for the high-speed session
val sessionConfigBuilder = HighSpeedVideoSessionConfig.Builder(videoCapture)
    .setPreview(preview)

// This is the key: enable automatic slow-motion!
sessionConfigBuilder.setSlowMotionEnabled(true)

// Query and apply a supported frame rate. Common supported frame rates include 120, 240, and 480 fps.
val supportedFrameRateRanges =
    cameraInfo.getSupportedFrameRateRanges(sessionConfigBuilder.build())
sessionConfigBuilder.setFrameRateRange(supportedFrameRateRanges.first())
This single flag is the key to creating a ready-to-play slow-motion video. When setSlowMotionEnabled is true, CameraX processes the high-speed stream and saves it as a standard 30 fps video file. The slow-motion speed is determined by the ratio of the capture frame rate to this standard playback rate. For example: * Recording at 120 fps will produce a video that plays back at 1/4x speed (120 ÷ 30 = 4). * Recording at 240 fps will produce a video that plays back at 1/8x speed (240 ÷ 30 = 8). * * * ## Putting It All Together: Recording the Video Once you have configured your HighSpeedVideoSessionConfig and bound it to the lifecycle, the final step is to start the recording. The process of preparing output options, starting the recording, and handling video events is the same as it is for a standard video capture. This post focuses on high-speed configuration, so we won't cover the recording process in detail. For a comprehensive guide on everything from preparing a FileOutputOptions or MediaStoreOutputOptions object to handling the VideoRecordEvent callbacks, please refer to the VideoCapture documentation.
// Bind the session config to the lifecycle
cameraProvider.bindToLifecycle(
    this as LifecycleOwner,
    CameraSelector.DEFAULT_BACK_CAMERA,
    sessionConfigBuilder.build() // Bind the config object from Option A or B
)

// Start the recording using the VideoCapture use case
val recording = videoCapture.output
    .prepareRecording(context, outputOptions) // See docs for creating outputOptions
    .start(ContextCompat.getMainExecutor(context)) { recordEvent ->
        // Handle recording events (e.g., Start, Pause, Finalize)
    }
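If you want a concrete starting point for the outputOptions referenced above, here is a minimal sketch that saves the recording to MediaStore. The file name, collection, and the decision to skip audio are illustrative choices; the VideoCapture documentation remains the authoritative guide.

```kotlin
import android.content.ContentValues
import android.content.Context
import android.provider.MediaStore
import androidx.camera.video.MediaStoreOutputOptions

// Sketch only: minimal MediaStore output options for the recording above.
fun createOutputOptions(context: Context): MediaStoreOutputOptions {
    val contentValues = ContentValues().apply {
        // Illustrative file name; pick a naming scheme that suits your app.
        put(MediaStore.Video.Media.DISPLAY_NAME, "high-speed-${System.currentTimeMillis()}.mp4")
        put(MediaStore.Video.Media.MIME_TYPE, "video/mp4")
    }
    return MediaStoreOutputOptions.Builder(
        context.contentResolver,
        MediaStore.Video.Media.EXTERNAL_CONTENT_URI
    )
        .setContentValues(contentValues)
        .build()
}
```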
* * * ## Google Photos Support for Slow-Motion Videos When you enable setSlowMotionEnabled(true) in CameraX, the resulting video file is designed to be instantly recognizable and playable as slow-motion in standard video players and gallery apps. Google Photos, in particular, offers enhanced functionality for these slow-motion videos when the capture frame rate is 120, 240, 360, 480, or 960 fps: * Distinct UI Recognition in Thumbnail: In your Google Photos library, slow-motion videos can be identified by specific UI elements, distinguishing them from normal videos. Normal video thumbnail vs. slow-motion video thumbnail. * Adjustable Speed Segments during Playback: When playing a slow-motion video, Google Photos provides controls to adjust which parts of the video play at slow speed and which play at normal speed, giving users creative control. The edited video can then be exported as a new video file using the Share button, preserving the slow-motion segments you defined. Normal video playback vs. slow-motion video playback with editing controls. * * * ### A Note on Device Support CameraX's high-speed API relies on the underlying Android CamcorderProfile system to determine which high-speed resolutions and frame rates a device supports. CamcorderProfiles are validated by the Android Compatibility Test Suite (CTS), which means you can be confident in the device's reported video recording capabilities. However, a device's ability to record slow-motion video with its built-in camera app does not guarantee that the CameraX high-speed API will function. This discrepancy occurs because device manufacturers are responsible for populating the CamcorderProfile entries in their device's firmware, and sometimes necessary high-speed profiles like CamcorderProfile.QUALITY_HIGH_SPEED_1080P and CamcorderProfile.QUALITY_HIGH_SPEED_720P are not included. When these profiles are missing, Recorder.getHighSpeedVideoCapabilities() will return null. Therefore, it's essential to always use Recorder.getHighSpeedVideoCapabilities() to check for supported features programmatically, as this is the most reliable way to ensure a consistent experience across different devices. If you try to bind a HighSpeedVideoSessionConfig on a device where Recorder.getHighSpeedVideoCapabilities() returns null, the operation will fail with an IllegalArgumentException. You can confirm support on Google Pixel devices, as they consistently include these high-speed profiles. Additionally, various devices from other manufacturers, such as the Motorola Edge 30, OPPO Find N2 Flip, and Sony Xperia 1 V, also support these high-speed video capabilities. * * * ### Conclusion The CameraX high-speed video API is both powerful and flexible. Whether you need true high-frame-rate footage for technical analysis or want to add cinematic slow-motion effects to your app, the HighSpeedVideoSessionConfig provides a unified and simple solution. By understanding the role of the setSlowMotionEnabled flag, you can easily support both use cases and give your users more creative control.
android-developers.googleblog.com
October 30, 2025 at 7:32 AM