Android App Startup Time: Finding Bottlenecks and Pain-points

From a user's perspective, speed is everything: the fastest cars, the fastest machines, the fastest internet, and apps are no exception. Everyone wants to get things done quicker.

In the world of Android development, we want to deploy our apps to real devices for testing without waiting those precious 10 seconds that eventually accumulate into hours spent sitting and watching Gradle build our apps.

The same applies when users use our apps: they want them to start up lightning-fast, no matter how weak their device hardware might be, and once started, they want network calls to complete at the best imaginable speed, no matter how flaky their internet connection is at the moment. That is why app startup time matters to us; we want our apps to be faster than the competition's.

I recently had to dig a little deeper into Android app startup time, aggregated some good info along the way, and decided to share it. We will cover mostly theory, with appropriate links embedded for each topic.

Dependency Injection

Dependency Injection libraries can help you structure your dependencies, but at what cost in terms of setup and runtime performance? I found a few blog posts that do a good job of comparing the common DI libraries out there.

performance metrics snapshot

Knowing which DI lib to pick for your project is a big part of your decision process. For example, if you picked Dagger, you would still have to determine your use case. Would you prefer:

  • An easier/faster setup? Use Hilt.
  • Will you be working with Dynamic Feature Modules (DFMs)? Use vanilla Dagger.
  • A fairly large project where you want more control than Hilt provides but less complexity than vanilla Dagger? Use Dagger-Android.

You can find a more complete comparison of the various DI libs here.

App Startup Library

Apps and libraries often rely on having components initialized as soon as the app starts. Most third-party libs meet this need by using content providers to initialize each dependency without having to request a Context from you.

That's a good thing because it reduces the chances of leaking your app Context, but content providers are expensive to instantiate and can slow down the startup sequence unnecessarily, since these libs are initialized at app startup, before Application.onCreate() even runs. Additionally, Android initializes content providers in an undetermined order.

The App Startup lib provides a more performant way to initialize components at app startup and to explicitly define their dependencies. Using it can help improve startup time.
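To illustrate, a component initializer under App Startup looks roughly like the sketch below. `Analytics` and `AnalyticsInitializer` are made-up names standing in for whatever component needs a Context at startup:

```kotlin
import android.content.Context
import androidx.startup.Initializer

// Hypothetical component; stands in for anything that needs a Context at startup.
class Analytics(context: Context)

// Declared in the manifest as <meta-data> under App Startup's InitializationProvider.
class AnalyticsInitializer : Initializer<Analytics> {
    override fun create(context: Context): Analytics {
        // Runs inside App Startup's single ContentProvider, before Application.onCreate().
        return Analytics(context.applicationContext)
    }

    // This initializer has no prerequisites.
    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}
```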

Here is a link to the docs (Quite a simple and straightforward documentation)

How much work does App Startup save us?

App Startup uses a single ContentProvider to run the initialization code from different initializers, instead of having each run in its own ContentProvider.

So instead of each library/dependency creating its own ContentProvider, App Startup creates only one to run all the initialization logic, and the cost comes down to creating that single ContentProvider. That cost depends on a few things: whether it's a high- or low-end device, the API level, etc. On a Pixel 2 running Android 10, it takes a little over 2ms just to create a single ContentProvider.

ICYMI: what if we have multiple initializers that depend on other initializers?

If InitializerB and InitializerC depend on InitializerA, they both need to be declared in the manifest; InitializerA is discovered automatically. Basically, only the top-level initializers have to be declared this way for automatic initialization. So if A depends on B, and B depends on C, only A has to be declared.
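A minimal sketch of such a chain, with hypothetical `Logger` and `NetworkStack` components: only the top-level initializer needs a manifest entry, and App Startup runs the dependency first because it is listed in dependencies():

```kotlin
import android.content.Context
import androidx.startup.Initializer

// Hypothetical components used only for illustration.
class Logger(context: Context)
class NetworkStack(context: Context)

// Low-level initializer: discovered automatically, no manifest entry needed.
class LoggerInitializer : Initializer<Logger> {
    override fun create(context: Context) = Logger(context.applicationContext)
    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}

// Top-level initializer: the only one declared in the manifest. App Startup
// guarantees LoggerInitializer has completed before create() runs here.
class NetworkStackInitializer : Initializer<NetworkStack> {
    override fun create(context: Context) = NetworkStack(context.applicationContext)
    override fun dependencies() = listOf(LoggerInitializer::class.java)
}
```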

Why App Startup Lib?

Many libraries use one or multiple ContentProviders for initialization, including Firebase, WorkManager, Crashlytics, and Google Ads. The cost can add up, and keeping it low provides a better user experience.

Many users care about app performance and startup latency, so optimizations that save milliseconds on a high-end device (and even more on low-end devices) can matter.

App Profiling using method tracing

We have to look out for processes/methods that take a long time to initialize or run, and then find ways to mitigate them as much as possible. We can enable method tracing to get a better overview of the methods triggered during the life of our app, from a start point to an end point that we define.

This doc shows us easy ways to record method traces in our apps and read the collected data.

We can record method/process traces by setting start and stop points for these traces, simply by calling:

Debug.startMethodTracing("trace_record") // start position
Debug.stopMethodTracing() // end position

Using this method, we can isolate processes bit by bit to track down long-running culprits. We can then find these traces in the file manager -> Android -> data -> our app folder.

Here we have a general overview of traces in the sample, from the Application's onCreate to the point where a response is returned by the API.
Here we can see the approximate creation/initialization time of our network interceptor, and to mitigate that high cost of almost 200ms, we can configure the interceptor to run only in debug builds and not in production.
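One common way to do this, sketched here assuming OkHttp with its logging-interceptor artifact, is to attach the interceptor only for debug builds (in a real app the flag would come from BuildConfig.DEBUG; it is a plain parameter here for illustration):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor

fun buildClient(isDebug: Boolean): OkHttpClient =
    OkHttpClient.Builder().apply {
        if (isDebug) {
            // Pay the interceptor's creation and per-call cost only in debug builds.
            addInterceptor(HttpLoggingInterceptor().apply {
                level = HttpLoggingInterceptor.Level.BODY
            })
        }
    }.build()
```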

Here we see that creating a single ViewModel takes approximately 530ms, so what we can do is favor code reuse and have similar/connected fragments share a single ViewModel, as seen in this sample codebase, reducing the cost of creating multiple ViewModels.
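With AndroidX fragments, sharing looks roughly like the sketch below (`CheckoutViewModel` and the fragment names are made up): both fragments receive the same activity-scoped instance instead of paying the creation cost twice.

```kotlin
import androidx.fragment.app.Fragment
import androidx.fragment.app.activityViewModels
import androidx.lifecycle.ViewModel

// Hypothetical ViewModel shared across related fragments.
class CheckoutViewModel : ViewModel()

class CartFragment : Fragment() {
    // activityViewModels() scopes the ViewModel to the host activity,
    // so CartFragment and PaymentFragment get the same instance.
    private val viewModel: CheckoutViewModel by activityViewModels()
}

class PaymentFragment : Fragment() {
    private val viewModel: CheckoutViewModel by activityViewModels()
}
```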

Enable Strict Mode

Additionally, we can enable StrictMode to detect accidental disk or network access on the application’s main thread, helping us to catch processes that run under the hood and access the network or perform IO operations on the UI thread.

In some instances, it would also pick out custom code that does long-running processes on the main thread.

class BaseApp : Application() {

    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            StrictMode.setThreadPolicy(
                StrictMode.ThreadPolicy.Builder().detectAll().penaltyLog().build()
            )
        }
    }
}

By enabling detectAll() and penaltyLog(), we can see processes that violate our StrictMode policy logged in Logcat and then take appropriate action.

Fixing possible pain-points and bottlenecks

  • Use lazy initialization: parts of your application don't need to be initialized during startup. Try to identify those parts and delay their initialization as much as possible; e.g., the user profile might only be needed on the profile activity and isn't very useful during startup.
  • Dispatch 3rd-party library initialization to a background thread: library initializations in the app's Application.onCreate() make it slower. Try to find the libraries that can be initialized asynchronously and move their initialization to a background thread. That is, libs that are not explicitly required to be initialized in Application.onCreate() should be moved out and only initialized when they are needed, or use the manual initialization feature of the App Startup lib to delay the initialization of libs that aren't needed at startup.
  • Remove defective 3rd-party libraries: this is an extreme scenario, but if a specific library is too slow to initialize after investigating via StrictMode and the Profiler, and there is no possibility to initialize it elsewhere, you should consider removing it or replacing it with an alternative lib.
  • Avoid reflection: we can considerably improve startup time by avoiding reflection, or avoiding libraries that rely heavily on it. Reflection operations are very common in serialization libraries like Gson and Jackson.
  • 3rd-party analytics: it's also worth noting that Firebase Performance Monitoring measures an app's startup time on users' devices, so you can know with certainty what an optimization actually gained in real-world scenarios.
  • Solve synchronization issues: startup time can increase if the main thread is blocked waiting for an operation executed on another thread. Try to find those locks.
  • Flatten the layout hierarchy: remove nested layouts as much as possible; see examples of a flattened hierarchy and a nested hierarchy in the files below.
  • Delay inflating parts of the UI: parts of the UI that do not need to be visible during launch can be delayed by using a ViewStub object as a placeholder for sub-hierarchies that the app can inflate at a more appropriate time.
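As a minimal illustration of the lazy-initialization point above (plain Kotlin; `UserProfileStore` is a made-up expensive dependency), `by lazy` defers the construction cost until first use:

```kotlin
// Hypothetical expensive dependency that should not be built at app startup.
class UserProfileStore {
    val profileName: String = "demo-user" // imagine disk/network work here
}

class ProfileScreen {
    // Constructed on first access, not when ProfileScreen (or the app) is created.
    private val store: UserProfileStore by lazy { UserProfileStore() }

    fun show(): String = store.profileName
}
```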

Edge Scenario

In a case where all the above measures fail to yield a significant change in app startup time, consider changing the API model, if possible, to adopt a streaming model using sockets or gRPC.

JSON serialization and deserialization can sometimes be costly because text-based manipulation is slow. With a streaming approach, the app can maintain a connection to the API and stream data in smaller bits that are easier to download and parse, without significantly affecting the user experience or relying heavily on the strength of the user's internet connection or device hardware (CPU).

Useful Links

Sample Repo:

App Profiling:

App Startup:

Here you go; if any of the steps here helped improve your startup times, do leave a comment. Suggestions are also welcome.

Cheers 🙂
