People expect long battery life on their mobile devices, and apps play a vital role in achieving that experience. Understand how app behavior affects battery consumption, and learn strategies to conserve energy while providing the best experience for your app. Find out how Xcode Energy Reports can help you tune your app to use the least amount of power possible.
Good afternoon, everyone. My name is Phil Azar, and I'm a software engineer on the Power Team at Apple. Today, along with my colleague, David, I'm excited to share with you what's new in energy debugging.
Battery life is hugely important to our customers. The Power Team strives to make sure that everyone can get through the day on a single charge without having to plug their device in.
We work with our developers here at Apple to try and optimize battery life by guiding them and helping them make design choices that are energy efficient. Apps are also hugely important to our customers. In fact, we find that most usage on device is directly attributed to third-party apps. This is incredible, and it makes it more important now than ever before to focus on energy efficiency in the design of your application.
To that end, we're going to talk about three things today. First, we're going to talk about some battery life concepts that you can employ in your application to make sure that you are being as energy efficient as possible. Then, we're going to talk about some tools that we have available for you to understand and quantify where energy is going in your application. And finally, I'll pass it on to my colleague, David, who is going to talk about a new tool that we have available to take your energy debugging one step further.
So, let's go ahead and get started and talk about some general concepts.
To make battery life great for our users, we have to start with first principles and understand what goes into battery life. So, let's start.
What is energy? Fundamentally, if you think back to physics, energy is the product of power and time. As your app is running on any of our platforms, it'll be consuming energy at various rates. This is because the different things that your app does consume different amounts of power. Additionally, the more time it spends consuming that power the more energy consumption you'll face. We can plot this graphically.
Here, you can see as your app is running there are various peaks and troughs of power consumption. It follows that the area under that curve is energy, and this relates directly back to your application in its various modes of running. When your app is active and when your app is idle, it's going to consume different amounts of power. When your app is active, we say that the power being consumed is at its highest point. This is because the user is directly using your application for whatever it was designed for. Then, when your app is idle but still running, the power consumption drops.
Finally, when your app is suspended, there's still a basal level of power consumption, and that's interesting to note.
When your app is doing any of the work that it's been designed to do, it's going to be asking the system to bring up hardware that it needs to do that work, and the energy associated with that hardware being brought up and used is called overhead. Your app doesn't have direct control over overhead, but it does influence it through everything that it does. Then, when your app actually utilizes those hardware resources, this is called active energy. So, now, your app has access to, let's say, the radio or the camera, and while it's using that subsystem, the energy being consumed is called active energy.
So, then, it stands to reason that the battery life problem is actually a two-part optimization problem. We have to think about being efficient about the active energy that we are consuming, and we also need to be thinking about the overhead that we'll be incurring by asking for different hardware resources on the system.
So, I've mentioned hardware and these subsystems that supposedly consume energy. So, what exactly consumes energy on the system? As an app developer, you're going to run into a number of different hardware subsystems in your app development process.
But there are four subsystems that we on the Power Team think will contribute most to your energy consumption. These are processing, networking, location, and graphics. Let's run through these and try to understand what they mean.
Processing is what you might imagine.
It's going to be the energy consumed when your app utilizes system resources on the SoC, such as DRAM, the CPU, and so on.
It's really the workhorse component. Energy consumed here is going to be highly dependent on the code that your app is executing and the workload that you've asked your app to perform.
So, in a nutshell, the more operations and code your app executes, the more energy it will consume in the form of processing.
Networking is the next major subsystem that we think about when we talk about what consumes energy on our devices. Networking energy is what you might imagine. Whenever your app asks to do any form of networking over cellular, Wi-Fi, or Bluetooth, it's going to consume energy in the form of networking.
This energy is traffic-dependent. The more traffic that your app asks to be sent over any of these technologies, the more energy it will consume. So, to put it bluntly, the more network requests your app makes, the more energy you'll consume in networking.
Location follows suit but it's a little different.
In the location subsystem, when your app asks to fix location using GPS, Wi-Fi, or cellular, it's going to consume energy in the form of location.
The location energy is going to be accuracy- and frequency-dependent. If you're asking to fix a user's location with a high degree of accuracy and at a very high cadence, you're going to consume a lot of energy in the form of location. So, putting it all together, the more time spent tracking location in your application, the more energy you'll consume as location energy.
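As a rough illustration of that trade-off, here's a small CoreLocation sketch that asks for coarse, infrequent fixes and stops as soon as tracking is no longer needed. The class and method names here are our own, not an Apple API; only the CLLocationManager calls are real:

```swift
import CoreLocation

// Hypothetical wrapper showing the two dials that drive location
// energy: accuracy and update frequency.
final class EnergyAwareLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Coarse, infrequent fixes: inexpensive on the location subsystem.
    func startCoarseTracking() {
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
        manager.distanceFilter = 500 // only deliver updates after moving ~500 m
        manager.startUpdatingLocation()
    }

    // Stop as soon as you have the fix you need, so the hardware can idle.
    func stopTracking() {
        manager.stopUpdatingLocation()
    }
}
```

The point of the sketch: request only the accuracy your feature actually needs, and bound the time the subsystem stays powered.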
Finally, we have graphics. In the graphics subsystem, processing components such as the GPU and the CPU contribute to the energy consumed by graphics.
This is going to be animations and UI dependent. So, when your app is asking for any animations to be displayed or any UI to be rendered, it's going to consume energy in the form of graphics.
This is highly complexity-dependent. The more complex your animations and UI are, the more energy you'll consume in the form of graphics.
Finally, a good rule of thumb is to say that the more rendering your app does, whether animations or UI, the more energy you're going to consume in the form of graphics. So, we've talked about these four subsystems, and what's the take-away message? There's a common thread that ties them all together in our app development, and that's that the more work you do, the more energy you're going to consume. We can't simply say "do less work," because that means our app might do less. So, the point here is that we need to optimize the work we do and make it as energy efficient as possible. But it's not so simple.
Thinking about energy efficiency is a process. It's not just so that we can make an optimization and suddenly our energy is going to be more efficient or our app is going to be better for battery life. We have to get into this mode of thinking that our app has a set of resources that it's using, and we need to use those resources efficiently. So, with that being said, let's take a look at some examples of real-world situations where we can think about energy efficiency and really start this process off. Let's talk about when our app is in the foreground.
When our app is in the foreground, it will likely be providing the main user experience. For many of us, this is the most important and critical part of our application. With that being said, energy efficiency in the foreground is about focusing on providing value to your user, ensuring that whatever you're doing provides some immediate impact for the user experience. One tenet we can follow is to only do work when required. Sounds pretty straightforward. Well, let's take a look at an example and illustrate why this is so important. Let's say you're building a media application, and the primary goal of the media application is to present content to the user at a regular cadence. Well, a seemingly robust solution would be to implement a timer-based approach to refresh the content feed. This would ensure that the content the user is seeing is as fresh as possible without any sort of interaction.
This isn't a very energy-efficient approach, and let's sort of understand why. If we plot the power over time curve for a solution like that, we see that every time our timer fires, we have a little bit of active energy that's consumed.
But the really important part here is that we have a ton of overhead, and this is because every time we ask to display new content, we likely have to bring up subsystems such as networking, graphics, and processing to do all that work and display that content, and the user might not actually want it. So, we'll end up burning a lot of energy consistently while that application is running. We can do better. If we think about what the user actually wants, the fresh content, we can implement a solution that is on demand.
Now, in this new solution, user interaction or some kind of a notification from our server will provide us the new content and display it to the user.
This solution isn't that different, but it's an energy-efficient approach and makes a dramatic impact on our power over time. Let's take a look at why.
Now, if we imagine that our app is running in the foreground and a user interaction occurs, we would refresh our content feed and display it to the user. Then, our app will go idle as the user is using it, let's say, to scroll or just to read the content that's been displayed. You'll notice that the overhead here is still a little bit high, but it's been significantly reduced. The trick here is that we've allowed the subsystems we no longer need to go to sleep and idle off.
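As a sketch of the difference, here's the on-demand pattern in plain Swift, with the UI and networking factored out. The ContentFeed type and its method names are illustrative, not an Apple API:

```swift
import Foundation

// A minimal sketch of on-demand refresh: instead of a repeating timer,
// we refresh only when something actually changed.
final class ContentFeed {
    private(set) var refreshCount = 0
    private(set) var items: [String] = []

    // Called from a user interaction (e.g. pull-to-refresh) or a push
    // notification telling us the server has new content.
    func refresh(with newItems: [String]) {
        refreshCount += 1
        items = newItems
    }
}

// A timer-based approach would have called refresh() on every tick,
// whether or not content changed. On demand, we refresh exactly once
// per real event:
let feed = ContentFeed()
feed.refresh(with: ["Story A", "Story B"]) // user pulled to refresh
print(feed.refreshCount) // 1
```

The energy win comes from what is absent: no timer firing in the idle stretches, so networking, graphics, and processing can stay asleep between real events.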
Another tenet that we can follow to reduce our energy consumption in the foreground is to minimize complex UI.
So, I mentioned before that in graphics our energy consumption is highly complexity-dependent, and we always want to make our apps look as good as possible. So, we're going to spend a lot of time building this UI that looks great and animations that are pleasing to view.
However, this can have unintended side effects, and let's look at an example to illustrate why. If I'm building a video player, my goal is to let a user watch a video. Simple. But I could be tempted to add new controls and UI above that video, let's say, in the form of related videos or a scrubber or maybe volume controls. This gives the user a greater degree of control over the application while they enjoy the video they're watching. But it's actually insidiously energy inefficient, and let's understand why.
On many of our devices, there's a display optimization in place that allows video playback to be very energy efficient when there is no UI on screen. This is something that isn't immediately obvious when you're building an application like this, but it makes all the difference. So, a good approach to take advantage of this optimization and counteract the energy inefficiency we saw is a simple auto-dismissal of our UI controls. That means any related content that we put over the video in the UI layer simply goes away if the user is not interacting with it.
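One way to sketch that auto-dismissal is as a small, UIKit-free state machine, so the timing logic is testable on its own. The type name and the three-second delay are our own assumptions:

```swift
import Foundation

// Tracks whether playback controls should be on screen. A tap or scrub
// shows them; after dismissDelay with no interaction they hide again,
// letting the display optimization kick back in.
final class ControlsVisibility {
    private(set) var controlsVisible = false
    private var lastInteraction: Date?
    let dismissDelay: TimeInterval

    init(dismissDelay: TimeInterval = 3.0) {
        self.dismissDelay = dismissDelay
    }

    // Any user interaction shows the controls and restarts the countdown.
    func userInteracted(at time: Date) {
        controlsVisible = true
        lastInteraction = time
    }

    // Called periodically (e.g. from a timer in the real UI layer).
    func tick(at time: Date) {
        guard let last = lastInteraction else { return }
        if time.timeIntervalSince(last) >= dismissDelay {
            controlsVisible = false
        }
    }
}

let controls = ControlsVisibility(dismissDelay: 3.0)
let start = Date()
controls.userInteracted(at: start)
controls.tick(at: start.addingTimeInterval(1)) // 1 s elapsed: still visible
controls.tick(at: start.addingTimeInterval(4)) // 4 s elapsed: auto-dismissed
print(controls.controlsVisible) // false
```

In a real player, the tick would drive a fade-out animation on the controls overlay; the key is that the steady state during playback is "no UI on screen."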
This makes a big difference on our energy consumption during video playback, as this display optimization is critical for maintaining quiescent energy-efficient playback. So, we've talked a lot about the foreground, but what about the background? Many of us who are building applications such as music players, or maybe even alarm clocks, are focused on the background. Our main experience comes from our app running effectively in the background.
Well, when we're in the background, we have some things that we need to be aware of.
Likely, our app is going to be running concurrently with other things on the device. Let's say the user is also using iMessage or maybe even FaceTime. To that end, we should focus on minimizing our workload to ensure energy efficiency when we're in the background.
Well, this is a pretty broad statement. So, let's try to understand it. When you're in the background, you may be able to utilize subsystems that are already being used by other apps on the system. However, it's important to note that priority for those subsystems is going to go to the applications that are in the foreground. So, then, we should focus on minimizing our workload to make sure we don't interrupt those experiences.
One way we can start thinking about this is to coalesce all of our tasks. If there's a lot of maintenance work that we need to do in the background, or a lot of networking activity that needs to be performed, then it would be best for us to group those together and do them all at the same time. That way, we have minimal impact on anything else happening on the system. A really common example that many of you may face when working with application analytics is uploading those analytics.
It's likely that when you're collecting these analytics you'll be sending them immediately because this is a very robust solution, and it allows you to build a dataset that is protected against crashes in your application.
Well, doing that may not be very energy efficient. If we were to send our analytics every time we went into the background, we would risk overusing our networking hardware. And here's what that looks like on the power over time curve. Every time we enter the background, we would spin up networking resources to send these analytics, and then we would come back down and go idle again.
This may not look like a lot with just three on this graph, but you can imagine if your application is experiencing heavy usage, this adds up over time.
The right way to do this is super straightforward, and it's simply to send these in deferred batches. We have a lot of APIs that support this coalescing principle, and one of the biggest ones is NSURLSession. Using NSURLSession with the discretionary property on a background session will enable you to take advantage of this sort of optimization very quickly, and this is the right way to do it. Let's take a look at what the energy over time looks like now, if we've done this. We can see here that while it might take a little longer for our app to do any sort of uploading for analytics, the energy that we're going to consume is going to be far less, and it's going to be condensed into one single burst. This is effectively the result of coalescing any tasks when you're running in the background. You consume high energy for a short period of time completing those tasks, but once you're finished you no longer have to worry about doing those tasks and potentially interrupting the experience of another application.
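In code, that setup might look roughly like this. The session identifier, endpoint handling, and helper function are illustrative assumptions on our part, but the discretionary background session is the real NSURLSession (URLSession in Swift) mechanism:

```swift
import Foundation

// A sketch of deferring analytics uploads with a discretionary
// background session: the system picks an energy-optimal moment
// (e.g. good network conditions, device charging) to run the transfer.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.analytics")
config.isDiscretionary = true          // let the system defer the transfer
config.sessionSendsLaunchEvents = true // relaunch the app when transfers finish

let session = URLSession(configuration: config)

// Batch everything collected so far into one upload task instead of
// sending each event as it happens.
func uploadBatchedAnalytics(fileURL: URL, to endpoint: URL) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    session.uploadTask(with: request, fromFile: fileURL).resume()
}
```

The design choice here is to trade immediacy for coalescing: events accumulate in a local file, and one upload task replaces many small bursts of networking overhead.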
Another example that seems sort of straightforward is to end your tasks quickly.
There are many APIs on the system that allow you to take advantage of background running, things like UIBackgroundTask in UIKit, or VoIP with PushKit. These APIs have ways for you as an app developer to indicate that you no longer need to run in the background. So, it stands to reason that if you're using any of these background modes, you would call these completion handlers to let the system know you're done. Well, that doesn't always happen; in a lot of cases, we might forget, or not want to end our task. So, we let our tasks expire.
This has a significant energy impact, and it's really something that people don't necessarily see when they're developing their application.
Let me demonstrate why this is energy inefficient with the power over time curve.
You could imagine that you enter the background for some reason, your task starts, and you finish some time afterwards. Then, if we let our task expire, as we've said, we enter an idle phase where we're still consuming energy: our app is running in the background for whatever we asked the API for, but there's not much actually happening. And then, we have a long tail of overhead, because we've kept the system awake, and subsystems that thought we needed them are still holding their resources, waiting for us to finish. The quick solution to this is to simply call your completion handlers as soon as your work is done. As I mentioned, UIBackgroundTask is one of the biggest ones. When we enter the background from the foreground, we can call this API in UIKit. If we let the system know that we don't need to do any more work, we save a lot of energy and allow hardware subsystems to go idle when they need to. Here's what that looks like if we call these completion handlers.
You can see here that the tail of active energy that we saw before is gone, and now we've greatly reduced our tail of overhead as well. A simple solution, but it has a big impact on your overall energy consumption.
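In UIKit terms, ending the task promptly might look like the sketch below. The beginBackgroundTask/endBackgroundTask calls are the real UIApplication API; flushPendingWork and the task name are placeholders of ours:

```swift
import UIKit

// Sketch: wrap deferred work in a background task and end it as soon
// as the work completes, rather than letting the task expire.
func performDeferredWork(application: UIApplication) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = application.beginBackgroundTask(withName: "FlushWork") {
        // Expiration handler: the system's time limit was reached.
        // Clean up and end the task so we aren't killed.
        application.endBackgroundTask(taskID)
        taskID = .invalid
    }

    DispatchQueue.global(qos: .utility).async {
        flushPendingWork() // placeholder for the app's real work

        // Done: tell the system immediately so hardware can go idle,
        // instead of riding out the full background allowance.
        application.endBackgroundTask(taskID)
        taskID = .invalid
    }
}

func flushPendingWork() { /* app-specific work */ }
```

Calling endBackgroundTask the moment the work finishes is what removes both the active-energy tail and most of the overhead tail from the power curve.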
So, we've talked about some ways that we can start thinking about energy efficiency as a process. If we focus on optimizing the work we do in all of our use cases, we can really optimize the energy that our application consumes. For a deeper dive into the things we talked about, and to get a little more hands-on with the code behind some of these optimizations, I really recommend that you check out our video from last year, Writing Energy Efficient Apps. In that session, you'll find a lot of interesting resources and more examples of how you can use energy-efficient designs in your application.
So, now that we've talked about some ways that we can improve energy efficiency in the design of our application, and we've spent a lot of time talking about ways that we can improve our energy efficiency through thinking about the hardware systems behind our application, what are the ways that we can quantify this? Let's say we've made a change, and we want to understand the real impact in our application.
Well, right now, let's talk about some tools that we have available for you today to do that sort of work. Today, we have two tools available that you can use to quantify your energy impact.
The first tool is the energy gauges, which are accessible directly through the Xcode debugger.
The energy gauges are a great way for you to rapidly iterate on your code's energy consumption and to help you understand, at a very high level, where your energy consumption is going by subsystem. And then, if the gauges aren't enough, you can jump right into Instruments from the developer tools. Instruments will allow you to do a deeper dive into the various subsystems on the device and understand, at a lower level, how those subsystems are performing and what they're doing. Let's take a look at the energy gauges first. As I said, these are accessible directly through the Xcode debugger UI, so they're pretty easy to use. Let's jump into the UI. As you can see, we've selected the row that says Energy Impact, and now we have this main area in the UI that's composed of three major sections. On the top left, we have the canonical gauges themselves. These gauges range from low to high to very high, and represent the average energy impact of your app at an instantaneous moment. It's important to know that where the gauge falls doesn't necessarily mean good or bad. It means that whatever your app is doing, it's consuming this much relative energy. It's up to you as an app developer to think about your use case and whether or not you would expect it to do that. To the right of that, we have the average component utilization, a pie chart that shows, for each component, what percentage of your app's total energy consumption it is responsible for.
This is really useful because it's representative of those subsystems we talked about earlier, and it helps to identify if you have an excess amount of overhead, or if one component is taking more energy than you expect. And then, immediately below that, building off of the average component utilization chart, we have a time series that represents the average utilization of each component as your app is running in real time. We can also see here the state that your app is actually running in, foreground and background, and it would also list suspended.
This is a really awesome tool for understanding how your app is behaving in real time. So, as I said, the energy gauges are really great for doing high-level characterization work and rapid profiling. That's the key. When you're iterating on your code, you're trying to get something to work as an app developer, and you're trying to put something together, it may not seem immediately clear how you could really think about energy, but the gauges are a great way to start.
But let's say that you've done that and the gauges aren't really enough for you. That's where the instruments come in, and directly through the energy gauge's UI, we have access to three instruments that we think best correlate to the subsystems we talked about before.
These include the Time Profiler, the Network Profiler, and the Location Profiler, and if you were to click through into any of these from the energy gauges UI, you would be able to transfer your current debug session into that instrument. Let's take a look at one of the instruments here, the Time Profiler, and try to understand the UI. Now, the instruments have a very standard UI, but what's interesting about it is that it's very usable. Let's take a look.
Here, we can see the Time Profiler UI, and on the top, you see a bar that's representative of the different controls that you have over the actual instruments. On the top left, you can see you have a Play and Pause button as well as the target that you're using to profile. And then, on the right, you see a plus button that allows you to very quickly drag and drop other instruments into your profiling pane, which can be found here.
And now, this profiling pane actually allows you to see what instruments are running and currently profiling your application. Here, since we're using a Time Profiler, we see the CPU usage and a graphical representation of how much CPU usage is being consumed over time.
Directly below that, we have a weighted call graph. Since we're using the Time Profiler, we're trying to understand how our CPU is being used by the application.
To that end, there's a weighted call graph that allows you to see exactly what is being called in your application and how much weight it has on CPU time. And then, directly to the right of that, you have a summation of the heaviest stack traces in your application, which basically tells you what the heaviest stack was during this profiling run. There are a lot of other great instruments that you can use, and here are some of them now. This means that the instruments are really great for a couple of things. The first thing is that the instruments are really great for root cause analysis. Let's say you have a problem in a specific subsystem, such as processing or networking. You would be able to identify pretty rapidly what that problem might be using the Time Profiler or the Network Profiler. The instruments are also really great for doing in-depth profiling of your application. If you implement a CPU efficiency improvement of some kind, let's say you cut down the time it takes for an algorithm to execute, the instruments are a really good way to understand if the intended effect of your optimization is actually showing up on that subsystem. But there's also one more thing that the instruments are really awesome for that I haven't talked about today, and that's untethered profiling.
There's a single instrument that you can use called the Energy Log, which allows you to do an untethered profiling run on a provisioned device while using your application. It's accessible directly through the developer settings, and when you start running it, you can use your phone as you normally would and use your application as you might expect for any number of use cases. And then, afterwards, when you're finished, you can stop the recording directly from the developer settings, jump into Instruments, and upload that trace. This is really useful for understanding if there are any environmental problems that might be impacting your energy consumption. Now, we've talked about the tools; we've talked about the concepts; now, I want to do a demo and work through an example of how we can actually use these in tandem to solve energy problems and make our app more energy efficient.
So, today, we've prepared a simple game called Energy Game, which draws sprites onscreen and allows us to inject a number of bugs. It's a very simple application that we've built, and it only has an app delegate and a view controller, but the primary purpose is to show you how to use our tools to rapidly iterate through your code. So, I'm going to go ahead and build Energy Game here through the Xcode UI and let it run. Then, you'll see on the right side that all it really does is draw a little battery sprite at a random time. There it is. Very simple. If I jump straight into the Xcode debugger and jump to Energy Impact, now I can see my gauges. This is the UI that we just talked about, with the same three areas that we discussed, and you can see that right now all my app is doing, as we've designed it, is placing some sprites onscreen. But you notice that I'm doing networking, and my overhead seems to be high for seemingly no reason. Well, this is because we're also doing a little bit of networking, uploading the sprite count every time a new sprite is drawn onscreen. And so, through the Xcode energy gauges, you can actually see the impact of doing that. So, I'm going to go ahead and stop this now and jump into my code to understand where this is coming from. If I go to my view controller, where I actually add a new sprite, I have a function here to upload the sprite count, which creates a simple connection object and uploads the sprite count every time a new sprite is added. I'm going to go ahead and comment this out, then jump into my app delegate and move it so that we only upload the sprite count when we enter the background. For the sake of this demo, I've named that my networking optimization. I'm going to go ahead and rebuild Energy Game and show you the effect this has on the energy gauges. Now, Energy Game is running again.
I'm going to jump back to the Xcode debugger UI, jump back to Energy Impact, and now we don't see any networking energy, and we don't see any overhead, which is good. So, that simple optimization, moving a networking request from one place to another and preventing it from happening too often, allowed us to greatly reduce our energy impact in our quiescent use case. So, now, I'm going to go ahead and inject a bug and see how we can spot it using the Xcode energy gauges. Bug1 is a simple bug that you can see on the bottom left here that will essentially cause a CPU spin in the background. This is a case that many of us might face in regular, real-world development. I'm going to go ahead and inject this bug. And now that I've injected it, I'm going to background Energy Game, and as you can see in the energy gauges UI, we transfer to the background. We do a little bit of networking because I moved that networking call to the background. But now, we also see that our CPU is going wild. So, this is the power of the gauges. We know that we injected a bug, and we can see that bug directly in the gauges.
So, now, to find the root cause, I'm going to go ahead and jump into the Time Profiler and transfer my debug session, as we discussed before.
So, now, I've transferred my debug session, and it will begin running automatically. And as you see, the weighted call graph will start populating in a moment; here, we can see that this dispatched thread is consuming the most CPU time. Let's go ahead and dig into it. And we can see that we have a closure at something called AppDelegate.computation. Well, let's jump back to our application and try to understand what that is.
So, for the purpose of this demo, when we entered the background in Energy Game, we called something called computation.
Computation is a really terrible function. It basically starts spinning wildly with a while-true loop when we inject Bug1. So, it's very simple for the purpose of this demo, but using both the gauges and the Time Profiler, we were able to dig back directly to where this was happening, and we can see that this while-true loop is not good.
So, I'll go ahead and comment this out because I love commenting out code instead of deleting it, and I'll go ahead and rebuild Energy Game.
We'll just jump back into the gauges to see that everything is okay, and now we'll go ahead and inject Bug1 again, and I'll go to the background.
And we see our expected networking activity but no CPU spin. Voila! We've solved it, using two tools in about 30 seconds or a minute. That's the power of these tools. They're able to let you rapidly iterate and root cause problems that you might face on day-to-day development. So, let's go back to the slides.
So, there's some takeaways from this demo.
The first takeaway is that the gauges, as we said, are great for rapid iteration. They allow you to quickly see where your problem might be happening, and they allow you to take the next step in figuring out how to solve it.
The second takeaway is that the instruments are great for in-depth profiling. And finally, the third takeaway is that we want you to think about energy efficiency as a primary objective in your application development. We have powerful tools available for you to quickly understand where your energy is going and to root cause problems that might be energy related. So, let's say you've done all of that, and you've shipped your application. It's in the App Store, it's getting used, and all your customers are thankful that you shipped it on time. What's next? Let's say you still see customers saying that your app is bad for battery life. What sort of recourse do you have? Well, now, I'm going to pass it on to my colleague, David, who's going to talk to you about how you can face those challenges and solve them using our new tools. David. Good afternoon. Hi, I'm David, and I'm here today to talk about some great new tools for energy debugging. If you're an iOS developer with an app in the App Store, or in TestFlight, then this part of the talk is for you.
I'd like to start with the following question: now that you've shipped your app, how do you know how your app is doing in the wild? In other words, how do you know if your customers are experiencing energy issues that are leading to bad battery life? Now, a customer may leave a review on the App Store, saying, "My battery went down a lot while using this app." But they might not be able to tell you what happened. Or even worse, they may delete your app and not leave any feedback at all.
So, it can be challenging to find out if you have energy issues in the wild. And even if you know that there are energy issues, how do you debug an issue that occurred on your customer device? You can make use of tools like instruments and gauges that Phil talked about, but unless you know what to test for, it can be challenging to reproduce.
There can be environmental factors, such as poor Wi-Fi conditions, that occurred for your customer, whereas on your desk you have great Wi-Fi conditions. These are some really challenging questions. To help answer them, I'm excited today to talk about a new way of debugging energy issues using Xcode Energy Logs and the Xcode Energy Organizer.
First, I'll talk about Xcode Energy Logs, which is a new way of reporting energy issues on device. Later, I'll cover Xcode Energy Organizer, which is a new tool for viewing Energy Logs. With these tools, for the first time ever, you'll have the data that you need to find and to fix energy issues. So, let's get started.
Xcode Energy Logs are a new way of reporting energy issues from devices. We start with high CPU energy events, which occur when your app is using lots of CPU. Each Energy Log will have a weighted call graph, which will point out the energy hotspots within your code.
These logs will be made available from TestFlight and the App Store, so you'll have real-world data about what's actually happening with your customers. And with these logs, you'll be able to begin improving the battery life experience. Let's talk about when an Xcode Energy Log is generated.
Let's say your customer is using your app, which starts to put a really heavy load on the CPU. This can be natural, depending on what your app is doing. Well, let's say it's putting a really heavy load on the CPU for a long time. This causes a high CPU energy event to be detected. Now, there are two key thresholds that are checked for a high CPU energy event. The first threshold is when your app is using more than 80% CPU for more than three minutes while in the foreground, and the second is more than 80% CPU for more than one minute while in the background. In this latter case, your app may actually get killed to prevent runaway background usage.
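To make the two thresholds concrete, here's a minimal sketch of the detection rule as just described. The function name and parameters are illustrative assumptions; the actual detection is performed by the operating system, not by code in your app.

```swift
// Illustrative only: models the two thresholds described above.
// The real detection is done by the system, not by app code.
func tripsHighCPUEnergyEvent(cpuFraction: Double,
                             durationMinutes: Double,
                             inForeground: Bool) -> Bool {
    // Foreground: more than 80% CPU for more than 3 minutes.
    // Background: more than 80% CPU for more than 1 minute.
    let minutesAllowed = inForeground ? 3.0 : 1.0
    return cpuFraction > 0.8 && durationMinutes > minutesAllowed
}
```

For example, two minutes at 90% CPU is under the limit in the foreground but would be flagged in the background, where the allowance is only one minute.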
Each instance of a CPU Energy Log indicates that your app used so much CPU that it was worth flagging.
What this means in practical terms is that it was responsible for up to a 1% battery drop in a typical case. Now, you may be saying to yourself, 1% battery doesn't sound too bad.
But to put this in context, on an iPhone 6s with an additional 1% battery, your user could have had eight minutes of additional talk time, or six minutes of additional browsing, or 30 minutes of additional music. And if your app continues to burn at this rate, the battery would drop even more. So, writing CPU-efficient apps is really important, and your users will notice. An Energy Log has three things that can help you figure out what happened. First is the context: what happened that triggered the report. For example, it will say that your app spun at over 80% CPU for more than three minutes. The second piece of information is the metadata about where the Energy Log was created; for example, on an iPhone versus an iPad, and on, say, Build 30 of your app.
The third and most important piece of information is the weighted call graph that will show you the energy hotspots in your code. So, let's talk a little bit more about the weighted call graph, how it was generated, and how you can use it to debug energy issues.
Let's say your program is made up of a main function and a number of methods: Method 1, Method 2, Method 3, and Method 4. Your code begins to execute until a high CPU energy event is detected. Up to this point, backtraces are continuously sampled at a periodic interval of once per second, where each backtrace is a sample of the active frames in execution.
The first backtrace, for example, shows that main, Method 1, and Method 2 were active.
The second backtrace shows that main, Method 3, and Method 4 were active, and so on.
Now, we can combine these backtraces together to form an overall picture. What we see here is a weighted call graph, and this weighted call graph is really useful. Here, we can see that main was present in six out of the six samples that we collected, meaning that main was running 100% of the time. Of that, we see that Method 1 had five samples, whereas Method 3 had only one sample. And within Method 1, we see that Method 2 and Method 3 had three samples and one sample, respectively.
So, this gives us an overall picture of where the code was being executed and how much time was being spent.
So, when an Energy Log is created, it contains a collection of periodic backtraces sampled once per second. Each backtrace contains a list of the active frames being executed by the CPU. These backtraces are aggregated by sample count into a tree, where more samples mean more heavily executed code. And you can use these weighted call graphs to identify unexpected workloads in your app.
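As a sketch of that aggregation step, here's one way periodic backtraces could be folded into a weighted call tree. The types and names here are hypothetical, not Xcode's actual implementation.

```swift
// Hypothetical sketch of aggregating sampled backtraces into a
// weighted call graph; not Xcode's actual implementation.
final class CallNode {
    let symbol: String
    var samples = 0
    var children: [String: CallNode] = [:]
    init(symbol: String) { self.symbol = symbol }
}

/// Each backtrace lists frames from outermost to innermost,
/// e.g. ["main", "Method 1", "Method 2"].
func buildWeightedCallGraph(_ backtraces: [[String]]) -> CallNode {
    let root = CallNode(symbol: "<root>")
    for trace in backtraces {
        root.samples += 1
        var node = root
        for frame in trace {
            // Create the child node on first sighting, then bump its count.
            if node.children[frame] == nil {
                node.children[frame] = CallNode(symbol: frame)
            }
            node = node.children[frame]!
            node.samples += 1
        }
    }
    return root
}
```

Feeding in the six samples from the example above would give main a count of six, Method 1 a count of five beneath it, and so on; nodes with unusually high counts mark the heavily executed code.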
So, now that we know what an Energy Log is, how do we access them? First, Energy Logs are created on device.
Then, your beta testers and your customers, who have opted in, will upload these logs up to Apple.
Now, there might be hundreds or even thousands of these logs, so we will aggregate them for you, sort them, and present them to you as a list of top energy issues.
And you can download and view these logs using the new Xcode Energy Organizer tool. The Xcode Energy Organizer is your command center for debugging energy issues in the wild.
Energy Organizer makes it really easy to view energy logs.
The Energy Organizer is connected to TestFlight and the App Store, so you'll see a list of all your iOS apps. You'll be able to see some statistics on how often these energy issues occur in the wild. You'll have a list of the top energy issues, sorted by how many devices were impacted. You'll have a view of the weighted call graph for a number of different logs, which you'll be able to page through. And you can use Open in Project to jump directly into your code base so you can begin debugging these energy issues. And now, I'd love to show you a demo.
Now, I've made sure that I've signed into my developer account and that I've uploaded our Energy Game app to TestFlight and the App Store. To bring up the Energy Organizer, I can just go into Window here and click Organizer.
And this is the Energy Organizer UI. I make sure that the Energy tab is selected at the top, and if you've used the Crashes Organizer, you will already be familiar with this UI. On the left, we have a list of all of our apps.
Next to that, we have a list of our top energy issues. In the center is our Weighted Call Graph, and on the right-hand side are some statistics about the energy issue.
So, let's go ahead into the left here and select Energy Game, which is the game that we're working on. And then, make sure that we're on the correct build.
We see here a list of our top energy issues, sorted by how many devices were affected. Let's jump into this first energy issue, which hit 64 of our devices.
On the right-hand pane here, we have some more details about what happened, as well as a breakdown of how often that energy issue happened. We can see that it happened across a mix of iPhones, iPads, and iPod Touches, and we can see a distribution of how often it happened in the past two weeks. Let's take a look at the weighted call graph. We see that a lot of time is being spent in this dispatch call block calling into this app delegate computation function. Now, I can use this button here to jump directly into our code base. So, we are back in our code. On the left here is one of the sample backtraces from our weighted call graph. We can see that we're spending a lot of time in this computation function. Now, this is the very function that Phil was talking about earlier in his demo.
And we can see that he's already commented this part of the code out, so he's already addressed this energy issue. So, let's jump back to the organizer.
I can go ahead and click this button here and mark this issue as resolved.
And what this does is the next time we open the Energy Organizer, we'll see that we've already taken care of this issue. All right, let's jump to the second issue, which hit 42 devices. Now, before going into the weighted call graph, I'd like to draw your attention to three features at the bottom here.
First is this page-through-logs control, where I can select one of five sample energy logs out of the 42 hit in the wild. As I page through them, you can see that the weighted call graph looks a little bit different, which is okay because these backtraces and weighted call graphs are samples.
However, we've grouped these together by similarity, so these logs should look fairly similar to you.
This button here, when I click it, shows all the system library frames that were hidden previously. And this button here, when clicked, shows all the frames that had low sample counts. Now, by default, we've hidden most of these frames for you so that we only show you the most important frames.
Let's take a look at this function.
It looks like a lot of time is being spent in this heavy timer function. Actually, I heard Phil talking about this bug off stage, and he said that he was going to take a look at it, so I'll let him deal with it. I can go ahead and rename this and move on to the next bug.
Let's take a look at one more bug.
Here, I can see there's a lot of time being spent in set next update timer and add new sprite. What is this function? Let's investigate.
I'll jump directly into the code, and I can see that a lot of time is being spent in this add new sprite function. Okay. Adding new sprites can be expensive, but the question to ask ourselves is: is this an expected workload? And the answer, in this case, is not really, because we only expect to be adding sprites once every few seconds. So it doesn't quite make sense why this is chewing up so much CPU. Let's take a look at the backtrace to see who is calling us. We're being called by set next update timer. So, what is this function doing? We see that within set next update timer, we're calling into this add new sprite. At the end of the function, we're calling into this update timer to schedule the next time this function is called.
This timer is set to fire sometime between now and next update interval.
Now, next update interval is decremented by 1 until it hits 0, and then it's re-initialized according to this line of code here.
Now, here's where the problem is.
Time interval since last update date can potentially be negative, and we've seen cases of this happening, especially when users try to game the system. Maybe they're playing a game, and they want to reset the clock. Maybe they want some extra lives or some extra chances, so they go into System Settings and change the clock to 24 hours ago.
Well, in this case, this causes next update interval to be negative, and when we schedule a timer for a time that is sometime in the past, that timer will fire immediately and then call itself again and again.
So, we effectively have an infinite loop here. Fortunately, this is really easy to fix. We just go into this function here and change this to less than or equal to 0 so that even if next update interval is negative, we can break out of the loop. Now, this is a really great example of an energy issue that is really difficult to catch during normal testing but is made obvious once you have the data from the field. That's the power of Energy Logs, and that's the power of Energy Organizer.
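Here's a hedged reconstruction of that countdown logic and the fix. The function and variable names are guesses based on the talk, not the actual sample project, and the clamp to a minimum of one second is an added safety assumption on top of the `<= 0` change.

```swift
import Foundation

// Reconstruction of the demo's countdown logic (names are guesses).
// `elapsedSinceLastUpdate` would come from something like
// Date().timeIntervalSince(lastUpdateDate), and it can be NEGATIVE
// if the user sets the device clock backwards.
func nextUpdateInterval(current: TimeInterval,
                        elapsedSinceLastUpdate: TimeInterval) -> TimeInterval {
    var interval = current - 1
    // The fix: `<= 0` instead of `== 0`, so a negative interval is
    // also re-initialized instead of scheduling a timer in the past,
    // which would fire immediately — an effective infinite loop.
    if interval <= 0 {
        // Clamp to at least one second (an added assumption) so a
        // negative elapsed time can never produce a past-due timer.
        interval = max(elapsedSinceLastUpdate, 1)
    }
    return interval
}
```

With `== 0`, a negative interval would never be re-initialized and the timer would keep firing immediately; with `<= 0` plus the clamp, the scheduled interval is always positive.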
Let's take a look at the three key takeaways from this demo. You can use the Energy Organizer to discover top energy issues in the field.
Take a look at the top issues, take a look at how often they're happening, and take a look at what kind of devices and builds are affected.
Second, you can view energy hotspots using the weighted call graphs. So, look out for the frames with unusually high sample counts, and watch out for the unexpected workloads.
Finally, use Open in Project to jump directly into your code so you can inspect what's going on and make fixes.
Let's summarize what we've learned today.
First, think about energy use and treat energy as a first-class citizen in every aspect of your design, development, and testing.
Second, make use of the great tools like energy gauges and instruments to profile your app.
And third, take a moment to explore the new Xcode Energy Organizer to understand and fix energy issues in the field. For more information, please come check out the following URLs, and feel free to come by the Power and Performance Lab on Friday from 9 to 11. Thank you and have a great evening. [ Applause ]