Creating a responsive user interface is one of the most important considerations for a mobile developer, and the smooth scrolling and quick responsiveness of iOS has been one of its hallmarks since day one. (I've got to be honest here: every now and then I still find great amusement in just flicking around a simple web view or scrolling some text on my phone. It just feels so right!) Keeping a smooth flow and a one-to-one correspondence between user touch and visual display is crucial for maintaining the illusion that you are directly interacting with the objects on the screen. One key rule makes this happen: do not block the main thread. If you are doing anything that might take a significant amount of time, you must do it on a background thread.
With iOS 4.0 and the introduction of blocks and Grand Central Dispatch, it became much easier to complete tasks in the background asynchronously without having to dive into the implementation details of threads and such. If you haven’t yet tried out GCD, take a look at the docs, or check out this tutorial to get you up and running quickly. It’s great for parsing data, downloading from the network, etc. off the main thread, and GCD makes it very easy to write code that will call back into the main thread once the background process completes. What it doesn’t work well for is anything to do with UIKit or the user interface. All drawing and touch interaction takes place, by design, on the main thread. So, what do you do if your drawing itself is taking too long and blocking the main thread? I’m sure there were people much cleverer than me who found some ways to get around it and do some drawing in the background, but basically up until iOS 4, UIKit was not thread-safe at all. If your drawing is too complicated and blocks, then you need to optimize it or simplify it. However, the release notes for iOS 4.0 contain the following short section:
Drawing to a graphics context in UIKit is now thread-safe. Specifically:
- The routines used to access and manipulate the graphics context can now correctly handle contexts residing on different threads.
- String and image drawing is now thread-safe.
- Using color and font objects in multiple threads is now safe to do.
This was not something I had really been interested in or concerned myself with until I ran into just such a problem recently. I created a custom subclass of UILabel which adds a colored, blurred shadow to the label to give it a glow effect. But this drawing took drastically longer than regular string drawing. For example, for the drawing that happens at app startup, using regular UILabels takes 104 milliseconds total in drawRect:; drawing the exact same strings with shadows takes 1297 milliseconds! So you can imagine what this does to frame rates when multiple labels are being updated rapidly during an already CPU-intensive section of the code.
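For the curious, here's roughly what such a subclass might look like. This is a hedged sketch, not the code from my actual project: the class name and the glowColor/glowRadius properties are hypothetical, but the core trick is real: set a blurred, zero-offset shadow on the context and let UILabel draw the text through it.

```objc
// Hypothetical sketch of a glowing UILabel subclass.
// glowColor and glowRadius are assumed properties, not part of UILabel.
@interface GlowLabel : UILabel
@property (nonatomic, retain) UIColor *glowColor;
@property (nonatomic, assign) CGFloat glowRadius;
@end

@implementation GlowLabel
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    // A blurred shadow with zero offset reads as a glow around the text.
    CGContextSetShadowWithColor(context, CGSizeZero,
                                self.glowRadius, self.glowColor.CGColor);
    [super drawRect:rect]; // UILabel draws the string, now with the glow applied
    CGContextRestoreGState(context);
}
@end
```

The shadow blur is what costs so much: instead of a straight blit of glyphs, Quartz has to render and blur an offscreen copy of everything drawn while the shadow is set.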
Multi is fun threading!
Since I already knew ahead of time exactly which strings I needed to display during this particular bottleneck, it would be nice to draw all the labels at once in the background and cache them for later. My first approach was a bit naïve. Well, I thought, that paragraph in the iOS 4 release notes says something about graphics contexts and string drawing being thread-safe, and with GCD I can easily create a bunch of labels in the background and then send them back to the main thread. The creation part worked out just fine, but because of the way the graphics run loop works, the drawing didn't happen until it was time for the labels to appear onscreen. So, no advantage there.

How about if I create my own graphics context and call drawRect: myself while on the background thread? Generally, you want to let the system call this method, because it's very intelligent about knowing when views need updating and how to synchronize that with the animation frame rate and display refresh cycle. But what if you call it yourself anyway? That turned out to be no good either. The labels would draw in the background, but then would just redraw back on the main thread. Again, no performance improvement, and probably a net loss.

After a trip to the developer forums and a quick, helpful response to my question, I realized I was going about this completely the wrong way. And of course, after reading those release notes a bit more carefully, I saw what my problem was: drawing to a graphics context is thread-safe, and string and image drawing are thread-safe, but this doesn't mean the standard UIKit objects play nicely with multiple threads. As my experiments showed, the main run loop does what it does, and even if something like a UILabel is already created and forced to draw before it is presented on the screen, as soon as it appears onscreen for the first time, it gets the signal to redraw.
Part of the reason I pushed things as far as I did is because there’s so much convenience to UILabel drawing that you don’t get as easily when you’re dealing with CoreGraphics text rendering directly, but just as I suspected earlier and ignored: CoreGraphics/Quartz is the way to go. So, here’s what I finally figured out. So far it seems to be working:
On a background thread – dispatched to a background queue with GCD – do the following:
UIFont *labelFont = [UIFont fontWithName:@"Helvetica" size:32.0];
NSString *labelText = @"Foo";
CGSize viewSize = [labelText sizeWithFont:labelFont];
UIGraphicsBeginImageContextWithOptions(CGSizeMake(viewSize.width, viewSize.height), NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, textColor.CGColor);
CGContextSetShadowWithColor(context, CGSizeZero, shadowRadius, shadowColor.CGColor);
[labelText drawInRect:CGRectMake(0.0, 0.0, viewSize.width, viewSize.height) withFont:labelFont lineBreakMode:UILineBreakModeClip alignment:UITextAlignmentCenter];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
[imageArray addObject:theImage];
UIGraphicsEndImageContext();
By creating the string first, we can get the size of the rectangle we'll need to fit the string at a given font and size. Then we create a new graphics context to draw into (line 4), at the needed size. The BOOL parameter indicates whether the context is opaque; passing NO keeps the background transparent, which we need for the glow. The final parameter is a scale factor, and by passing 0.0 we tell the context to render at the scale of the current screen. That way, we automatically maintain resolution independence on both Retina and non-Retina devices. Then, after setting the colors and the shadow, we draw the string into the context. Next comes the key step (line 9): creating a UIImage object from the current graphics state. Finally, make sure to end the context.

Once all that is complete, you have a UIImage object which can be passed back to the main thread and displayed in a UIImageView or used as the contents of a CALayer.

One reason I decided to write this little tutorial is that most of what I found in my own googling and on the dev forums about background-thread drawing was written before iOS 4.0, so basically it all said: don't do it! My example here obviously covers one particular case, but hopefully the pattern is clear. The main takeaway for me was being reminded that UIKit as a whole is still not thread-safe, including methods such as drawRect:. What is thread-safe is the creation of a graphics context and the Quartz calls used to draw into and manipulate that context. To get what you've drawn onto the screen, you must pass it back to the main thread as an image and then use it from there however you want.
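Putting the pieces together, the GCD wrapper around the drawing code might look something like this. A sketch under stated assumptions: renderGlowImageForString: is a hypothetical helper containing the context-drawing snippet above, and imageView is assumed to already exist on the main thread.

```objc
// Render off the main thread, then hand the finished bitmap back to UIKit.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Safe in the background as of iOS 4: creating a graphics context
    // and drawing strings/images into it.
    UIImage *glowImage = [self renderGlowImageForString:@"Foo"];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread: safe to touch UIKit again.
        imageView.image = glowImage;
    });
});
```

The same image could instead be assigned to a layer with `someLayer.contents = (id)glowImage.CGImage;`. Either way, the expensive shadow rendering happens once, in the background, and the main thread only ever composites a cached bitmap.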
Simulator vs. Device
I noticed pretty early on that my frame rate was suffering during a certain user interaction, but it took me a while to realize that the label drawing was actually the problem. As I've already said, there was quite a lot of other extra code that needed to run during this same process, so I figured the whole thing was just a bottleneck in general. But as soon as I fired up Instruments and saw the ridiculous amount of time being spent in [UILabel drawTextInRect:], I knew I had to do something different. A good reminder: measure, never guess. I'm sure everyone knows this, but it never hurts to hear it again: for true performance testing, there is no substitute for testing on the device. The simulator is nice for fast iterations and no-hassle previews of your work, but its performance is usually completely different from the device's. In the simulator, these shadowed labels render with no problem. One thing that makes iOS so nice and smooth and snappy is that it is hardware accelerated. The graphics chip takes on a lot of the work of layout and compositing, but the actual drawing of the images that get passed off to the graphics hardware is done on the CPU. No wonder the simulator running on my MacBook Pro has no trouble rendering these shadows while the iPhone itself bogs down! Interestingly, every now and then I've found the simulator to be slower than the real device. The point is, you just never know, so TEST ON A DEVICE! And USE INSTRUMENTS! Okay, that's enough yelling for now.