Updating for Modern iOS Design – A Case Study

TL;DR — Check out the pictures and videos on this page, and you’ll see how I adapted the basic circular menu of Click version 1 to fit the new iOS 7 design aesthetic, not just in visual terms but also in interaction style: clean and almost chrome-free, with more color, more seamless transitions, new inertial/physics-y interactions, and direct-manipulation transitions.

 

Hello Again!

As you can probably imagine, a LOT has happened in my life in the almost two years since I last posted on this blog (and therefore also on iDevBlogADay)! And most of it has not been iOS-development related, so I guess that’s my excuse for such an incredibly long time between posts. But I have been working when I have the time, and I just released a new version of my iPhone app called Click Metronome. The original version was the topic of many of my posts here on this blog, especially about the design of the circular control. For this post, I thought it might be interesting to explore some of the design changes I made to Click when releasing version 2. The principles behind the design – and the features of the app – are largely the same, but the actual look and feel is completely different. So first, how about a look at a screenshot from version 1, and then the promo video for the new iOS 7+ version. Can you guess which is which? 🙂

 

[Screenshot of Click version 1, followed by the promo video for version 2]

 

Circular Menu

At the time, I was really happy with how Click turned out. Then – and now – there are a lot of metronome apps out there, but they seem to either be too simple and missing key features, or feature-rich but with a screen that’s completely full of buttons. So, one of my main goals for Click was to offer all the main features people would want – wide tempo range, variety of time signatures, accent options, etc. – and make them quickly accessible, but without cluttering up the screen with tons and tons of buttons. So, after much iteration, I came up with this menu system that showed all the current settings around a small circle. The user taps an item, and all the options for that setting are revealed in the circle, ready to be set by rotating. Here’s that menu in action in version 1 and version 2 (watch in slo-mo):

 

[Animation: the version 1 menu]

[Animation: tapping through the version 2 menu]

 

Version 1 Problems and Fixes:

I felt like I had accomplished the goals I set out with, and watching people as they used the app, they always eventually figured out how to use it. But it was never as intuitive and easy as I had hoped. As I got feedback from others, and as I thought more and more about the app myself, I came up with a whole list of problems with the implementation of the menu.


Circular Scrolling + Inertia

Why Inertia?

I’ve talked a lot on this blog about the circular scroll-view/table-view thing I created for my metronome app, and while I’m very pleased with how the whole thing turned out, I was still annoyed that I hadn’t figured out how to do one thing: give it inertia like UIScrollView. A commenter on one of the posts even asked me about inertia, and I basically just flat out said I’d like to, but it would be too hard! But after that, it just kept bugging me; there had to be a way. Inertial scrolling on iOS – and now on OS X too – just feels so nice. It’s one of those things that many people probably don’t even think about or notice until it’s gone. After feeling the smooth scrolling and realistic deceleration of UIScrollViews, my little rotary control just feels like garbage. I mean, come on, you give it a little flick and it stops rotating immediately when you lift your finger!? Lame! Frankly, my control just doesn’t feel completely at home on iOS yet, and I want to change that.

Maybe UIScrollView Can Help!

I got really excited after watching the WWDC 2012 video for Session 223: Enhancing User Experience with Scroll Views. In the presentation, they give an excellent demo of how to leverage the physics of UIScrollView even when your app uses OpenGL. The UIScrollView is not used for any drawing, but by passing touches through it, listening for scrollViewDidScroll: callbacks, and reading the contentOffset, you can enjoy all the benefits of UIScrollView – like inertia and edge bounce – without actually drawing your content in the scroll view itself. Maybe, I thought, I could use that same trick on my circular scroller. Using the techniques from the video, I was able to get up and running with the same circular action plus a bit of UIScrollView-provided inertia, but it just did not feel right at all. The geometry of linear scrolling and deceleration just doesn’t line up with what I need in terms of angular rotation.
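
If you haven’t seen the session, the gist of the trick looks something like this. This is just a rough sketch of my understanding of it – all the names here are placeholders, and the content-size math depends entirely on your app:

    // An invisible UIScrollView supplies the physics; we only read its
    // contentOffset to drive our own drawing.
    @interface PhysicsDrivenView : UIView <UIScrollViewDelegate>
    @property (nonatomic, strong) UIScrollView *scrollView;
    @end

    @implementation PhysicsDrivenView

    - (id)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            _scrollView = [[UIScrollView alloc] initWithFrame:self.bounds];
            _scrollView.contentSize = CGSizeMake(self.bounds.size.width,
                                                 self.bounds.size.height * 10.0);
            _scrollView.hidden = YES; // never drawn; we only want its physics
            _scrollView.delegate = self;
            [self addSubview:_scrollView];
            // Route this view's touches into the hidden scroll view.
            [self addGestureRecognizer:_scrollView.panGestureRecognizer];
        }
        return self;
    }

    - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
        // Map the linear contentOffset onto whatever the content needs --
        // an OpenGL camera position, or in my case, a rotation angle.
        [self updateContentForOffset:scrollView.contentOffset.y];
    }

    - (void)updateContentForOffset:(CGFloat)offset {
        // Custom drawing goes here.
    }

    @end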

Eureka!

The video technique didn’t work out quite like I had hoped, but I still took away one key idea: implement the inertia in the touch handler, not in the circular view itself. Seeing as no one has seen the details of my implementation, that might not make sense to anyone but me right now, so I’ll try to explain. At its most basic, my control has two parts: a transparent view that interprets the touches as circular movements (a gesture recognizer, if you will, although it’s not an actual UIGestureRecognizer subclass) and the display view that receives notifications of those movements and rotates, swapping elements in and out much like UITableView cells. I had already implemented a bit of animation in the view component that allowed it to center on a certain cell, and I kept trying to think of how to make that work for a longer rotation, but I kept running into problems with the cell swapping during the rotation animation. If, instead, I implement the inertia on the control side and use the same delegate call for rotation whether it comes from an actual touch OR from the inertia, the view side doesn’t even know the difference. And it actually worked pretty well without much modification at all!
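
To make that a little more concrete, here’s a minimal sketch of control-side inertia using a CADisplayLink and a simple exponential decay. The method and property names are illustrative, not my exact API:

    // On touch end, keep feeding the same rotation delegate callback from a
    // CADisplayLink, decaying the last measured angular velocity. (Requires
    // linking QuartzCore.)
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        // Velocity measured from the last few touch movements, in radians/sec.
        self.angularVelocity = [self velocityFromRecentTouches];
        self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                       selector:@selector(stepInertia:)];
        [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                               forMode:NSRunLoopCommonModes];
    }

    - (void)stepInertia:(CADisplayLink *)link {
        CGFloat delta = self.angularVelocity * link.duration;
        // The exact same delegate call a live touch generates -- the display
        // view can't tell a finger from the inertia engine.
        [self.delegate rotationControl:self didRotateByAngle:delta];
        self.angularVelocity *= 0.95; // exponential decay; tune to taste
        if (fabs(self.angularVelocity) < 0.01) {
            [self.displayLink invalidate];
            self.displayLink = nil;
        }
    }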

Check out the Repo

So, here’s the real/final point of this post: I’d love to get this whole rotation control thing (or most of it, at least) out into the wild as an open source component, but it’s been hard finding the time. It’s a fairly complex API, and I don’t feel like I can just release it without some more clarity and focus, as well as a nice sample project. So, I’m going to try to get it out there piece by piece if possible. I’ve put up a little bare bones project on GitHub – my first! – to test the waters. Let me know what you think, and of course, feel free to fork it and make it better. Right now it’s just a simple version of the rotation control view (RDDRotationControlSurface) which is hooked up to a UIViewController delegate that spins a UIImageView based on the input. Wow, that sounds way more complicated than it is – it’s just spinning a little picture of a very nice-looking German (or Austrian; can’t remember where this picture was taken) man in a cool getup. Don’t ask me why I chose this picture; it was there, and I went with it!
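
For the curious, the wiring in the sample project is roughly this – I’m paraphrasing the delegate signature from memory, so check the repo for the real one:

    // The view controller receives rotation updates from the control surface
    // and spins the image view by the same angle.
    - (void)rotationControlSurface:(RDDRotationControlSurface *)surface
                  didRotateByAngle:(CGFloat)angle {
        self.imageView.transform =
            CGAffineTransformRotate(self.imageView.transform, angle);
    }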

PS: You may be wondering why I didn’t go with a UIGestureRecognizer subclass for this. As far as I can tell, the gesture recognizer API would not work very well for something like this, which needs to continue sending updates even after all touches have stopped. So, in the end, I’d still end up with a UIView subclass of some kind. That doesn’t mean a gesture recognizer couldn’t help here; I just didn’t go that route.

Macro for Device Specific Settings (There’s a new iPhone Screen Size!)

iPhone 5

Well, it finally happened. Apple is releasing an iPhone with a different screen size and aspect ratio. “Oh no!” everyone screamed. “iOS is headed down the path of Android! Fragmentation is here!!” OK, there’s probably (almost) no one actually saying that. Yes, there’s a new device, but the fragmentation situation for Apple – even including the various combinations of screen size, processor type, iOS version, etc. – doesn’t even come CLOSE to that of Android. The fact is, Apple knows what they’re doing. Agree or disagree with their reasoning for the iPhone 5’s new screen, it’s clear that they don’t take this type of change lightly. I mean, just listen to how Sir Jonathan Ive opens this video. How can you not agree with everything this man ever says?!

The first big “disruption” to the iOS ecosystem was, of course, the iPad. But guess what: Apple took careful measures to make sure that every single existing app would automatically run. I’m sure there were examples of apps that didn’t take to the simulated iPhone environment perfectly, but for the most part, everything just worked. And that was no small feat of engineering, I’m sure! Yes, we were all pushed to build native apps for the iPad, but Apple wasn’t going to immediately alienate thousands of developers and millions of app users just because they decided to make a cool new device. The next disruption was the iPhone 4 and its Retina Display. Once again, great new technology for those who incorporated it, but in the meantime, existing apps for the most part ran just fine. And now, last week, the 4″ screen of the iPhone 5. Once again, all existing apps will run just fine, only letter-boxed. But what about those who want to take advantage of this new technology, to create an app that makes full use of that new tall screen?
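
As a taste of where that post goes, the device check boils down to a macro along these lines. This is a sketch of the general idea, and the macro name is my own placeholder:

    // The 4" screen is the only one with 568-point-tall bounds, so key off
    // the main screen height (points, not pixels, so Retina is a non-issue).
    #define IS_WIDESCREEN_PHONE \
        (fabs([UIScreen mainScreen].bounds.size.height - 568.0) < DBL_EPSILON)

    // Usage: pick a device-specific value in one line.
    CGFloat panelHeight = IS_WIDESCREEN_PHONE ? 280.0 : 192.0;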

Click – A Postmortem

The App

My latest app, an iPhone metronome called Click, has been out for two months now, so I thought this would be a good time to do a little postmortem reflection on the development process, the launch, and some thoughts for the future. I’ve discussed some of the development and design of the app before on this blog, but I haven’t said much yet about how it all turned out. So, here goes… if you’re mostly just interested in the numbers, skip to the end (although I’ll give you a hint: they’re not too large).

What Went Well

1. The Design

There are still some things I want to tweak, but overall I’m very pleased with how the general look and feel of the app came out. It was incredibly satisfying to go through the whole process of brainstorming ideas, testing them out, and then finally settling on a design and control scheme and actually seeing it come to life. It was definitely a learning process, and even though it was slow going, I can look back on my time spent with this project and know that it wasn’t wasted time. I was focused on creating a certain “look” for this specific app, but everything I learned about Core Graphics, UIKit, and Photoshop is instantly transferable and applicable to future projects. There were certainly times that I wanted to give up on the customization, throw in a few stock UIButtons and ScrollViews and just ship this stupid thing, but in the end I’m glad I stuck it out! Being able to turn your vision of what something could be into a real, working product is, for me, one of the greatest joys of software development.

2. The Features/Scope

I spent quite a lot of time and effort thinking through exactly which options and features I wanted to include in this app. The only truly required feature of a metronome is that it produce sound at a regular interval (and yes, probably also with some type of visual feedback), and the only required option is the ability to set the tempo to the desired speed. This is all that a traditional hardware metronome does. But modern metronomes – of both the software and the hardware variety – have so much more audio flexibility: time signatures, variable accent patterns, beat subdivisions in various rhythms, multiple sounds, stored setups for set lists or practice routines, programmed sequences for specific songs, and on and on. There are also many varieties of visual feedback available, especially in the realm of software metronomes. I made the decision that one of the key distinguishing features of Click would be the ability to select…

Interview with Beginning iOS Dev

[Photo: Sunset on The Great Salt Lake]

I’m on day two of a three-day move from Illinois to California right now, so I don’t have anything new ready for the blog. But I didn’t want to miss my blogging slot completely, so I thought I’d link to an interview I did the other day over on the Beginning iOS Dev blog. If you haven’t seen the website yet, you should take a minute to browse around. There are quite a few resources, and the interviews especially are interesting and very well done. There’s also a bit of a postmortem there of my app Click, which has been out for a while now, if you’re interested in that. I may do a full write-up on here sometime soon. In the meantime, one more full day of driving and we’ll finally be at our new home!

Some Good News and Some Bad News

I’m sure everyone is busy playing with the new bits from Apple right now, or lusting over the new hardware just announced, so I’m not sure who all will see this, but it’s my slot today for iDevBlogADay, and I’ve got some news to share, so I’m going for it!

I’m so mad at myself right now. The good news is, my new metronome app “Click” was just approved and is Ready for Sale!! The bad news is, I used a promo code to download it before releasing it for sale, and… it’s got a bug. A big ol’ freaking whopper of a bug. Somehow as I finalized my image assets, one of the resources had the wrong name in the Nib file – there was an “@2x” left on the end of the image name in the Nib configuration for the UIImageView. The app runs and loads fine in the simulator, and I swear the final build worked fine on my device too, but apparently the App Store release build is more picky and couldn’t locate the resource. So, one of the key pieces of the interface is invisible. So guess what? I’ll be waiting another week at least for another review. I could possibly ask for an expedited review, seeing as this is a “critical bug,” but the problem is the app isn’t actually even for sale yet, so it’s not like the bug is actually “live.” I don’t really wanna push my luck and/or annoy the review team now and then really need an expedited review later. So, one lesson is, keep testing the crap out of your app even after you submit, and you just might catch something and be able to developer reject your binary before the review starts. The other, real moral of the story is: delay the release of your app at least enough to generate a promo code and try out the released version for yourself. I have no idea when promo codes started working for apps before they’re actually released, but it certainly saved me this time! My thanks to @wtrebella for bringing this to my attention when he was tweeting about releasing Polymer.

I must say though, this isn’t all bad. I’ve been working on this app for over a year. I’ve been wanting to integrate a quality metronome into another app of mine, theDrumDictionary, and into future music-related apps. So, this was supposed to be a quick side project in learning Core Audio and how to make a metronome, and since I was going to do it anyway, I figured, why not release a standalone metronome app too? But I didn’t want to make just another metronome app, and as I explored how to differentiate mine from the existing options, it sort of exploded into this very large project with difficult-to-implement (for me) custom controls and a large amount of Photoshop time designing a unique UI. Needless to say, I’m ready to get this thing out there and see what happens!

When it comes to an initial release, I’ve actually come to appreciate the built-in waiting period of App Store review. Being forced to stop coding was a great chance for me to finalize my press materials and continue to get the word out about the new app. I was expecting to be done a *lot* sooner than it turned out, and by the time I submitted the app for review, I knew I was going to be running right up against WWDC. Who’s going to care about some no-name company’s metronome app in the middle of big hardware and software announcements?! If approved during dub-dub, how long should I wait to release the app? Or should I release the app and just wait until later to do press releases, etc.? Or is it really most beneficial to do a launch all at once – app appearing on new releases, press release, etc.? (Great articles from Justine Pratt on marketing, by the way!)

Well, now I don’t have much choice! I’ve probably got another whole week to keep up the marketing prep, spread the word to existing DrumDictionary customers, etc., and there’s not much worry about WWDC anymore. I’ve got to release this sometime, and a whole week after the big keynote seems like as good a time as any to me; it’s certainly much better than anytime this week. And I can confidently run a full-on launch blitz with no fear that I’m messing it up by separating the app release from the press release, OR driving myself crazy waiting for days for the right marketing timing, knowing that the app is just sitting there ready, waiting for me to change the availability date. I suppose I shouldn’t be too mad about it after all! (But let’s face it, I’m pretty pissed.) Anyway, maybe this gives you something to think about before you release your next app! And, if you’re curious, head on over to my new Gig Bag Apps website and check out the trailer for Click. Maybe it’ll be a hit; if I ever actually release it 🙂

UIKit and GCD

Graphics Bottlenecks

Creating a responsive user interface is one of the most important considerations for a mobile developer, and the smooth scrolling and quick responsiveness of iOS has been one of its hallmarks since day one. — I’ve gotta be honest here; every now and then I still find great amusement in just flicking around a simple web view or scrolling some text on my phone. It just feels so right! — Keeping a smooth flow and a one-to-one correspondence between user touch and visual display is crucial for maintaining the illusion that one is directly interacting with the objects on the screen. The key rule for making this happen: do not block the main thread. If you are doing anything that might take a significant amount of time, you must do it on a background thread.

With iOS 4.0 and the introduction of blocks and Grand Central Dispatch, it became much easier to complete tasks in the background asynchronously without having to dive into the implementation details of threads and such. If you haven’t yet tried out GCD, take a look at the docs, or check out this tutorial to get you up and running quickly. It’s great for parsing data, downloading from the network, etc. off the main thread, and GCD makes it very easy to write code that will call back into the main thread once the background process completes. What it doesn’t work well for is anything to do with UIKit or the user interface. All drawing and touch interaction takes place, by design, on the main thread. So, what do you do if your drawing itself is taking too long and blocking the main thread? I’m sure there were people much cleverer than me who found some ways to get around it and do some drawing in the background, but basically up until iOS 4, UIKit was not thread-safe at all. If your drawing is too complicated and blocks, then you need to optimize it or simplify it. However, the release notes for iOS 4.0 contain the following short section:
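
If you’ve never used it, the basic pattern looks like this – feedURL, parseFeedData:, and the properties here are just hypothetical stand-ins:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Slow work happens off the main thread...
        NSData *data = [NSData dataWithContentsOfURL:feedURL];
        NSArray *items = [self parseFeedData:data];
        dispatch_async(dispatch_get_main_queue(), ^{
            // ...and the results come back to the main thread, where UIKit lives.
            self.items = items;
            [self.tableView reloadData];
        });
    });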

Drawing to a graphics context in UIKit is now thread-safe. Specifically:

  • The routines used to access and manipulate the graphics context can now correctly handle contexts residing on different threads.
  • String and image drawing is now thread-safe.
  • Using color and font objects in multiple threads is now safe to do.

This was not something I had really been interested in or concerned myself with until I ran into just such a problem recently. I created a custom subclass of UILabel which adds a colored, blurred shadow to the label to give it a glow effect. But, this drawing took drastically longer than the regular string drawing. For example, for the drawing that happens at app startup, using regular UILabels takes 104 milliseconds total in drawRect:. To draw the exact same strings with shadows takes 1297 milliseconds! So, you can imagine what this does to frame rates when there are multiple labels being updated rapidly during an already CPU intensive section of the code.
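
The glow itself is nothing exotic – conceptually it’s just a zero-offset, colored, blurred shadow behind the text, something like this sketch (my real subclass has more going on, and the color and radius here are arbitrary):

    // In a UILabel subclass: let UILabel draw the string, but with a colored,
    // blurred, zero-offset shadow set on the context first. The blur is what
    // makes this so much slower than plain string drawing.
    - (void)drawTextInRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSaveGState(context);
        CGContextSetShadowWithColor(context, CGSizeZero, 8.0,
                                    [UIColor cyanColor].CGColor);
        [super drawTextInRect:rect];
        CGContextRestoreGState(context);
    }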

Multi is fun threading!

Since I already know ahead of time exactly what strings I need to display during this particular bottleneck, it would be nice to be able to draw all the labels at once in the background and cache them for later. My first approach was…
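
Whatever the exact mechanics, the general shape of the idea is something like this sketch – stringsToPrerender, labelSize, glowImageCache, and the font are hypothetical, and it leans on the iOS 4 guarantee quoted above that string drawing is thread-safe:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSMutableDictionary *cache = [NSMutableDictionary dictionary];
        for (NSString *string in stringsToPrerender) {
            // Render each glow label into an offscreen image context.
            UIGraphicsBeginImageContextWithOptions(labelSize, NO, 0.0);
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetShadowWithColor(context, CGSizeZero, 8.0,
                                        [UIColor cyanColor].CGColor);
            [string drawInRect:(CGRect){CGPointZero, labelSize}
                      withFont:[UIFont boldSystemFontOfSize:24.0]];
            [cache setObject:UIGraphicsGetImageFromCurrentImageContext()
                      forKey:string];
            UIGraphicsEndImageContext();
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.glowImageCache = cache; // hand the finished images to the UI
        });
    });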

Photoshop Layer Comps

Just a quick Photoshop tip today, but it’s something I’ve been making extensive use of the last few weeks, so I thought I’d share. If you happened to read my last post and/or watch the video, you would have seen that in my new metronome app, I’m handling interface rotation in a somewhat different way than most apps. Rather than using the standard system autorotations – using the usual springs and struts in Interface Builder or the UIView’s autoresizingMask property – I’m leaving the basic layout of the controls the same and just rotating the contents. It’s kind of hard to describe, so if that doesn’t make sense, skip to about the 10:00 mark on this video.

Here’s the gist of the code to make this happen (a rough sketch follows the list):

  • In the main view controller’s shouldAutorotateToInterfaceOrientation: method, I only return YES for UIInterfaceOrientationLandscapeRight, the same orientation the app launches in. Meaning, the view controller will not do any auto-rotating once it’s loaded into its initial state.
  • I’ve registered for the UIDeviceOrientationDidChangeNotification. Even though the View Controller will not do anything automatically when the device is rotated, the system will still generate these notifications when the orientation changes.
  • When I receive this notification, I pass the message along, and the individual views apply whatever sort of rotation transform they need to in order to remain “right side up.”
  • If the Status Bar is visible, you can also programmatically set its orientation with: [[UIApplication sharedApplication] setStatusBarOrientation:(UIInterfaceOrientation)orientation animated:YES];
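
Putting those pieces together, a rough sketch (the selector and property names are mine, and the angles assume a LandscapeRight launch orientation):

    - (void)viewDidLoad {
        [super viewDidLoad];
        [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(deviceOrientationDidChange:)
                                                     name:UIDeviceOrientationDidChangeNotification
                                                   object:nil];
    }

    - (void)deviceOrientationDidChange:(NSNotification *)note {
        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        CGFloat angle = 0.0; // LandscapeRight, the launch orientation
        if (orientation == UIDeviceOrientationLandscapeLeft)           angle = M_PI;
        else if (orientation == UIDeviceOrientationPortrait)           angle = M_PI_2;
        else if (orientation == UIDeviceOrientationPortraitUpsideDown) angle = -M_PI_2;
        else if (orientation != UIDeviceOrientationLandscapeRight)     return; // face up/down, unknown

        // The chrome stays put; each "screen" subview spins to stay upright.
        [UIView animateWithDuration:0.3 animations:^{
            for (UIView *subview in self.rotatingSubviews) {
                subview.transform = CGAffineTransformMakeRotation(angle);
            }
        }];
    }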

What this means from a design perspective is that the UIImageViews themselves, which contain the main interface chrome, do NOT rotate at all. So, here on the right is what the main control frame looks like in the launch orientation – notice the shadows, gradients, etc. all use the “canonical” iOS 90-degree light source.

Let’s say the user then rotates to LandscapeLeft – my subviews will rotate themselves, but the image will stay exactly the same. The image on the left is the same, but rotated 180 degrees. It’s strange how much different – and more noticeable – the light/shadow/gradient effects are when they’re flipped around the wrong way!

So, in order to maintain the right look, what I need to do is create separate images for each orientation and load these in as part of my custom rotation handling. Here’s where Photoshop layer comps come in. What they let you do is take snapshots of certain aspects of your document state and then reload them with one click. For example, in my case, I’ve set up one Layer Comp for each of the four orientations I’ll support. Here’s the workflow:

  • Setup the document for the current orientation. In the case of LandscapeRight, that means 90 degree light sources for all drop shadows, gradients that go light to dark from top to bottom, etc.
  • In the Layer Comps window – add it to the toolbar from the Window menu if you don’t see it – select New Layer Comp from the pulldown menu.
  • In the dialogue box that opens, give your comp a name, select which parts of the document state you want to be saved as part of the snapshot, and add any helpful comments you might have. 
  • For this particular case, I’ve told the Layer Comp to only save the Layer Styles of the document’s layers.
  • Repeat the process for each orientation, setting the light sources, gradients, etc. on the Layer Styles, and then saving it as a new Layer Comp.

By using vector/smart objects and layer styles – you are doing that, aren’t you? – the exact same set of objects and layers is used for every orientation. I’m free to adjust the positioning, size, and shape of the objects, and then, when it comes time to export for each orientation, I just click through the four Layer Comps one by one, and all my light and shadow effects are applied instantly to all objects. It takes a bit of work to set up, but once it’s ready, it saves huge amounts of time over going to each object individually and resetting the properties every time I want to make a change in the design and re-export for each orientation.

For things like the “Tap,” “+,” and “-” labels, and for different button states, I also have a set of Layer Comps which control layer visibility. So, for example, if I need to re-export the image for the “pressed” tap button, I hit the Layer Comp for the orientation I want, which loads the correct layer styles, then hit the “Tap Button Pressed” layer comp, which won’t affect the layer styles but will hide the normal Tap button layers and show the pressed ones. Two clicks and I’m ready to export. So, that’s how I’ve been using Layer Comps in my particular case to speed up my design workflow – hopefully it gives you some ideas for how you might be able to use them in your own workflow!

Introducing: Click

I’ve mentioned my upcoming metronome app a few times before on the blog here, but now that I’m getting closer to completion, I thought I’d take a moment and give it a formal introduction. It’s still very much a work in progress, so don’t take this as a press release or a marketing video. It’s more like a behind-the-scenes intro from a developer’s perspective. So, without further ado, here we go!

The Name

“Click,” or perhaps with a subtitle, “Click – Metronome,” is the title of my new app. Honestly, I’m not 100% happy with this name. To the general population, it’s probably pretty ambiguous, hence the subtitle. But among musicians – who would be the majority of those purchasing this app – a “click” is a common way of referring to a metronome. Basically, there are a lot of metronome apps out there, and this was one of the only non-corny, non-“punny”, simple, to-the-point names not yet taken.

The Design

It’s easier to show you this than to try to explain it – so jump to the video if you want – but I want to share some of my rationale/process of design for this app. There are lots of metronomes out there (are you seeing a theme yet?), but from a design perspective, most of them are just plain crap. Now, I’m no design expert by any means – it’s only been in the process of making this app that I’ve begun to truly dig in and study principles of UI and UX design. So, that being said, if even I can see – and articulate why – these apps are crap, then they must actually be crap!! Seriously though, there are some good metronomes out there, but there’s definitely room for more quality ones.

One of the issues I see, even with the good ones, is that they want to be powerful and give the user a lot of options, but in doing so they sacrifice usability. Even the easy-to-use ones still end up filling the entire screen with buttons and cramming the visual part of the metronome itself into a small section of the screen. So, first and foremost, I wanted to find a way to offer plenty of options but still leave lots of screen space for the metronome itself. In that respect, I’m very pleased with how the circular menu/selector has worked out.

As far as the actual *look* of the design goes, I’m shooting for something with both realism AND digital “flexibility.” It’s realistic in the sense that the app looks like a real object – with shadows, highlights, texture, some physical buttons and handles, etc. But, it’s not overly skeuomorphic and doesn’t necessarily directly resemble any real-life metronome. The main controls and the central view are basically just “screens” upon which I can display anything I want, completely unrestrained by what would or would not work on a physical device.

Another thing I wanted to do with this app is minimize the use of words and labels. Obviously, there are a lot of numbers used – no way around that, really, when you’re talking about setting specific tempos and time signatures – but there are very few words. As long as I can find appropriate symbols to communicate what each thing does, I think this will give the app a nice, clean, accessible quality. Not to mention how much easier localization will be when there’s only a handful of words in the whole app!

The Video

Enough talk, just watch! Not everything is working yet, and even the working things are still in progress, but this will hopefully give a nice preview of what’s coming up. I’d love to hear any feedback you’ve got, positive or negative! Thanks for watching.

—-  Yikes! The video quality did not turn out too well after compression. Oh well, you get the idea. It looks *great* on the device, I promise  🙂  —-

P.S. – Wanna Help Test?

If you’re interested in helping beta test Click, follow the link here to Test Flight. I may not be able to take everybody – Apple’s 100 device limit is turning out to be more restrictive than I realized at first – but I will take whoever I can. If you’re a musician or otherwise particularly “qualified” to test this app, that will help me narrow it down if I need to, but it’s not a requirement. Thanks!

Test Flight – http://bit.ly/zjcd0g

SVG to CoreGraphics Conversion

*UPDATE – August 2, 2014*

A LOT of different tools have come out in the several years since I posted this. I’m still seeing a fair amount of traffic showing up here from Google, so I thought I’d stick in a little update with some links to newer apps/tools/converters for generating Core Graphics code from other file types or graphical editors. I haven’t tried all of these, and I have no connection to their creators; I’m just providing some links.

http://www.paintcodeapp.com

http://drawscri.pt

http://likethought.com/opacity/

I’m sure there’s more, so let me know in the comments if you’ve got another one. Hope this helps, and if you’re still interested in going into the SVG standard a little deeper or in seeing what I did earlier, then read on!

***

I’ve got another tutorial type post for today, but it’s really equal parts: “here’s what I found that helped but didn’t quite work,” “here’s what I did,” and, “anybody have any better ideas?” If you already know something about Core Graphics and why/when to use it and just want the gist of what I did to convert SVG files to Core Graphics calls, go ahead and skip on down to “SVG to the Rescue.” Otherwise, read on to hear about my experience.

Why Core Graphics?

If you’ve spent any time programming for iOS or OS X, you’ve probably been exposed at some level to the Core Graphics framework, otherwise known as Quartz. (BTW, if you’re looking for the documentation, you have to search “Quartz” in Xcode to find the Programming Guide – searching “Core Graphics” won’t get you anything helpful. A common occurrence with the Xcode documentation browser in my experience, but that’s a rant for a different day.) The documentation is actually quite good at getting you up and running with the APIs. There are also some great tutorials in the Graphics and Animation section of Ray Wenderlich’s site. As a framework, Quartz is the name for the complete 2D drawing engine on iOS and OS X; it covers everything from drawing to the screen, to working with images and PDFs, to color management, and it also includes low-level support for drawing text. It’s resolution- and device-independent, which makes it great for generating interface elements for your iOS apps. No need to manually create multiple versions of each resource – iPhone, iPad, @2x for Retina displays (presumably @2x iPad Retina at some point in the future) – just render the interface at runtime, and as long as you do it right, the OS will handle all the scaling and render everything at the right resolution. It’s also perfect for those times when you need to draw dynamic interfaces based on some kind of data or input rather than a static “look” that you design and then display. Although it’s not exactly easy to reverse-engineer the Apple apps and see exactly what they’re doing, it’s safe to say that many of them render directly in-app with Core Graphics rather than loading static images. The WWDC 2011 video, “Practical Drawing for iOS Developers,” shows step by step how the Stocks app renders its views, entirely in Quartz. If you’re starting from scratch, the docs, WWDC videos, and tutorials will have you drawing lines, arcs, and basic shapes in no time, all with strokes, fills, and even gradients.
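
To give you a flavor of how approachable the basics are, here’s about the simplest useful drawRect: there is – a resolution-independent rounded rectangle with a fill (my own toy example, with arbitrary inset, radius, and color):

    // In a UIView subclass. Because this is drawn at runtime, it renders
    // crisply at any scale -- no @2x assets required.
    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        UIBezierPath *path =
            [UIBezierPath bezierPathWithRoundedRect:CGRectInset(self.bounds, 4.0, 4.0)
                                       cornerRadius:8.0];
        CGContextAddPath(context, path.CGPath);
        CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
        CGContextFillPath(context);
    }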

Complex Shapes

The problem I ran into was how to get past the basics. The APIs for Quartz path drawing go something like this: move to this point, add a line to this point, add a bezier curve to this point with control points at these locations, etc. It’s relatively easy to think about and describe basic geometric shapes in these kinds of terms, and Core Graphics even provides convenient methods for creating things like rounded rectangles and ellipses. Even complex views like the Stocks app are still very much data/number-driven views, and even though the drawing process itself is more complicated, it’s not hard to imagine how you would programmatically calculate and describe, say, points on a graph. But what if you want to draw something a little more organic? What about more complex shapes with lots of curves?

[Image: Quarter Rest symbol]

Take this Quarter Rest symbol, for example. As a vector graphic in Illustrator, it contains 3 straight lines and 13 different Bezier curves, each with two control points – and that’s after trying to simplify it as much as possible without losing the desired shape. The problem quickly becomes apparent: it’s virtually impossible to establish a good mental connection between the graphic as conceived by the artist/designer and the actual code used to produce it on screen. Bret Victor has a great write-up on this artistic disconnect when it comes to dynamic and interactive images/interfaces. It was immediately evident to me that trying to build this graphic in code, line by line – guesstimating and then tweaking the coordinates of the lines, curves, and control points – could only end one way: much swearing and me throwing my computer out the window in frustration.

The main reason I wanted to use Core Graphics rather than static images is to be able to display these musical symbols with dynamic coloring and shadows for some highlight/glow effects. Now, the shadow part of this is actually possible using pre-rendered images. You can set shadow properties like color, radius, distance, on any UIView (technically, on the CALayer of the UIView), including UIImageViews. Quartz will use the alpha values of the pixels to calculate where the edges are, and will generate a nice shadow behind the elements of the image. I say it’s possible, but it’s not actually that practical. Doing it this way requires an extra offscreen rendering pass, and the performance will very likely suffer. In my case, it completely tanked, from somewhere around 53-54 fps with normal content, to around 15 fps when adding a shadow to static images. In some situations, you could work around this by using the shouldRasterize feature of CALayer, but for dynamic content, this could actually make performance even worse. After my experiment, I knew there was no way around it but to keep working on some way to convert my vector images in Illustrator into something I could use in my app. Enter the .svg format.
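
For reference, the image-based shadow I experimented with is just a few CALayer properties – easy to set up, which makes the performance cliff all the more disappointing (color and radius here are arbitrary):

    #import <QuartzCore/QuartzCore.h>

    // Quartz derives the shadow shape from the image's alpha channel,
    // but it costs an extra offscreen rendering pass.
    imageView.layer.shadowColor   = [UIColor cyanColor].CGColor;
    imageView.layer.shadowOffset  = CGSizeZero;
    imageView.layer.shadowRadius  = 6.0;
    imageView.layer.shadowOpacity = 1.0;
    // shouldRasterize caches the rendered layer and can help for static
    // content -- but for content that changes every frame it can make things
    // even worse, since the cache is constantly invalidated:
    // imageView.layer.shouldRasterize = YES;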

SVG To the Rescue!

SVG – scalable vector graphics – is a widely used standard for storing, well, just what the name says. Most vector based graphics programs, including Adobe Illustrator, can open SVG, and will also export to SVG. Since SVG is a web standard, one option is to use a UIWebView to render the SVG to the iPhone screen, but that option doesn’t work for me. I googled far and wide for some kind of svg to Quartz converter, and had a bit of luck:

This site was a good start. It provides a resource where you can copy and paste the path data from an SVG file, and it will export Quartz source code. I had some weird results from my files, though, and a little difficulty figuring out the proper data to paste into the form.

Here’s an Objective-C class, under a Creative Commons Attribution license, which will also take the path data from an SVG and output a UIBezierPath (iOS) or NSBezierPath (OS X).

I also found this library, but as of right now, the site appears to be down. Perhaps just a temporary issue. (UPDATE: Looks like it’s back up, and now that I can see it again, this page is mostly just a link to this on github. A library that takes SVG files and turns them into CAShapeLayers.)

I didn’t test any of these options extensively, but they appear to be good ones, especially the second. If they work for you, then great! What they all have in common, though, is that you first need to extract the path data from the .svg file – meaning I had to do some research on the standard anyway. Turns out, .svg is just an XML format that you can open in any text editor. And, even better, the SVG path commands are very similar to the Core Graphics APIs. Each individual path is contained in a <path> tag, and the specific commands are in the “d” attribute. Here’s the file for that Quarter Rest symbol – open it in the browser to see it rendered, or download it and open it in a text editor and you’ll see the path data clearly separated. The SVG standard includes lots of fancy things like fills, strokes, gradients, patterns, masking, even animation, but all I’m using here is a simple, single path. Path commands in SVG are single letters, with parameters following.

  • M (x, y) – move to point.
  • C (x1, y1, x2, y2, x, y) – Add cubic bezier curve to point (x, y) with control points (x1, y1 and x2, y2).
  • L (x, y) – add line to point.
  • Z is the command to close the current path.
  • There are also H and V for horizontal and vertical lines, and S for curves with an assumed first control point relative to the last command.

Once I got the file into this format, each command was easily converted to a Quartz API call to build a path (a worked example follows the list):

  • CGPathMoveToPoint
  • CGPathAddCurveToPoint (where the point parameters are even in the same order as the SVG command)
  • CGPathAddLineToPoint
  • CGPathCloseSubpath
  • The “H,” “V,” and “S” commands don’t have a Quartz counterpart, so they need to be adapted.
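
To make the mapping concrete, here’s the example command string from the Parsing Gotchas section below, converted by hand to absolute coordinates and then to Quartz calls:

    // SVG: "M60.482,613.46c0,0-17.859,0.518-26.997,0"
    // The relative "c" deltas are added to the current point (60.482, 613.46).
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 60.482, 613.46);
    CGPathAddCurveToPoint(path, NULL,
                          60.482, 613.46,   // control point 1 (+0, +0)
                          42.623, 613.978,  // control point 2 (-17.859, +0.518)
                          33.485, 613.46);  // end point       (-26.997, +0)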

And, here’s the end result of that Quarter Rest symbol, rendered in app with Core Graphics, complete with shadow/glow effect and maintaining nice, snappy frame rates.

Parsing Gotchas

Parsing the SVG file by hand turned out to be a little challenging. For one thing, in the interest of keeping file size small, there are almost no separators between items: no whitespace, and commas and other delimiters are dropped wherever they can be omitted. For example, a “move” command followed by an “add curve” command might look something like this: “M60.482,613.46c0,0-17.859,0.518-26.997,0”. Each place there’s a negative number, the separating comma is eliminated as unnecessary, and each command runs right into the next one, so it’s important to know how many parameters follow each letter. Also, when the same command is used multiple times in a row, the standard allows the command letter to be omitted the second time. So, a “c” followed by 12 numbers is actually two separate curve commands. One other catch: each of the commands above uses absolute coordinates, but most of them also have a corresponding command that uses relative coordinates. These use the same single letters, but in lower case. For example, “M 10 10 m 5 5” means move to the absolute point (10, 10) and then move by (5, 5) relative to that point – so (15, 15) absolute. Unfortunately, Illustrator exports SVG files using mostly these relative commands, so I also needed to convert them to absolute point values for Quartz.
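
To make those gotchas concrete, here’s roughly what a scanner has to cope with – a hypothetical NSScanner sketch (as I confess below, I actually did my conversion by hand), assuming absolute M/C/L/Z commands only, with pathData (NSString), path (CGMutablePathRef), and currentCommand (unichar) declared elsewhere:

    NSScanner *scanner = [NSScanner scannerWithString:pathData];
    NSCharacterSet *commandSet =
        [NSCharacterSet characterSetWithCharactersInString:@"MCLZ"];
    // Commas and whitespace are skippable; a minus sign needs no separator
    // before it, because it simply starts the next number.
    scanner.charactersToBeSkipped =
        [NSCharacterSet characterSetWithCharactersInString:@", \t\n"];

    while (![scanner isAtEnd]) {
        NSString *letter = nil;
        if ([scanner scanCharactersFromSet:commandSet intoString:&letter]) {
            // A real parser would take one letter at a time; this sketch
            // assumes command letters are always separated by numbers.
            currentCommand = [letter characterAtIndex:0];
        }
        // No letter found? Then the previous command repeats implicitly.
        double x, y, x1, y1, x2, y2;
        switch (currentCommand) {
            case 'M':
                [scanner scanDouble:&x]; [scanner scanDouble:&y];
                CGPathMoveToPoint(path, NULL, x, y);
                break;
            case 'C':
                [scanner scanDouble:&x1]; [scanner scanDouble:&y1];
                [scanner scanDouble:&x2]; [scanner scanDouble:&y2];
                [scanner scanDouble:&x];  [scanner scanDouble:&y];
                CGPathAddCurveToPoint(path, NULL, x1, y1, x2, y2, x, y);
                break;
            case 'L':
                [scanner scanDouble:&x]; [scanner scanDouble:&y];
                CGPathAddLineToPoint(path, NULL, x, y);
                break;
            case 'Z':
                CGPathCloseSubpath(path);
                break;
        }
    }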

You’re Still Reading? Wow!

OK, this was a long post, so if you’ve read this far, that means you’ve got some actual interest in the subject and/or a need for an efficient way to get path data into Core Graphics. So here’s the part where you’re hoping I have a link to some parsing code, or maybe a script I wrote that will do this work for you. But I have to confess: I did all mine by hand. I only had a handful of symbols I needed for this project, each of which is a pretty simple, single path, so I did the parsing and converting manually, with some rounding and cleanup along the way. Maybe next time I’ll automate it, but for now, it was a good exercise in exploring a new file format and diving deeper into Core Graphics. But if you’ve given some of this a try yourself, I’d love to hear what you came up with! Did any of these other resources work well for you? Got any nice scripts to share with everyone else? Or, perhaps even more likely, am I way off track? Is there an even better workflow for getting from a design in a visual editor to Quartz source code? If so, I’d be grateful to hear your ideas.

BONUS: of course, just as I’m finishing this post, I stumbled across yet another resource. From the looks of it, it might be the best yet. It’s an Objective-C parser that will take a portion of an Illustrator eps file and convert it to CGPath. Guess I’ll have to try that out now too! Method for Interpreting Illustrator Art Assets as Cocoa CGPathRef