Photoshop Layer Comps

Just a quick Photoshop tip today, but it’s something I’ve been making extensive use of over the last few weeks, so I thought I’d share. If you happened to read my last post and/or watch the video, you would have seen that in my new metronome app, I’m handling interface rotation in a somewhat different way than most apps. Rather than relying on the standard system autorotation – the usual springs and struts in Interface Builder or the UIView autoresizingMask property – I’m leaving the basic layout of the controls the same and just rotating the contents. It’s kind of hard to describe, so if that doesn’t make sense, skip to about the 10:00 mark on this video.

Here’s the gist of the code to make this happen:

  • In the main view controller’s shouldAutorotateToInterfaceOrientation: method, I only return YES for UIInterfaceOrientationLandscapeRight, the same orientation the app launches in. Meaning, the view controller will not do any auto-rotating once it’s loaded into its initial state.
  • I’ve registered for the UIDeviceOrientationDidChangeNotification. Even though the view controller will not do anything automatically when the device is rotated, the system still generates these notifications whenever the orientation changes.
  • When I receive this notification, I pass the message along, and the individual views apply whatever sort of rotation transform they need to in order to remain “right side up.”
  • If the Status Bar is visible, you can also programmatically set its orientation with: [[UIApplication sharedApplication] setStatusBarOrientation:(UIInterfaceOrientation)orientation animated:YES]; (a sketch of all of this together follows the list).
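
Here’s a minimal sketch of those pieces put together. This is my reconstruction of the approach, not the app’s actual code – the rotatingViews property is hypothetical, and the exact angle mapping (especially the signs) would need verifying on a device:

    // In the main view controller (iOS 5-era API):
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation
    {
        // Only allow the launch orientation; all other rotation is handled manually.
        return orientation == UIInterfaceOrientationLandscapeRight;
    }

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(deviceOrientationDidChange:)
                                                     name:UIDeviceOrientationDidChangeNotification
                                                   object:nil];
    }

    - (void)deviceOrientationDidChange:(NSNotification *)note
    {
        // Note: a *device* orientation of LandscapeLeft corresponds to an
        // *interface* orientation of LandscapeRight – the two enums are mirrored.
        CGFloat angle;
        switch ([[UIDevice currentDevice] orientation]) {
            case UIDeviceOrientationLandscapeLeft:      angle = 0.0;     break;
            case UIDeviceOrientationLandscapeRight:     angle = M_PI;    break;
            case UIDeviceOrientationPortrait:           angle = -M_PI_2; break; // sign is a guess
            case UIDeviceOrientationPortraitUpsideDown: angle = M_PI_2;  break; // sign is a guess
            default: return; // ignore face up, face down, and unknown
        }
        [UIView animateWithDuration:0.3 animations:^{
            for (UIView *view in self.rotatingViews) { // hypothetical array of subviews
                view.transform = CGAffineTransformMakeRotation(angle);
            }
        }];
    }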

What this means from a design perspective is that the UIImageViews themselves, which contain the main interface chrome, do NOT rotate at all. So, here on the right is what the main control frame looks like in the launch orientation – notice the shadows, gradients, etc. all use the “canonical” iOS 90 degree light source.

Let’s say the user then rotates to LandscapeLeft – my subviews will rotate themselves, but the image will stay exactly the same. The image on the left is the same one, but rotated 180 degrees. It’s strange how different – and how much more noticeable – the light/shadow/gradient effects are when they’re flipped around the wrong way!

So, in order to maintain the right look, what I need to do is create separate images for each orientation and load these in as part of my custom rotation handling. Here’s where Photoshop layer comps come in. What they let you do is take snapshots of certain aspects of your document state and then reload them with one click. For example, in my case, I’ve set up one Layer Comp for each of the four orientations I’ll support. Here’s the workflow:

  • Set up the document for the current orientation. In the case of LandscapeRight, that means 90 degree light sources for all drop shadows, gradients that go light to dark from top to bottom, etc.
  • In the Layer Comps panel – open it from the Window menu if you don’t see it – select New Layer Comp from the pulldown menu.
  • In the dialog box that opens, give your comp a name, select which parts of the document state you want saved as part of the snapshot, and add any helpful comments you might have.
  • For this particular case, I’ve told the Layer Comp to only save the Layer Styles of the document’s layers.
  • Repeat the process for each orientation, setting the light sources, gradients, etc. on the Layer Styles, and then saving it as a new Layer Comp.

By using vector/smart objects and layer styles – you are doing that, aren’t you? – the exact same set of objects and layers is used for every orientation. I’m free to adjust the positioning, size, and shape of the objects, and then, when it comes time to export for each orientation, I just click through the four Layer Comps one by one, and all my light and shadow effects are applied instantly to all objects. It takes a bit of work to set up, but once it’s ready, it saves huge amounts of time over going to each object individually and resetting the properties every time I want to make a change in the design and re-export for each orientation.

For things like the “Tap,” “+,” and “-” labels, and for different button states, I also have a set of Layer Comps which control layer visibility. So, for example, if I need to re-export the image for the “pressed” tap button, I hit the Layer Comp for the orientation I want, which loads the correct layer styles, then hit the “Tap Button Pressed” layer comp, which won’t affect the layer styles but will hide the normal Tap button layers and show the pressed ones. Two clicks and I’m ready to export. So, that’s how I’ve been using Layer Comps in my particular case to speed up my design workflow – hopefully it gives you some ideas for how you might be able to use them in your own workflow!

Introducing: Click

I’ve mentioned my upcoming metronome app a few times before on the blog here, but now that I’m getting closer to completion, I thought I’d take a moment and give it a formal introduction. It’s still very much a work in progress, so don’t take this as a press release or a marketing video. It’s more like a behind-the-scenes intro from a developer’s perspective. So, without further ado, here we go!

The Name

“Click,” or perhaps with a subtitle, “Click – Metronome,” is the title of my new app. Honestly, I’m not 100% happy with this name. To the general population, it’s probably pretty ambiguous, hence the subtitle. But among musicians – who would be the majority of those purchasing this app – a “click” is a common way of referring to a metronome. Basically, there are a lot of metronome apps out there, and this was one of the only non-corny, non-“punny”, simple, to-the-point names that was not yet taken.

The Design

It’s easier to show you this than to try to explain it – so jump to the video if you want – but I want to share some of my rationale/process of design for this app. There are lots of metronomes out there (are you seeing a theme yet?), but from a design perspective, most of them are just plain crap. Now, I’m no design expert by any means – it’s only been in the process of making this app that I’ve begun to truly dig in and study principles of UI and UX design. So, that being said, if even I can see – and articulate why – these apps are crap, then they must actually be crap!! Seriously though, there are some good metronomes out there, but there’s definitely room for more quality ones.

One of the issues I see, even with the good ones, is that they want to be powerful and give the user a lot of options, but in doing so they sacrifice usability. Even the easy-to-use ones still end up filling the entire screen with buttons and cramming the visual part of the metronome itself into a small section of the screen. So first and foremost, I wanted to find a way to give plenty of options but still leave lots of screen space for the metronome itself. In that respect, I’m very pleased with how the circular menu/selector has worked out.

As far as the actual *look* of the design goes, I’m shooting for something with both realism AND digital “flexibility.” It’s realistic in the sense that the app looks like a real object – with shadows, highlights, texture, some physical buttons and handles, etc. But, it’s not overly skeuomorphic and doesn’t necessarily directly resemble any real-life metronome. The main controls and the central view are basically just “screens” upon which I can display anything I want, completely unrestrained by what would or would not work on a physical device.

Another thing I wanted to do with this app is minimize the use of words or labels. Obviously, there are a lot of numbers used – no way around that really when you’re talking about setting specific tempos and time signatures – but there are very few words. As long as I can find appropriate symbols to communicate what each thing does, I think this will give the app a nice, clean, accessible quality. Not to mention how much easier localization will be when there’s only a handful of words in the whole app!

The Video

Enough talk, just watch! Not everything is working yet, and even the working things are still in progress, but this will hopefully give a nice preview of what’s coming up. I’d love to hear any feedback you’ve got, positive or negative! Thanks for watching.

—-  Yikes! The video quality did not turn out too well after compression. Oh well, you get the idea. It looks *great* on the device, I promise  🙂  —-

P.S. – Wanna Help Test?

If you’re interested in helping beta test Click, follow the link here to Test Flight. I may not be able to take everybody – Apple’s 100 device limit is turning out to be more restrictive than I realized at first – but I will take whoever I can. If you’re a musician or otherwise particularly “qualified” to test this app, that will help me narrow it down if I need to, but it’s not a requirement. Thanks!

Test Flight – http://bit.ly/zjcd0g

SVG to CoreGraphics Conversion

*UPDATE – August 2, 2014*

A LOT of different tools have come out in the years since I posted this. I’m still seeing a fair amount of traffic showing up here from Google, so I thought I’d stick in a little update with some links to newer apps/tools/converters for generating Core Graphics code from other file types or graphical editors. I haven’t tried all of these, and I have no connection to their creators; I’m just providing some links.

http://www.paintcodeapp.com

http://drawscri.pt

http://likethought.com/opacity/

I’m sure there’s more, so let me know in the comments if you’ve got another one. Hope this helps, and if you’re still interested in going into the SVG standard a little deeper or in seeing what I did earlier, then read on!

***

I’ve got another tutorial type post for today, but it’s really equal parts: “here’s what I found that helped but didn’t quite work,” “here’s what I did,” and, “anybody have any better ideas?” If you already know something about Core Graphics and why/when to use it and just want the gist of what I did to convert SVG files to Core Graphics calls, go ahead and skip on down to “SVG to the Rescue.” Otherwise, read on to hear about my experience.

Why Core Graphics?

If you’ve spent any time programming for iOS or OSX, you’ve probably been exposed at some level to the Core Graphics framework, otherwise known as Quartz. (BTW, if you’re looking for documentation in Xcode, you have to search “Quartz” to find the Programming Guide. Searching “Core Graphics” won’t get you anything helpful. A common occurrence with the Xcode documentation browser in my experience, but that’s a rant for a different day.) The documentation is actually quite good at getting you up and running with the APIs. There are also some great tutorials in the Graphics and Animation section here from Ray Wenderlich’s site.

As a framework, Quartz is the name for the complete 2D drawing engine on iOS and OSX, and it covers everything from drawing to the screen, to working with images and PDFs, to color management, and also includes low-level support for drawing text. It’s resolution- and device-independent, meaning it’s great for generating interface elements for your iOS apps. No need to manually create multiple versions of each resource – iPhone, iPad, @2x for Retina displays (presumably @2x iPad Retina at some point in the future) – just render the interface at runtime, and as long as you do it right, the OS will handle all the scaling and render everything at the right resolution. It’s also perfect for those times when you need to draw dynamic interfaces that are based on some kind of data or input rather than a static “look” that you design and then display. Although it’s not exactly easy to reverse-engineer the Apple apps and see exactly what they’re doing, it’s safe to say that many of them render directly in app with Core Graphics, rather than loading static images. The WWDC 2011 video, “Practical Drawing for iOS Developers,” shows step by step how the Stocks app renders its views, entirely in Quartz. If you’re starting from scratch, the docs, WWDC videos, and tutorials will have you drawing lines, arcs, and basic shapes in no time, all with strokes, fills, and even gradients.

Complex Shapes

The problem I ran into was how to get past just the basics. The APIs for Quartz path drawing go something like this: move to this point, add a line to this point, add a bezier curve to this point with control points at these locations, etc. It’s relatively easy to think about and describe basic geometric shapes in these kinds of terms, and Core Graphics even provides convenient functions for creating things like rounded rectangles and ellipses. Even complex views like the Stocks app are still very much data/number-driven views, and even though the drawing process itself is more complicated, it’s not hard to imagine how you would programmatically calculate and describe, say, points on a graph. But what if you want to draw something a little more organic? What about more complex shapes with lots of curves?
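
To make that vocabulary concrete, here’s a minimal drawRect: sketch – my own illustration, not code from my app – showing the move/line/curve calls plus one of the convenience functions:

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Build a simple path: move, line, then a cubic bezier with two control points.
        CGContextMoveToPoint(context, 20.0, 20.0);
        CGContextAddLineToPoint(context, 120.0, 20.0);
        CGContextAddCurveToPoint(context, 150.0, 40.0, 150.0, 80.0, 120.0, 100.0);

        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 2.0);
        CGContextStrokePath(context);

        // One of the convenience functions for basic geometric shapes:
        CGContextAddEllipseInRect(context, CGRectMake(20.0, 120.0, 100.0, 60.0));
        CGContextSetFillColorWithColor(context, [UIColor grayColor].CGColor);
        CGContextFillPath(context);
    }
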
Take this Quarter Rest symbol, for example. As a vector graphic in Illustrator, it contains 3 straight lines and 13 different Bezier curves, each with two control points – and even that is after trying to simplify it as much as possible without losing the desired shape. The problem quickly becomes apparent – it’s virtually impossible to establish a good mental connection between the graphic as conceived by the artist/designer and the actual code used to produce it on screen. Bret Victor has a great write-up on this artistic disconnect when it comes to dynamic and interactive images/interfaces. It was immediately evident to me that trying to build this graphic in code, line by line – guesstimating and then tweaking the coordinates of the lines, curves, and control points – could only end one way: much swearing and me throwing my computer out the window in frustration.

The main reason I wanted to use Core Graphics rather than static images is to be able to display these musical symbols with dynamic coloring and shadows for some highlight/glow effects. Now, the shadow part of this is actually possible using pre-rendered images. You can set shadow properties like color, radius, and offset on any UIView (technically, on the CALayer of the UIView), including UIImageViews. Quartz will use the alpha values of the pixels to calculate where the edges are and will generate a nice shadow behind the elements of the image. I say it’s possible, but it’s not actually that practical. Doing it this way requires an extra offscreen rendering pass, and performance will very likely suffer. In my case, it completely tanked – from somewhere around 53-54 fps with normal content to around 15 fps when adding a shadow to static images. In some situations, you could work around this by using the shouldRasterize feature of CALayer, but for dynamic content, this could actually make performance even worse. After my experiment, I knew there was no way around it but to keep working on some way to convert my vector images in Illustrator into something I could use in my app. Enter the .svg format.
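
For reference, here’s roughly what that static-image shadow experiment looks like – a sketch with made-up values, where “quarter-rest.png” is a hypothetical resource name:

    #import <QuartzCore/QuartzCore.h>

    UIImageView *symbolView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"quarter-rest.png"]];

    // Quartz derives the shadow shape from the image's alpha channel,
    // at the cost of an extra offscreen rendering pass.
    symbolView.layer.shadowColor = [UIColor cyanColor].CGColor;
    symbolView.layer.shadowOffset = CGSizeZero;
    symbolView.layer.shadowRadius = 6.0;
    symbolView.layer.shadowOpacity = 0.9; // defaults to 0 – no shadow shows without this

    // symbolView.layer.shouldRasterize = YES; // can help for static content, hurts for dynamic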

SVG To the Rescue!

SVG – scalable vector graphics – is a widely used standard for storing, well, just what the name says. Most vector-based graphics programs, including Adobe Illustrator, can open SVG and will also export to it. Since SVG is a web standard, one option is to use a UIWebView to render the SVG to the iPhone screen, but that option doesn’t work for me. I googled far and wide for some kind of SVG-to-Quartz converter, and had a bit of luck:

This site was a good start. It provides a resource where you can copy and paste the path data from an svg file, and it will export Quartz source code. I had some weird results from my files, though, and a little difficulty figuring out the proper data to paste into the form.

Here’s an Objective-C class, under a Creative Commons Attribution license, which will also take the path data from an svg and output a UIBezier path (iOS) or NSBezier path (OSX).

I also found this library, but as of right now, the site appears to be down. Perhaps just a temporary issue. (UPDATE: Looks like it’s back up, and now that I can see it again, this page is mostly just a link to this on github. A library that takes SVG files and turns them into CAShapeLayers.)

I didn’t test any of these options extensively, but they appear to be good options, especially the second. If they work for you, then great! What they all have in common, though, is that you first need to extract the path data from the .svg file, meaning I had to do some research on the standard anyway. Turns out, .svg is just an XML format that you can open in any text editor. And, even better, the svg path commands are very similar to the Core Graphics APIs. Each individual path is contained in a <path> tag, and the specific commands are in the “d” attribute. Here’s the file for that Quarter Rest symbol – open it in the browser to see it rendered, or download it and open it in a text editor and you’ll see the path data clearly separated. The svg standard includes lots of fancy things like fills, strokes, gradients, patterns, masking, even animation, but all I’m using here is a simple, single path. Path commands in svg are single letters, with parameters following:

  • M (x, y) – move to point.
  • C (x1, y1, x2, y2, x, y) – add cubic bezier curve to point (x, y) with control points (x1, y1) and (x2, y2).
  • L (x, y) – add line to point.
  • Z – close the current path.
  • There are also H and V for horizontal and vertical lines, and S for curves with an assumed first control point (the reflection of the previous curve’s second control point).

Once I got the file into this format, each command was easily converted to a Quartz API call to build a path:

  • CGPathMoveToPoint
  • CGPathAddCurveToPoint (where the point parameters are even in the same order as the SVG command)
  • CGPathAddLineToPoint
  • CGPathCloseSubpath
  • The “H,” “V,” and “S” commands don’t have direct Quartz counterparts, so they need to be adapted – see the sketch below.
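
As a concrete illustration – a hand-converted toy example of my own, not output from any of these tools – here’s a tiny SVG path and its Quartz equivalent:

    // The SVG path data "M10,10 C20,0 40,0 50,10 L50,50 Z" as Quartz calls:
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 10, 10);                    // M 10,10
    CGPathAddCurveToPoint(path, NULL, 20, 0, 40, 0, 50, 10);  // C 20,0 40,0 50,10
    CGPathAddLineToPoint(path, NULL, 50, 50);                 // L 50,50
    CGPathCloseSubpath(path);                                 // Z

    // Then, inside drawRect: for example, add the path to the context and fill it:
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, path);
    CGContextFillPath(context);
    CGPathRelease(path);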

And, here’s the end result of that Quarter Rest symbol, rendered in app with Core Graphics, complete with shadow/glow effect and maintaining nice, snappy frame rates.

Parsing Gotchas

Parsing the svg file by hand turned out to be a little challenging. For one thing, in the interest of keeping file size small, there are almost no separators between items – no whitespace, and not many commas or other delimiters, wherever one can be omitted. For example, a “move” command followed by an “add curve” command might look something like this: “M60.482,613.46c0,0-17.859,0.518-26.997,0”. Each place there’s a negative number, the separating comma is eliminated as unnecessary, and each command runs right into the next one, so it’s important to know how many parameters each command takes.

Also, when the same command is used multiple times in a row, the standard allows the command letter to be omitted the second time. So, a “c” followed by 12 numbers is actually two separate curve commands.

One other catch: each of these svg commands is using absolute coordinates, but most of them also have a corresponding command using relative coordinates. These use the same single letters, but in lower case. For example, “M 10 10 m 5 5” means move to absolute point (10, 10) and then move to (5, 5) relative to this point – so (15, 15) absolute. Unfortunately, Illustrator exports svg files using mostly these relative commands, so I also needed to convert them to absolute point values for Quartz.
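
For what it’s worth, if you do want to automate the tokenizing, NSScanner handles most of these quirks nicely. Here’s a rough sketch of mine (not a tool from any of the links above, and not what I actually used – I did my files by hand, as I confess below). Note that scanDouble: treats a leading minus sign as part of the number, which takes care of the missing-comma case:

    NSString *pathData = @"M60.482,613.46c0,0-17.859,0.518-26.997,0";
    NSScanner *scanner = [NSScanner scannerWithString:pathData];
    NSCharacterSet *commandSet = [NSCharacterSet characterSetWithCharactersInString:@"MmLlCcZzHhVvSs"];
    NSCharacterSet *separatorSet = [NSCharacterSet characterSetWithCharactersInString:@", "];

    while (![scanner isAtEnd]) {
        NSString *command = nil;
        if (![scanner scanCharactersFromSet:commandSet intoString:&command]) {
            break; // unexpected character – bail out of this simple sketch
        }
        NSLog(@"command: %@", command);
        double value;
        while ([scanner scanDouble:&value]) {
            // A "c" followed by 12 numbers is two curve commands, so the caller
            // still needs to group parameters by each command's expected count,
            // and lowercase (relative) commands still need offsetting by the
            // current point to become absolute coordinates for Quartz.
            NSLog(@"  parameter: %f", value);
            [scanner scanCharactersFromSet:separatorSet intoString:NULL]; // skip any comma
        }
    }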

You’re Still Reading? Wow!

OK, this was a long post, so if you’ve read this far, that means you’ve got some actual interest in the subject and/or need for an efficient way to get path data into Core Graphics. So here’s the part where you’re hoping I have a link to some parsing code or maybe a script I wrote that will do this work for you. But I have to confess: I did all mine by hand. I only had a handful of symbols I needed for this project, each of which is a pretty simple, single path, so I did the parsing and converting manually, with some rounding and cleanup along the way. Maybe next time I’ll get it automated, but for now, it was a good exercise in exploring a new file format and diving deeper into Core Graphics. But if you’ve given some of this a try yourself, I’d love to hear what you came up with! Did any of these other resources work well for you? Got any nice scripts to share with everyone else? Or, perhaps even more likely, am I way off track? Is there an even better workflow for getting from design in a visual editor to Quartz source code? If so, I’d be grateful to hear your ideas.

BONUS: of course, just as I’m finishing this post, I stumbled across yet another resource. From the looks of it, it might be the best yet. It’s an Objective-C parser that will take a portion of an Illustrator eps file and convert it to CGPath. Guess I’ll have to try that out now too! Method for Interpreting Illustrator Art Assets as Cocoa CGPathRef

A DrumDictionary Update

Not much has happened around here the last few weeks for me to write about. I’ve mostly been busy finishing up my classes for the semester and have had hardly any time to work on the new app. Sad face. After I missed my earlier goal of releasing the app in October, I was going to shoot for a holiday release. And now I’ve missed that timeframe too! That’s alright, though. The more I think about it, the more I realize that it’s probably a bad time to release anyway. I had great sales numbers last year at Christmas, but that was on an already established product, and I was fortunate enough to get a nice feature in several app stores worldwide. When it comes to releasing a new app, however, it seems like so many people shoot for December, and so many people/companies put their apps on sale for the holidays, that the competition is just insane! So, while I’m a little sad the app isn’t done yet, I don’t mind taking a little more time to get it right and release at a more reasonable time.

One thing I am going to try to crank out in the next few days, though, is an update to my current app, theDrumDictionary. It’s been a while since I’ve added any new content, and I’m sure people are wondering if I’ve abandoned the app. For this type of app, I think it’s really important to keep people engaged and keep new content coming. The problem is, after several months of no updates, I’m getting pretty much the same average downloads per day, so it’s hard to stay motivated to keep working on it! Whenever I get time to work on apps, I just want to work on the new one and get it done, not spend my time on this other one that’s already doing OK. Anyone else ever get that feeling? How do you find the balance between working on new stuff and maintaining the old, especially when you’re not using any IAP, so there’s no monetary incentive for providing more content? I’d love to hear some of your strategies or tips!

A Debugging Lament. And, Totally Unrelated – Simple Gesture Recognition

When Debugging Sucks (More than usual)

For my iDevBlogADay post today, I considered writing a tutorial about debugging, but ultimately decided against it. For one thing, it would most likely have just ended up being me venting about this freakin’ bug that plagued me on and off for months. For another thing, who would really trust a lesson in debugging written by a guy who took 4 months to solve one little bug? 🙂  So, I’ll get on to my main topic in a minute, but first, a few quick tips from my experience:

1. Just because the bug first appears when you start adding animations, doesn’t mean it’s a problem with how you calculate your animations; it could have been hiding there in the background all along and only appeared when you started trying to automate some things instead of reacting directly to user input…grrr

2. Just because the bug appears to be a math error, it may not be. Remember that tiny little “if…else” statement that you set up WAY back at the beginning of this project? Yeah, you didn’t account for the rare, but oh-so-important case when you’ll actually need to run the code in BOTH branches. “if…if” FTW!

3. When your “bug” is actually two very similar bugs that manifest in nearly identical ways, but are caused by slightly different circumstances, just face it, you better start praying and offering some sacrifices to the programming gods, ’cause they are obviously angry with you. Good luck trying to find a way to reliably reproduce THAT issue!

4. Do not, I repeat, do not stop digging until you *know* that you’ve found the root cause. “Well, the problem is happening somewhere in this general area, so maybe if I rework this part right here and see what…” STOP!! Keep. Searching. Now, it’s one thing to make adjustments as a part of the bug-finding process. It’s a whole other thing to just start changing things hoping the bug will disappear. It’s not like I didn’t know that before, it’s just that, *hangs head in shame* I got so desperate and…sniffle…sob…*takes a deep breath*…OK, I’m back. On the bright side, my shot-in-the-dark refactoring adventures DID result in a much better understanding of my code and in the end, a simpler, cleaner implementation.

Now, onto the main topic: simple gesture recognition techniques

Tap, Drag, Hold

iDevices are all about multitouch. It’s one of the breakthrough features that made the original iPhone so unique, and its implementation in iOS has been copied by every other similar smartphone and tablet out there. Over the years, as people got used to the technology, and especially after the introduction of the larger-screen iPad, gestures have become a huge part of the iOS experience. Personally, I have yet to dive into programming with UIGestureRecognizers, but I’ve read the docs and watched the WWDC videos, and it really is amazing. If you’re looking to implement complex, multi-touch, multi-tap gestures on many overlapping and interacting views, then there is some really powerful stuff in there.

But what if you don’t need all that? For me, the most important gestures for my metronome app are simple taps and drags, along with translating their locations into angles around a circle. I’ve demonstrated the basics of this second part in this post about rotating a circle along with a user touch. Rather than learn a whole new framework and go through the process of attaching multiple gesture recognizers to my view, with different methods for different gestures, I was looking for a simple way to implement single taps and drags right within the UIControl touch handling methods I was already using. Turns out, it’s not hard at all. (I’m using a UIControl subclass for this, but there are very similar touch handling methods on a basic UIView that will get you the same results.)

Touch handling for UIControls consists of 4 main methods:

-beginTrackingWithTouch:withEvent:

-continueTrackingWithTouch:withEvent:

-endTrackingWithTouch:withEvent:

-cancelTrackingWithEvent:

This particular implementation does not handle multiple touches, so the “multipleTouchEnabled” property of the view should be set to NO. In the -beginTracking method, we’ll grab the location of the touch that’s passed in with, conveniently enough, “[touch locationInView:self]” and store it in a CGPoint variable. We’ll also need a “tap” flag, which we’ll set here to YES.

For however long this individual touch stays down, we’ll get calls in the -continueTracking method each time the touch moves. Compare the new location to the initial touch location we stored above, and – using good old Pythagorean Theorem – calculate the distance between them. If the touch has moved more than some predefined distance, then set the “tap” flag to NO, and continue on with whatever tracking you need to do. In my case, I start tracking the angle of rotation around the center and send this info to a delegate.

In the -endTracking method, called when the finger is lifted, simply check the “tap” flag: if the touch never moved from the original location, then “tap” is still YES, and you execute the code you want for handling a tap. Otherwise, handle the end of the dragging gesture. Without too much more work, you could also implement a tap and hold option: start a timer in -beginTracking. If the touch moves, cancel the timer at the same time you set the “tap” flag to NO. If the timer fires, then you know the user touched and held without moving. If desired, it would also be possible with a timer to set a short delay and listen for multiple taps, but then we’re getting into territory that might be good for an actual UIGestureRecognizer, or at least some different implementation that can properly make use of UITouch’s tapCount property.
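
Putting those pieces together, here’s a minimal sketch as a UIControl subclass. The names (TapDragControl, kDragThreshold) and the threshold value are my own, and the tap-and-hold timer is left out for brevity:

    #import <UIKit/UIKit.h>

    static const CGFloat kDragThreshold = 10.0; // points of "wiggle room" before a tap becomes a drag

    @interface TapDragControl : UIControl {
        CGPoint initialTouchPoint;
        BOOL isTap;
    }
    @end

    @implementation TapDragControl

    - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
    {
        initialTouchPoint = [touch locationInView:self];
        isTap = YES;
        return YES; // keep receiving continueTracking... calls
    }

    - (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
    {
        CGPoint point = [touch locationInView:self];
        CGFloat dx = point.x - initialTouchPoint.x;
        CGFloat dy = point.y - initialTouchPoint.y;
        // Pythagorean distance from the initial touch location.
        if (sqrt(dx * dx + dy * dy) > kDragThreshold) {
            isTap = NO;
            // ...continue whatever drag tracking you need here...
        }
        return YES;
    }

    - (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
    {
        if (isTap) {
            // ...handle a tap...
        } else {
            // ...handle the end of a drag...
        }
    }

    @end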

All in all, I’m really happy with how this turned out. Once I thought about it for a minute, it was far easier than I anticipated. It’s a nice, simple implementation that can be stuck directly into the existing touch handling methods of UIControl/UIView, and it even lets you easily define what kind of “wiggle room” you want to allow before a tap becomes a drag, as well as easily set the time needed to count as a “tap and hold” if you go that route. With just a few lines of code and a couple flags checked in the right places, you can implement different actions for tap, drag, and tap and hold, plus various combinations of the three, on any UIControl or UIView subclass.

Time Flies When You’re Trying To Ship That App

Crunch Time

Wow. I can’t believe it’s already been four weeks since my last post. For one thing, that means I missed my iDevBlogADay slot last time around. For another thing, it means I’m less than 2 weeks away from my deadline for uploading my metronome app binary (part of why I missed the last post – super crunch mode!). But alas, it’s not going to happen. For anyone who may not know what I’m talking about: iTunes Connect allows you to begin the process of submitting an app – reserving the name, adding metadata, etc. – without actually uploading an app binary right away. However, to prevent name-squatters, once you start that process, you’ve got 120 days to submit. If you miss the deadline, your app is automatically deleted, and the name is back up for grabs for anybody but you! That’s right, you can never use that name for an app again. You may have noticed above when I mentioned “super crunch mode.” But wait, you might be thinking, isn’t that against what being indie is all about? Working for yourself, when you can, no forced week/month-long crunches with no sleep and mandatory overtime to push out a product by some unrealistic deadline…

Not THAT Email!

…So, there I am innocently checking my inbox, when, bam! “You have not yet uploaded a binary for your app…If you do not upload a binary for your app by 14 October 2011 (Pacific Time), it will be deleted from iTunes Connect. The app name will then be available for another developer to use.” When I reserved my app name and began this whole process, I was well under way with what I thought would be the hardest part of the programming. Turns out I was right, it was the hardest part. So hard – or at least I’m still enough of a noob – that three months later I’m still messing with a few weird edge case bugs that, when they occur, render the controls basically unusable. Yeah, I get that email, and start to panic – OK, if I can get such and such a feature done in the next two days, then I’ll have enough to send out to testers, oh wait, still gotta find more testers. Then, all I’ll have to do is crank out an icon, finish the main screen design, decide which features to cut even though they were part of what would make my app unique, oh yeah, and there’s still that nasty bug lingering around…no problem, I can get it done in four weeks! And I tried. Well, I tried for about a week – putting off classwork, procrastinating on responsibilities for my actual (part-time) job, barely sleeping, having no time for my wife and friends…and then life caught up with me and I just HAD to step back and do some other stuff. And to be honest, I’m glad I did.

What To Do, What To Do…

That week, when I was trying to power through and crank out an app – a 1.0 version that would certainly be buggy, with subpar, rushed design – I hated it. There were a handful of thrilling moments, when major improvements came together quickly by sheer force and willpower and it looked like maybe I had a chance, but for the most part, it was awful. So, I’ve decided to let the deadline pass. Will I lose my app name? Maybe, although I had a few variations in mind already, so maybe it won’t be so bad to lose this particular one. There are also a variety of supposedly valid ways to game the system and work around the deadline, but I won’t go into those. Google will help you out if you’re curious, but I’m just not sure how much stock to put into random, usually old forum postings of people saying they tried such and such, and it worked to save their desired name without actually releasing the app.

I’m not going to get sucked into two more weeks of hell only to put out a crappy, unpolished product. That’s exactly what’s already out there (with some exceptions, of course) and the reason I’m trying my hand at an already well-saturated market. I’m not going to completely throw away some of my desired and unique features because I jumped the gun on registering my app name. This is only my second major app project, so needless to say, I’ve still got a lot to learn about programming and time management and estimation; I think it’s OK to cut myself some slack. I’m not going to get bogged down by the extreme stress a crazy two weeks like this would produce. What am I going to do? Get my app fully functional and thoroughly tested, and actually enjoy doing it, not to mention try to make the most of the potential 1.0 initial release sales bump with a quality product. I’m going to enjoy this beautiful fall weather and take my dog on walks like I did today. I’m going to go camping this weekend with great friends. I’m going to live life how I want to and allow space for some hard work mixed with a little creativity, and give time a chance to do its magic and hopefully lead me to a great app. And that’s how you do it indie style 🙂

Oh yeah, and I’m going to push that button over there and publish this post after midnight US central time. Oops, guess I missed my iDevBlogADay deadline again. Oh well, at least Miguel won’t automatically delete my post and make me choose a new name!

Circular Layout and Scrolling – Part 2

I think for this post I’d like to continue implementing the rotary control I started in my previous post. In that example, all we really did was figure out how to arrange a series of numbers around a circle. I briefly mentioned how you could implement a rotation on the entire circle, but didn’t get into the details of how that works. So, for this post, we’ll add a bit of touch handling code to take care of rotating the entire circle based on the movement of a user’s finger on the screen.

Touch Rotation

Although the iOS multitouch system is very advanced, for this example, all we really need to do is track a single finger as it rotates around the center of our circle. To do this we’ll implement the touchesBegan:withEvent: and touchesMoved:withEvent: methods of UIView, which are inherited from UIResponder. Since we’re now building this example out into a more full-fledged control, we’ll go ahead and create a subclass of UIImageView that will take care of displaying the same view as before, as well as handling touches to implement rotation. I’ve added the basic number layout to a “setup” method that’s called from both initWithFrame: and initWithCoder: (which is used when the view is loaded as part of a .xib file).

One more method we’ll need is the one that actually calculates the angle of the rotation. What we need to know is the angle between the user’s finger and the center of our view. Now, I’ve learned some of this stuff before, but my trigonometry skills are, let’s just say, a bit rusty, so when it came time to do these calculations, I was lost at first. However, Matthijs Hollemans has a nice open source rotary knob control that saved me a lot of time and the headache of figuring this out. It turns out that math.h has a very helpful function called atan2(y, x). The introduction of the Wikipedia article sums it up nicely: it returns the angle in radians between the positive x axis and the point given by the coordinates. What does that mean for us? Calculate the x and y distance of a touch from the center of our view and pass that to atan2, which will return the angle of the touch relative to our center.

Once this calculation is figured out, basically all we need to do is store the angle of the initial touch in touchesBegan: and then, each time touchesMoved: is called, calculate how much the angle has changed since the last call and pass this in as the rotation value to the CGAffineTransformRotate function to rotate the current transform by that angle. Here’s the source on paste bin. It’s the header and implementation in one paste. It’s just a UIImageView subclass, so drop it in IB or add it programmatically like you would any other UIImageView to see what it looks like. Or skip the background image and just subclass UIView instead. Everything else will work exactly the same.
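
In condensed form, the core of it looks something like this. This is a sketch (the class name RotaryDialView and the lastAngle ivar are mine – the pastebin version linked above is the real reference); note it measures the angle in the superview’s coordinate system so the view’s own rotation transform doesn’t skew the math:

    @interface RotaryDialView : UIImageView {
        CGFloat lastAngle; // angle of the most recent touch, in radians
    }
    @end

    @implementation RotaryDialView

    - (CGFloat)angleForTouch:(UITouch *)touch
    {
        // Work in the superview's coordinates so the measurement isn't
        // affected by this view's own rotation transform.
        CGPoint location = [touch locationInView:self.superview];
        CGPoint center = self.center;
        // atan2(y, x) returns the angle in radians between the positive
        // x axis and the point (x, y).
        return atan2(location.y - center.y, location.x - center.x);
    }

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        lastAngle = [self angleForTouch:[touches anyObject]];
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        CGFloat angle = [self angleForTouch:[touches anyObject]];
        // Rotate the existing transform by however much the angle changed.
        self.transform = CGAffineTransformRotate(self.transform, angle - lastAngle);
        lastAngle = angle;
    }

    @end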

NB: In general, any custom touch handling object really should also implement touchesEnded: and touchesCancelled:, but in this particular case, it doesn’t really matter much to us when or how the touches stop. When our touches end (or get cancelled), our touchesMoved: method stops getting called, and the UI is simply left at whatever rotation it last received. Once new touches start, we store that initial angle value and continue on from there.

Preview

There’s lots more I could show on this whole concept, especially about how to connect the rotation with a certain selection or value, and about how to display more content than can fit around one circle, but I don’t want to go too much further right now for a few reasons: 1) My implementation works pretty well and is quite flexible, but the animations aren’t all quite right, and it’s still got a few nasty edge case bugs I’m trying to work out, so it’s not yet ready for mass consumption! 2) I’m not sure how much of this will be interesting to everybody. So, what I thought I’d do is share a little video showing at least the visual aspect of what this thing can do, and if anyone is interested in seeing more code when it’s fully baked, then let me know what you’d like to see, and I’ll share what I’ve learned! Sorry for the bad audio; I’m just using my earbuds’ microphone and trying to be quiet and not wake up my wife. Oh yeah, and the weird video aspect ratio – I wish the iPhone simulator wouldn’t act so jumpy when you rotate it. Anyway, please let me know what you think and what (if anything) you’d be interested in hearing more about with this control.