Nitpicking iOS Notification Banners

We’re all familiar by now with the iOS notification banners that appear at the top of your screen, sliding down into view from offscreen.

In general these are great. They’re certainly a big improvement over the full-screen alerts from iOS 4 and earlier. But the banners can get annoying when they slide over app content you need to see, especially navigation bars.

The nuclear options – the ones that turn off banners altogether – are too extreme. Luckily iOS has a simpler way. You can dismiss a banner early by swiping up, from the bottom of a banner to the top edge of your screen.

Here’s my nitpick though: why is this gesture only allowed in a vertical direction? The target region is so small that in practice I often end up triggering a tap, i.e. the exact opposite of what I intended to do.

Perhaps it’s for logical consistency. The banner appears top-to-bottom, so dismissing it occurs in reverse. But this isn’t an important enough spatial rule in my opinion.

You should be able to swipe horizontally to flick a notification off screen. We did it this way in Riposte and it was awesome.

Can you think of a good reason why horizontal dismissals shouldn’t be allowed?

|  June 24 2014




David Rönnqvist on CALayer Animations

David Rönnqvist has a new post today, "Multiple Animations", on the interplay between competing explicit and implicit CALayer animations.

David’s site is gorgeous, but it’s also a textbook case for why we need RSS. David publishes new posts infrequently, yet none of them should be missed. Subscribe to his site here.

|  June 23 2014




Healthy Skepticism – My Critique of HealthKit as Both iOS Dev and Registered Nurse

Of the many new APIs announced at WWDC this summer, HealthKit has been particularly thought-provoking for me. At the risk of sounding like that guy, I think I have a somewhat privileged perspective on HealthKit. There can’t be that many former registered nurses who’ve switched to iOS app development and tried to start a healthcare data company.

I’ve devoted the better part of the last four years to understanding the healthcare industry, both its current problems and its possible futures. Along the way I’ve learned many things – some hopeful, some downright depressing. I ought to describe how HealthKit looks from my vantage point.

Before jumping into HealthKit, let’s take a step back and look at the past and present state of healthcare information – what it is, where it’s stored, and how it’s transmitted and used. I’ll limit my description to the US since that is what I’m most familiar with.

Stacks of Paper

When I was a nurse, I worked in critical care. A typical patient at my hospital was brought in via ambulance or helicopter from an outlying urgent care facility. Though I worked at a hospital in Nashville, it was not uncommon for us to admit patients transferred to us from hospitals as far away as Kentucky. A transfer patient would be wheeled out of an ambulance and onto my ward by EMTs hired to ferry patients between hospitals. Tucked into the corner of the mattress, I’d find several fat manila envelopes filled with stacks of paper printouts from the outlying facility’s electronic health record system (EHR). There were so many pages it wasn’t possible to use them as a working reference. Instead, I primarily relied on the verbal report from the EMT to learn a patient’s past and present condition. It was only later, after our doctors had a chance to review the reams of paper printouts, that the full picture would begin to be revealed.

Though the transfer patient may have been in the outlying facility for days or weeks, as far as our EHR was concerned, today was Day Zero. Discontinuity between caregivers’ records increases the likelihood of mistakes. Doctors and nurses go through a great deal of training in order to verbally communicate patient data as efficiently and safely as possible. This training helps us offset the risks of fractured medical records. Those stacks of paper became a supplementary reference, secondary to verbal reports. It would take a day or two before our own EHR would be populated with enough of that patient’s data to become a primary reference.

From Paper to EHRs

Until relatively recently, the vast majority of medical records in the US were recorded on paper. From routine doctor visits to lengthy stays in critical care, every piece of data – lab results, medication orders, progress notes, etc. – was written or typed on paper and stored in massive warehouses. It wasn’t until the 1990s that electronic health records (EHRs) started to gain widespread traction. Doctors and hospitals were under no legal obligation to use EHRs, so the only providers to use them did so for organizational efficiency.

There have been numerous studies of the impact of EHRs on patient care, with mostly positive results. The consensus is that EHRs improve institutional logistics (billing accuracy, resource management, etc.) and help decrease medical errors, if sometimes at the expense of time spent at the bedside. They also contain latent possibilities for medical research and population health management – but only if most doctors and hospitals go fully paperless.

Though there are hundreds of EHR vendors, a mere handful of major players have dominated the market – companies like Epic, Cerner, Allscripts, and Meditech. Every vendor has its own unique software stack, from data storage to caregiver applications. There is no common database linking all these software products together. Every institution’s medical records are trapped within proprietary silos. Any interoperability with other EHRs has been made possible only on an ad hoc basis, at the whimsical discretion of EHR vendors and their customers. In practice, interoperability is virtually nonexistent. Patients are transferred between institutions with a stack of paper printouts, or nothing at all.

There are two main reasons why EHR interoperability hasn’t happened: it would be bad for business, and technical standards are lacking.

Interoperability Would Be Bad for Business

It’s disappointing but unsurprising that EHR vendors would keep medical data trapped inside their silos. If medical data were distributed via a shared database, their products would be reduced to either dumb pipes or thin client apps. Being a dumb pipe is bad for business. Selling thin clients isn’t a great option, either. EHR user interfaces are notorious for their terrible design. As a former registered nurse, I have plenty of interface design horror stories I could share with you. The reason these apps are so poorly designed is simple: they’re enterprise software. The customer is the hospital administrator, not the bedside nurse. The real money is in long-term, multi-million-dollar contracts with institutions who aren’t anyone else’s customers.

Interoperability isn’t in the interests of most healthcare providers, either. As a healthcare provider, you want the other institution to make it easy for you to see their data, so you can make your facility more efficient. But you have a neutral or negative interest in providing the same openness in return. Why would you invest in infrastructure that makes it easy for your patients to go somewhere else? Business models or legal requirements – or both – would have to change in order for EHR vendors and healthcare providers to be willing participants in a world of shared medical information.1

Interoperability Would Be Technically Challenging

There are technical obstacles to interoperability, too. Medical information is incredibly complex to model. It’s edge cases from top to bottom. Even something as simple as defining the possible values for a person’s gender raises difficult questions about biological versus preferred sex. Out of necessity, a number of protocols have been developed over the years that can encapsulate medical data in transit between subsystems within a given institution.

The most commonly used protocol is called HL7 – a gargantuan protocol with many variants. In the real world, no two institutions use the exact same implementation of HL7. Most systems in the US use one of the 2.x versions, which are pipe-delimited, prone to error, and not human-readable. Here’s a typical HL7 message for a lab result:

MSH|^~\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4
 PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^
 ^STATESVILLE^OH^35292||(206)3345232|(206)752-121||||AC555444444||67-A4335^OH^20030520
 OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730|||||||||
 555-55-5555^PRIMARY^PATRICIA P^^^^MD^^|||||||||F||||||444-44-4444^HIPPOCRATES^HOWARD H^^^^MD
 OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105|H|||F

Yeah. Right. That’s a far cry from the tidy, readable JSON response from your garden-variety social media API.

There is a newer 3.x series of HL7 that is based on XML, but few EHRs in the US are actually using it. Thus the 2.x sample above is the current state of the art of medical data exchange. Since HL7 2.x is pipe-delimited, it is easy for implementers to insert data between the wrong pipes, breaking the already weak links between EHR subsystems. This happens so frequently that an entire industry exists just to solve this problem.

The deeper problem with HL7, in my opinion, is that it isn’t designed for persistence. It’s a means to encode ephemeral messages. The actual work of when and how to send messages, and where to store their contents, is left up to each EHR vendor. Linking together EHRs from two different vendors would be an enormous engineering task. A shared repository of private medical records would need something much more readable and resilient than HL7. It would need to look more like the JSON messages used by modern, RESTful web APIs.
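To make the contrast concrete, here’s a purely hypothetical sketch – my own invention, not any existing standard – of how the glucose result from the HL7 sample above might look as JSON:

{
  "patient": {
    "id": "555-44-4444",
    "name": "Eve Everywoman"
  },
  "observation": {
    "code": "1554-5",
    "description": "Glucose, post-12-hour fast",
    "value": 182,
    "units": "mg/dl",
    "referenceRange": "70-105",
    "flag": "high",
    "observedAt": "2002-02-15T07:30:00"
  }
}

Anyone can read that at a glance, and a malformed message fails loudly in a JSON parser instead of silently landing between the wrong pipes.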

HITECH and Meaningful Use

Earlier in this post I wrote that EHRs didn’t begin to gain widespread traction until the 1990s. This was an overstatement of the facts. The reality of EHR usage is that – even as late as 2009 – fifty percent of US hospitals were only halfway electronic. Most just converted the easy stuff to electronic records, like lab results. Less than one percent (!) of them had completely moved beyond paper records. Many still had no electronic records at all.

The American Recovery and Reinvestment Act (ARRA) passed by the US Congress in 2009 included a landmark set of reforms aimed at dragging US medical institutions kicking and screaming into the 21st Century – or at least the 20th. Not to be confused with the “Obamacare” reforms, the HITECH Act obligated US healthcare providers to demonstrate “meaningful use” of electronic health records. Meaningful Use, as the program has come to be called, ties Medicare reimbursements to EHR usage. A series of requirements, broken up into stages, will be rolled out over the next decade. Each successive stage unveils more stringent rules. Institutions that meet or exceed the current criteria in a timely fashion will earn bonuses on their Medicare reimbursements. Institutions that don’t will face penalties. Medicare reimbursements are bread and butter for healthcare providers, so there is strong motivation to keep up with the demands made by Meaningful Use.

The Meaningful Use criteria are still being defined, but the ones that have already been put into play are praiseworthy. Institutions must be able to electronically transmit a Continuity of Care Document (CCD) upon demand. A CCD is a brief summary of a patient’s past and present medical conditions. This requirement is aimed at solving the “stacks of paper” problem above. The CCD is a glorified PDF, but it’s the next best thing to having truly interoperable EHRs. Other Meaningful Use requirements are aimed at improving patient safety by requiring barcode scanning before administering drugs (BCMA), or requiring doctors to use specially-designed software to write orders instead of pen and paper (CPOE).

The most intriguing part of Meaningful Use is that it places the burden of proof on medical care providers, not EHR vendors. It’s up to each institution to select an EHR that supports Meaningful Use criteria. EHR vendors are in a mad rush to update all their products to meet the minimum requirements in time.

It is not yet known if Meaningful Use will ever require true interoperability between EHRs. If that happens, I would be extremely pleased, as a software developer, a former nurse, and a patient. With congressional lobbying being what it is in the US, I doubt EHR vendors or healthcare providers will ever let true interoperability become a legal obligation.

The False Promise of HealthKit

To a layperson, the introduction of HealthKit at WWDC makes it look like Apple hopes to provide the foundation for a future of shared medical data. The example use cases looked pretty cool at a glance. According to Apple, your doctor could conceivably have easy access to vital signs obtained by a Withings blood pressure cuff connected to your iPhone. The list of HealthKit partners, like the Mayo Clinic and Epic Systems, was particularly impressive. But I don’t think either HealthKit or Apple is in a strategic position to escape the forces that keep our medical data trapped in the status quo.

The first problem with HealthKit is that it can only model a tiny fraction of the spectrum of medical data. There is a very long list of things it can’t do: track medication doses, doctor’s orders, procedural notes, etc. But let’s assume for the sake of argument that HealthKit eventually ships with model classes for every conceivable type of medical data. It still wouldn’t be able to bring about EHR interoperability.

As I discussed above, interoperability is technically challenging no matter who attempts it. Apple clearly has the capacity to tackle the technical issues if it really wanted to. The central problem for interoperability is one of motivation. Who has the power to compel all the hospitals and EHR vendors in the US to open up read/write access to their medical records?

In my estimation, there are only two entities capable of doing so. The first and obvious one is the government. If Meaningful Use ever mandates one-hundred-percent interoperability, then the industry would have no choice but to comply.

The second entity would be a for-profit company that offers healthcare providers a mutually beneficial partnership. This company would compel hospitals to allow it access, but with a carrot instead of a stick. If there were a way that hospitals could benefit from partnering with an open EHR framework, then they might happily allow their siloed data to flow freely between competing institutions.

Unless I am misjudging Apple’s intentions, HealthKit looks like another way to keep high-end customers loyal to the iPhone and other Apple products. As such, it’s against Apple’s interests to make HealthKit available on competing platforms like Android or Windows. But for stored medical data to be of any significant use to healthcare providers, it can’t be limited to just A) patients who own iPhones and use HealthKit apps and B) providers with EHRs configured to access those apps. It’s unreasonable to expect that either healthcare providers or EHR vendors would devote limited engineering resources for the sake of a handful of patients, especially when the laundry list of pending Meaningful Use requirements is still so long.2

In practice, I expect HealthKit will have little or no impact on professional healthcare delivery.3 I think the experimental partnerships between Apple and the companies listed during the WWDC Keynote will remain exactly that: experimental. It will take a lot more than HealthKit to make a dent in the universe of healthcare.


  1. Clayton Christensen’s book on the business of healthcare offers a fascinating exploration of these kinds of problems. 

  2. This logic is the same for any hypothetical Apple wearable device, too. 

  3. The personal fitness industry is another story, however. HealthKit is an excellent, well, fit there. 

|  June 19 2014




Thanking My Dad for Caring About “Getting It Right”

It’s Father’s Day. I’m relinking to this post about my dad’s lesson on always doing your best work. My dad cared enough about “getting it right” to make creative work an issue of character, not just a hobby. Thanks, Dad. I hope to teach this to my son, too.

|  June 15 2014




Maglus Stylus Review

Full Disclosure: Applydea gave me review samples of the black Maglus and interchangeable tips to try out for this article. Even so, everything I write below is what I really think.

The best iPad stylus is also the one you’ve probably never heard of: the Maglus by Applydea. There’s a lot to like about the Maglus. Its sturdy aluminum body was – to my knowledge – the first to be shaped like a carpenter’s pencil. It has strong magnets hidden under the rubber pads, which make it easy to snap onto a Smart Cover or the side of a cabinet. Most important of all, it has the best tip of any stylus I’ve used. The silicone material registers touches faster than any other stylus out there. Its nearly-spherical shape retains its form under a wide range of pressures, which helps with accuracy as well as feel.

Made with Paper and Maglus.

The Maglus team reached out to me to see if I’d be interested in trying out their newer anodized black model. Having been happy with the standard aluminum finish, I expected not to like the black one as much, but I was wrong. The black finish looks really nice in person. If you’re a fan of darker iPhones and iPads, you’ll appreciate it. For reasons I can’t quite express, the darker color feels more appropriate for a drawing tool than the aluminum finish, at least to me.

New anodized black model, with extras.

Applydea also included an interchangeable microfiber tip for me to try. It looks like a tiny version of the wire mesh that encloses a microphone like the Yeti from Blue, but feels like smooth cloth.

Alternate microfiber tips.

The microfiber outer layer is wrapped tightly around some kind of dense material. I was expecting it to feel spongy, but instead the tip feels stiffer than the silicone version. More force is required to get it to register a touch, but there is less overall friction between the tip and the iPad’s display. I still prefer the original silicone tip. I tend to write and draw with light pressure, so the microfiber tip posed problems for me. If you have a heavier hand than I do, you might prefer the microfiber tip.

If you’re curious about how the Maglus compares to the Pencil by FiftyThree, I wrote a comparison review last year. Everything I wrote then still applies today. The Maglus is without a doubt the best all-around stylus you can buy.

|  June 12 2014




Unread for iPad

Unread for iPad is available on the App Store today. It’s a brand new app with a clean, distraction-free reading experience. It has all the sharing features you’ve come to expect from the iPhone version, as well as the full set of syncing services: FeedWrangler, Feedly, NewsBlur, Feedbin, and Fever.

I’m proud of the way this app came together. Compared to the iPhone, designing for the iPad is especially difficult. The iPad presents a challenging mixture of established interface patterns, awkward display dimensions, and a comparatively infinite canvas of pixels. Unread for iPad balances all these constraints against an overarching goal of mental and physical comfort.

You can navigate anywhere in the app from the edges of the screen. There’s no need to constantly reposition your hands. Just sit back and read your favorite online writers wherever you’re most comfortable.

Unread for iPad is $4.99 (USD) on the App Store. Also, in case you missed it: Version 1.3 of Unread for iPhone was released to the App Store last week. It has lots of bug fixes and performance improvements, especially for older iOS devices. Two new hidden themes, too.

|  June 09 2014




Smartphones, the Internet of Things, and the Death of Software

Inventions that change our lives are magical. They pry us free from physical laws. The printing press enabled the thoughts of a distant writer to multiply, spread, and live forever. The telephone stretched casual conversations – conversations that would have barely crossed a dining room table – until they spanned the globe. Remember what Steve Jobs called the personal computer? A bicycle for your mind.

For the next big thing to be the Next Big Thing, it must be magical. It must free us from some constraint that seemed immovable the day before. In what ways are we still bound to a technological or mechanical necessity?

The Internet in Your Pocket

What is it about the smartphone that has made it so influential? At a tangible level, the smartphone is a combination of technologies: a touch screen, user-friendly software, mobile chips, compact batteries. But at a more abstract level, the smartphone is The Internet in Your Pocket. Of all its contributions, I think it’s the always-on, always-connected, and always-with-you nature of the smartphone that has been its defining trait. The smartphone connects us to the teeming whole of human ideas, at all times and everywhere.

The untethered freedom of the Internet in Your Pocket has had both quantitative and qualitative effects on how we use the Internet. We spend more time on it than ever, and we also spend that time in new ways: messaging, social media, sharing photos, watching TV and movies, etc. Almost every app of consequence on my iPhone is backed by some kind of Internet-based API. My iPhone is pretty boring when it’s in Airplane Mode.

The smartphone transformed the Internet from a thing we use in one place into a thing we use anyplace. The difference between the corner of your kitchen and everywhere is hard to overstate. It’s for this reason that I respond to some people’s exuberance about the Internet of Things with a smirk. The Internet in Your Pocket is way more interesting than the Internet in Your Toaster. The latter is an incremental change that builds upon what the smartphone has begun. I don’t expect web-connected home appliances to change the lives of the people who buy them, certainly not at the magnitude that the smartphone has changed them.

The Death of Software

Rather than an Internet of Things, I like to imagine that a truly intelligent, ubiquitous artificial intelligence would change our lives as profoundly as the smartphone has.

Through the present day, our concept of software has been a more-or-less static arrangement of logic and design. The user has a goal (manage her tasks, be entertained, etc.). The app is built to help her meet that goal. But the user has to squeeze her life into a shape that conforms to the software. If she’s lucky, there’s at least one app that fits her well enough to get the job done. But even the best piece of software still has rough edges. It’s indirect. It has a learning curve. It’s unaware of her context, and unwilling or unable to act in concert with other apps the user needs.

A truly intelligent artificial entity, as I envision it, would turn this situation upside down. Instead of the user conforming to the software, the software would conform to the user – a deceptively simple change that would have vast implications.

Software concepts that have been with us since the beginning of the personal computer would no longer be relevant. For example, apps as discrete experiences would be obsolete. There would no longer be any need for a web browser, a messaging app, a todo list app, etc. There would only be one app: the interaction between the user and the AI. Everything else would be built on an ad-hoc basis, in real-time, then thrown away:

"What do I have to do today?"The AI constructs a todo list, artfully typeset and formatted to compliment the tastes of the user.

"My kid won’t stop crying. Can you make him a game?"The AI constructs a simple game pitting the child’s dog as a hero versus his villainous school teachers. The levels progress according to patterns established by well-designed games of yesteryear.

"Where should we eat?"The AI presents what amounts to a Yelp-like interface, built from scratch using everything it knows about your family, what you eat in general, food allergies, what food you haven’t had lately, how long it takes to arrive and order food, etc. It’s not a startup’s MVP. It’s just for you.

And these are just the effects that such an AI might have on a personal electronic device. One can easily imagine the huge changes that such an entity could bring to medical care documentation, scientific research, and more. For every stereotypical bit of AI science fiction, there are dozens of life-changing applications that would be too boring to put in a film, even if they’d make a fortune.

Software, instead of feeling like a sea of half-baked ideas with a few rare gems, would feel like the bicycle of the mind you’ve always wanted but never thought possible.

I like to imagine this kind of AI growing out of an industry like video games. It’s not hard to imagine a time when gaming hardware is so powerful that there aren’t enough artists to create objects at the full level of detail that the hardware is able to render. To keep pushing the level of realism, a team of game developers would undertake the task of creating an AI with intuition and taste. Level designers would interact with the AI in loose, human terms:

"Make it gloomier."

"Put a neighborhood here with two story houses. Wait, three stories. These four need flood damage."

"The guy who lives here reads comics and he’s been on vacation for a few months."

The AI level designer would respond to comments like these by assembling realistic worlds and objects – not procedurally generated stuff, which would look intentionally random, but realistically generated stuff: a tarp covering a leaky roof; a dog’s nose prints on a storm door; soggy U-Haul boxes; a stack of mail. The game developers will think they’ve built a design tool, but what they’ll actually have built is the death of software as we know it.

The question that makes me uncomfortable with this idea: if this were to happen, what would happen to software developers?

|  May 27 2014




Friday App Design Review – Castro for iPhone

Every Friday I will post a detailed design review of an iOS app. If you’d like your app to be considered click here for more information. I am also available to consult privately on your projects.

This week’s Friday App Design Review is Castro, the podcast app from Supertop. There’s a lot to like about Castro. I like how well Castro balances the constraints of iOS 7, the need for visual affordances, and Supertop’s creative impulse for originality. I especially like how thoughtfully it uses borders.

As I have said many times, few things are as important in iOS app design as borders. Borders aren’t necessarily literal borders drawn around an element. A border is any area where two or more edges meet. A border can be literal, as in the case of a one-pixel horizontal score between rows. A border can also be implied, like the invisible borders around the square margins of toolbar icon buttons.

iOS 7’s confusing visual language has made it harder for third-party apps to handle borders. There are mixed messages suggested by Apple’s stock apps. iOS 7 insists on text-only buttons, except in certain glaring cases. It has a general tendency toward unclear borders between logical sections, though it sometimes uses them with abandon. There isn’t yet a clear pattern for us to imitate. In the absence of best practices, each app seems to strike out into its own unique territory, often with awkward results.

Castro’s particular mixture of literal and implied borders is fantastic. It’s almost always easy to know where one tappable area ends and the next one begins. Literal borders break up the screen in logical ways, reinforcing the navigation hierarchy. Most impressively, Castro manages to do all this within the aesthetic constraints of iOS 7. Let’s look at some of the ways Castro uses borders, and explore ways to make them even better.

Episodes List

One of the biggest risks in Castro is the absence of literal borders between rows of episodes. Without careful planning, one row could easily blur into the next. Castro uses several techniques to solve this problem.

Episodes List

The bold episode titles create a strong implied border at the top of each row.

Implied top borders

The alternating rhythm of the large bold titles and small light body text helps break up the content, too.

The wide left margins are broken up only by podcast artwork, like tabs peeking out of the top of a Rolodex. These thumbnails accentuate the rhythm created by the episode titles.

Artwork folder tab effect.

Notice how the episode summaries are allowed to run to four lines. Your eyes subconsciously parse a summary paragraph as if it were a big rectangle.

Large summary paragraphs

This suggests a strong implied border along the bottom of the row. The large paragraph also counterbalances the concentrated heaviness of the artwork on the far left. The weight of visual elements looks balanced across the width of the row. In a list like this, each cell should feel like an iPad with its center of gravity squarely in the middle.

Individually these elements might not be enough to create strong implied borders. But together the implied borders are unmistakable. The user never doubts where she can tap in order to select an episode. The strength of the implied borders has another benefit: it makes it possible for section headers to have literal bottom borders without blurring the separation of adjacent rows.

Section headers group by date.

Podcasts List

The podcasts list employs most of the same techniques as the episodes list. But notice how the absence of long summary paragraphs diminishes the strength of the implied borders.

Podcasts List

Each row also feels lopsidedly heavy on the left. It’s as if the artwork is a bowling ball near the edge of a plank.

Both the episodes list and podcasts list have variable row heights. Variable row heights can obscure the visual rhythm of implied borders. This effect is more noticeable in the podcasts list because the average row height is shorter. I would suggest adding an additional line or two of metadata to each row, perhaps the date of the latest episode. This would increase the average row height, strengthening the rhythm of the implied borders. It would also distribute visual weight more evenly across the row.

Navigation Bar Border

Castro’s navigation bar has a literal border separating it from the main content. It’s bolder than what is typical on iOS 7, which is laudable. But I think there’s room for improvement.

Here’s a detail view of the navigation bar’s bottom border:

It’s an opaque grey color, most likely:

[UIColor colorWithWhite:0.65 alpha:1.0]

When viewed at a natural distance, it looks like a thin dark line between two white areas. But there’s a problem whenever dark content is scrolled underneath the border. Against the dark content, the border looks light gray. In the detail view above, you can see this in the portion of the border that overlaps the 99% Invisible artwork. At a natural viewing distance, the border loses its crispness. An alternative that works well against any kind of content would be to use a translucent black color:

[UIColor colorWithWhite:0.0 alpha:0.33]

I would use this color and have the border overlap the scrollable content. Here’s a mocked up detail view with this alternate color:

At a natural viewing distance, this border would look crisp against any kind of content.
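For what it’s worth, here’s a minimal sketch of the idea in code – my own illustration, not Castro’s actual implementation. The bar and content views are hypothetical placeholders:

#import <UIKit/UIKit.h>

// Adds a one-pixel translucent hairline just below `bar`, overlapping the
// scrollable `content` behind it.
static UIView *AddHairlineBelowBar(UIView *bar, UIView *content) {
    UIView *hairline = [[UIView alloc] init];
    hairline.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.33];
    hairline.userInteractionEnabled = NO;
    // One physical pixel tall, regardless of the screen's scale factor.
    CGFloat pixel = 1.0 / [UIScreen mainScreen].scale;
    hairline.frame = CGRectMake(CGRectGetMinX(bar.frame),
                                CGRectGetMaxY(bar.frame),
                                CGRectGetWidth(bar.frame),
                                pixel);
    // Because the hairline sits above the content and is translucent black,
    // it blends with whatever scrolls beneath it and stays crisp on any backdrop.
    [bar.superview insertSubview:hairline aboveSubview:content];
    return hairline;
}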

Playback Toolbar Border

The playback toolbar also has a strong border. The toolbar’s background is solid black, which would otherwise disappear against the predominantly dark episode content during playback:

The toolbar has a border which, like the navigation bar, is also an opaque gray:

[UIColor colorWithWhite:0.3 alpha:1.0]

While this border looks okay against the dark episode content, it doesn’t look crisp when the toolbar overlaps the predominant white of the episodes list:


At a natural viewing distance, this grey border looks more like misaligned pixels than a border. The toolbar would look better if the black extended all the way to the edge:


But wouldn’t this undermine the purpose of the grey border when viewing the episode details? Yes, but there’s another way to draw the border which would look crisp in both contexts. First, here’s what the existing border looks like when scrolling between the episodes list and the episode details:

Instead of the opaque grey color, I suggest using a translucent white color:

[UIColor colorWithWhite:1.0 alpha:0.12]

Using this color, I’d extend the border so it overlaps the content above the toolbar. This would both accentuate the crisp dark edge of the toolbar when set against white content and form a strong border when set against dark content.

This has the added benefit of letting the color of the episode details seep into the border, which is in keeping with the aesthetics of the rest of the details screen.

|  May 24 2014




Seeking Advice for a Right-to-Left Language Bug in Unread

This is cross-posted from this Stack Overflow question. If you know the answer I’d appreciate your help.

In Unread, I’m using the NSAttributedString UIKit Additions to draw attributed strings for article summaries in a UIView subclass. The problem I have is that despite using a value of NSWritingDirectionNatural for the baseWritingDirection property of my paragraph style, text always defaults to left-to-right.

Here’s how I form the attributed string (simplified example):

NSString *arabic = @"العاصمة الليبية لتأمينها تنفيذا لقرار المؤتمر الوطني العام. يأتي ذلك بعدما أعلن اللواء الليبي المتقاعد خليفة حفتر أنه طلب من المجلس الأعلى للقض الدولة حتى الانتخابات النيابية القادمة";

NSMutableParagraphStyle *paragraph = [[NSMutableParagraphStyle alloc] init];
paragraph.baseWritingDirection = NSWritingDirectionNatural;
paragraph.lineBreakMode = NSLineBreakByWordWrapping;

NSMutableDictionary *attributes = [[NSMutableDictionary alloc] init];
attributes[NSParagraphStyleAttributeName] = paragraph;

NSAttributedString *string = [[NSAttributedString alloc] 
                             initWithString:arabic 
                             attributes:attributes];

And here’s how I draw the text:

- (void)drawRect:(CGRect)rect {
    [self.attributedText drawWithRect:rect 
                              options:NSStringDrawingUsesLineFragmentOrigin 
                              context:nil];
}

And yet it still flows from left to right:

What am I missing?

UPDATE: B.J. Titus has answered my SO post correctly. It turns out that NSWritingDirectionNatural, despite what it sounds like it does, doesn’t actually introspect the string to determine an appropriate writing direction. It just uses the base writing direction of the current system language. It will even apply a right-to-left margin to left-to-right runs of text. The workaround is to manually determine the appropriate writing direction and set an explicit LTR or RTL direction.
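For reference, here’s a sketch of that workaround – one possible implementation of the approach, not necessarily B.J.’s exact answer. It asks Core Foundation for the string’s most probable language, then maps that to a concrete direction:

#import <UIKit/UIKit.h>

// Determine an explicit writing direction for a string, since
// NSWritingDirectionNatural won't introspect the string itself.
static NSWritingDirection SJWritingDirectionForString(NSString *string) {
    // Ask Core Foundation for the most probable language of the text.
    CFRange range = CFRangeMake(0, MIN((CFIndex)string.length, 400));
    NSString *language = CFBridgingRelease(
        CFStringTokenizerCopyBestStringLanguage((__bridge CFStringRef)string, range));
    if (language.length == 0) {
        return NSWritingDirectionLeftToRight;
    }
    // Map the language code (e.g. @"ar") to its default writing direction.
    NSWritingDirection direction = [NSParagraphStyle defaultWritingDirectionForLanguage:language];
    return (direction == NSWritingDirectionRightToLeft)
        ? NSWritingDirectionRightToLeft
        : NSWritingDirectionLeftToRight;
}

With that in place, the code above would set paragraph.baseWritingDirection = SJWritingDirectionForString(arabic); instead of relying on NSWritingDirectionNatural.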

|  May 22 2014




My Reasonable iPhone 6 Prediction

Since a larger iPhone is all but a given at this point, the interesting question is how Apple will do it. There are several directions Apple could take. Before I delve into speculation, let’s rally around some terms: a display has a logical size measured in points (the iPhone 5s is 320x568 points), a scale factor that maps points to pixels (@2x means two pixels per point in each dimension), and a pixel density measured in pixels per inch (326 PPI for every retina iPhone to date).

Now for some fun speculation.

@3x Scale, Same Logical Size

Apple could increase the iPhone’s scale from @2x to @3x, re-using an existing logical size (either 320x480 or 320x568). This would allow them to use the same display panel already in use in the iPhone 5s, but cut it into a larger shape. This is more or less what Apple did with the first iPad mini; its display panel was the same as that of the iPhone 3G, just larger. The problem with this approach is that it would result in a phone that seems comically large for an Apple product:

320x480 points at 3x scale.

This would look even more ridiculous if I’d mocked up a 320x568 point ratio at an @3x scale.
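To put rough numbers on it (my arithmetic, assuming the 326 PPI density of the iPhone 5s panel): 320x480 points at @3x is 960x1440 pixels, which works out to roughly 2.9 by 4.4 inches of display – a 5.3-inch diagonal. The 320x568 ratio yields 960x1704 pixels, a 6-inch diagonal, versus the 4-inch display of the iPhone 5s.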

Historically, Apple’s designers opt for modest differences in the physical sizes of a given product range (for example, 13-, 15-, and 17-inch MacBooks Pro). So if Apple chose to bump the scale to @3x, I would expect them to also use a higher pixel density than 326 PPI. This would require a new LCD panel, which might be prohibitively challenging. From what I understand, shipping LCDs with a new pixel density would require significant engineering and infrastructure resources, as compared to merely cutting existing panels to a larger size.

@2x Scale, New Logical Size

Alternatively, Apple could re-use the current retina iPhone LCDs, but cut them to fit into a new logical size. I think a logical resolution of 396x656 would be an interesting choice. This would increase the home screen layout by one row and one column:

396x656 points at 2x scale.

To my eyes, this looks like a more sensible size increase and a better use of the larger display. It also has the benefit of re-using the same 326 PPI display panel technology already being manufactured.

My Bet

For all these reasons, I think Apple is much more likely to ship an iPhone 6 with a new logical resolution at an @2x scale than an existing logical resolution at an @3x scale.

|  May 20 2014




A Practical Introduction to Photoshop for iOS Developers

What follows is a crash course in Photoshop for iOS developers. I’m going to take a very nuts-and-bolts approach. I hope to demystify what it is that an iOS app designer means when she says things like “working in vector” or “pushing pixels.” Beware the following caveat: this is an article about tools, not design. If this were an article about ice sculpture, it would teach you how to turn on the chainsaw. It’s up to you to sculpt an angel without losing a limb.

I’ll give you a winter prediction: it’s gonna be cold, it’s gonna be grey, and it’s gonna last you for the rest of your life.

Photoshop is a big beast. In some places its interface design capabilities feel tacked-on as an afterthought. When mocking up an iOS app in Photoshop, you’ll find that you only need a fraction of the available features. The unused features make it hard for newcomers to know where to begin. It helps to find your bearings before opening your first document.

A Stack of Layers

A Photoshop document is a stack of layers that are composited in real time down to a single two-dimensional image. Every layer has several components:

1. Layer Content

Setting aside any other effects or styles that may be applied, a layer’s content is its most basic component. There are five main layer types, each with its own kind of content: raster, fill, shape, text, and smart object.

2. Layer Masks

Every layer has an optional set of masks, which function like stencils. An individual layer can have up to two masks: a raster mask and a vector mask (except shape layers, which can only have a raster mask, since they already have a vector mask by definition). For example, a raster layer could have a heart-shaped vector mask:

Masked Crusader

3. Layer Styles

Each layer has an array of options that apply styles to the inner and outer regions of its content. Layer styles include things like drawing a border around the visible edges of layer content (a stroke), or adding a drop shadow that casts a shadow on layers underneath.

Layer Styles Window

There are lots of layer styles, each with its own suitable purposes and range of possible effects. I’ll go into detail about some of them later on.

4. Blend Modes and Opacities

Non-opaque areas of layer content are composited with underlying layers according to the selected blend mode for that layer. The blend mode selector defaults to “Normal”, but there are many other choices. Many of the blend modes have exaggerated photographic effects, as you can see here:

Three blend modes, same shape and color.

Except in certain specific cases, you should always set each layer’s blend mode to “Normal.” When it comes time to save image slices as PNGs to use in your app, Photoshop will blend non-opaque areas with an empty translucent background, thus losing the information produced by a dynamic blend mode.

There are also two opacity sliders for each layer. The one officially dubbed opacity adjusts the opacity for the entire layer, including any layer styles that have been applied. The other opacity slider is called fill. The fill slider adjusts the opacity of the layer’s contents without affecting the opacity of the styles. The difference between opacity and fill is easier to understand with a visual example:

Opacity versus Fill

5. Layer Groups

Layers can be organized into a group, which looks like a folder in the layer panel. Since Photoshop CS6, layer groups have their own layer styles and masks, as well as opacities and blend modes. This can be difficult to wrap your head around in the beginning, but it comes in handy when mocking up complex layouts.

Working in Vector

With new device form factors always on the horizon, it’s important for iOS designers to build mockups and image resources in ways that are easy to scale up or down as needed. The recommended approach is called “working in vector,” which is not necessarily a reference to working with SVGs. Most iOS app image resources are loaded as PNG files. But that doesn’t mean the Photoshop documents that generated them aren’t vector-based.

A vector-based Photoshop document is composed entirely of shape layers and fill layers. It is trivial to scale a non-retina @1x resolution document up to an @2x document, as long as it’s entirely composed of these two layer types. It’s often as simple as using the “Image Size…” menu item.

Some designers do all their mockups at non-retina scale and then scale up to retina for final processing. I prefer the opposite. Others make large sprite sheet documents that have both normal and retina scale images for app resources, side-by-side, sliced up for easy exporting. Whatever your approach, the most important thing is to avoid using raster layers at all costs. Scaling up a raster layer to a higher resolution will make your hard work look terrible:

Vector versus Raster


Sample Project: A Classic Button

Let’s put together everything discussed above in a sample project. It may not be in-fashion these days, but a classic iOS button is a great learning project for experimenting with shape layers, layer styles, and vector-based documents.

“Gee, our old LaSalle ran great…”

1. Create a New Document

New Document Modal

A document for mocking up an iOS app should be in the RGB color space with a resolution of 72 pixels per inch. I usually work in a 16 bit color depth since it produces smoother gradients. If you plan on exporting PNGs for use in an actual app (say, for button states), be sure to have a transparent background.

2. Disable Color Management

Notice in the screenshot above that I selected “Don’t Color Manage This Document.” Photoshop processes color differently than other OS X apps. When you’re designing for mobile apps or for the web, you’ll want to disable all forms of Photoshop’s color management. In addition to disabling color management for all new documents, you’ll also want to select “Monitor RGB – Display” for “RGB Working Space” under the “Color Settings…” menu item:

Color Settings

Using the native RGB space of your Mac’s display, your Photoshop document will look similar to what you’ll see on a device. But there’s no substitute for using something like Skala Preview to preview your designs in situ on an iPhone or iPad. Marc Edwards from Bjango has an excellent article that goes into detail on color management and Photoshop.

3. Enable Pixel-Snapping

All vertical and horizontal edges in your mockup should be aligned to whole integer pixel margins. It’s possible for shape layers to have path segments that are out of alignment with whole pixel margins. When this happens it makes the edges of your shapes look fuzzy:

Pixel snapping is crucial.

There is an option in Photoshop’s general preferences screen called “Snap Vector Tools and Transforms to Pixel Grid” which, when enabled, makes this much easier to manage.

This button toggles pixel snapping.

If you’re working on a retina resolution document (1536 by 2048 pixels for a portrait iPad), try to make all horizontal and vertical edges line up with even-numbered pixel margins. That way when you scale the document down for a non-retina screen, your edges won’t fall on sub pixel boundaries (which leads to fuzzy edges).
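A quick worked example (my arithmetic, not from the original post): a vertical edge at x = 301 pixels in a retina document lands at x = 150.5 pixels after the 50% downscale, smearing it across two columns of pixels, while an edge at x = 300 scales cleanly to x = 150.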

4. Add a Solid Color Fill Layer

Using a fill layer makes it easy to non-destructively tweak the background color of the document whenever you wish. To speed up my work, I use John Gruber’s system keyboard shortcut for the Help menu search box. I just start typing N-E-W-F-I-L-L until the desired item appears in the drop-down.

New Fill Layer

After picking a color, your document will have a backdrop to go behind the button.

Fill layer in action.

5. Add the Button Shape Layer

First put up some guides to mark where you want the button to go, either with my menu item trick (N-E-W-G-U-I-D-E) or by manually dragging inward from the rulers.

Measure twice, draw once.

Next, choose the rounded rectangle shape tool from the tools panel. You may need to click and hold to switch between the shape tools from the sub-menu.

Shape Tool (click and hold)

When you’ve selected the rounded rectangle tool, the options toolbar changes to show the options for this tool, including a corner radius:

Corner radius option

If you can’t see the options menu, toggle its visibility under the “Window” menu item.

Change the corner radius to something neither too small nor too large. Since this is a big button at retina scale, I think 16 pixels looks good. Now draw in your shape:

Newly-drawn Shape Layer

Notice that there is now a layer in the layers pane called “Rounded Rectangle 1”. I recommend giving a proper name to every layer in your document.

6. Change the Button’s Color

To change the fill color of your rounded rectangle, double click in the thumbnail preview for that layer in the layer pane (can it be less obvious?). This will make the color picker appear. Pick a bright blue color, but not too saturated. Something like this:

New Shape Layer ready to go

7. Experiment With Layer Styles

With the options in the Layer Styles window, there is a shocking variety of possible effects you can achieve – even with just a single shape layer. The full breadth of each tool is outside the scope of this post, but I’ll give you a taste with the following recipe. To show the Layer Styles window, double click in the empty gray area of a layer row in the Layers Pane (yeah, I know, even less obvious).

Inner Shadow

Inner Glow

Gradient Overlay

Gradient Overlay Submenu

Outer Glow

Drop Shadow

Bonus Points: Add an Icon Shape Layer

For bonus points, switch the shape tool to the custom shape option and add a white icon to your button:

Style the icon to make it look sort of like mine does here. Remember this star is just one shape layer. You should be able to create this glossy, raised look with the Layer Styles window alone.

|  May 19 2014




Basement Menus and Breaking the “Rules” of App Design

Luis Abreu has an interesting breakdown of basement menus in his recent post: Why and How to avoid Hamburger Menus. It has some great points and is certainly worth a read, but it got me thinking about when to break the “rules” of UI design.

During the course of my design critique of Glassboard for iPhone, I listed the questions I ask myself when considering whether to use a basement menu in an app:

  1. Is there a single screen where the user spends most of her time?

  2. Is there a dynamic number of equally-weighted menu items?

  3. Are the contents of the menu easy to memorize?

  4. Are hard-to-memorize items used infrequently?

  5. Are the number of items kept to a minimum?

It’s important to note that this is a list of questions, not a list of reasons. There are times when a basement menu is a bad choice, and there are times when it is a great choice. Every app has a unique set of goals and constraints. It’s up to the designer to find a good solution. Don’t limit your choices prematurely by assuming some options are off-limits.

As I’m discovering with Unread for iPad, even design patterns that are almost universally maligned can sometimes be the best choice. Apple, via moments like an on-stage presentation by Phil Schiller1, taught us all how much better iPad apps are than most Android tablet apps. The latter are typically just scaled-up versions of their phone-form cousins, whereas iPad apps are designed to take advantage of the iPad’s display. I don’t know about you, but I found myself immediately agreeing with Schiller’s comment that day. From then on, I took it as a given that no self-respecting app designer would design an iPad app that is just a blown-up form of its iPhone version.

But a scaled-up iPhone layout turned out to be the best choice for Unread on the iPad. As I describe in detail here, a full-screen iPad layout is more faithful to Unread’s goal of a relaxed, focused reading experience. It also makes it possible to navigate almost anywhere in the app without having to reposition your hands from the edges of the device.

The fun and the frustration of creative work are two sides of the same coin. Treat every project like it’s your first. Marc Edwards shared a fantastic anecdote about this recently. He recalls what it was like to work with a lead designer who, from the outside, appeared to be an indecisive flake:

Then it clicked.

He’d intentionally try different and crazy things, knowing that most wouldn’t work. He didn’t care. He didn’t care and it didn’t matter — we’d end up in places we never would have if we over thought the layout. The question wasn’t “what is the best way?”, but “what are the many ways?”, deferring judgement until the last possible moment. Judgement may feel good, but it has no value. The value is in the outcome.

And the outcome was often solid, stunning designs that were unconventional. Non-obvious solutions. From the outside and to other art directors, it appeared magical. But, from within the process, far less nuanced and intentional.


  1. See the 2012 iPad mini announcement here. Schiller’s comments begin around the 11:52 mark. 

|  May 18 2014




No Friday App Design Review This Week

I meant to post this yesterday, but there will be no Friday App Design Review this week. Family is visiting us. I’m working on a special post for iOS devs next week, though.

|  May 17 2014




AutoLayout Myths, Table View Performance, and Side-by-Side iPad App Multi-tasking

With the prospect of a new iPhone form factor and/or side-by-side iPad app multi-tasking on the horizon, iOS app designers and developers are likely to face new challenges when building their apps. Ever since it was introduced, a lot of people have championed AutoLayout as the cure for these challenges. This post is going to debunk that myth. Or, more fairly, it will show the limits of the problems AutoLayout is able to solve.

Multiple Layouts

For the purposes of this post, let’s assume you’re an iOS app developer planning the architecture for an app like Unread. We have to plan for scrollable table views with as many as 20,000 rows. Each row has a dynamic height based on several factors, such as the length and font size of its text and the width of its container.

So, given a container with width w, how should we lay out our elements?

As the developer, we’ve been handed some design mockups which we’ll use to write code that calculates the size and position of all our elements. Given a container with width w1, our layout might look like this:

The math and logic required to calculate the position of all these interface elements can potentially become very complex. It is this problem that AutoLayout is designed to solve. AutoLayout makes it easier to write and debug code that calculates the layout of a set of interface elements for a given container size.
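To make that concrete, here’s a minimal sketch of how AutoLayout might express such a layout – with hypothetical titleLabel and summaryLabel subviews, not the actual elements from these mockups:

// Inside a hypothetical UITableViewCell subclass: pin a title label and a
// multi-line summary label so the content view's height follows the text.
- (void)setUpConstraints {
    self.titleLabel.translatesAutoresizingMaskIntoConstraints = NO;
    self.summaryLabel.translatesAutoresizingMaskIntoConstraints = NO;
    NSDictionary *views = @{ @"title"   : self.titleLabel,
                             @"summary" : self.summaryLabel };
    // Horizontal margins for both labels.
    [self.contentView addConstraints:
     [NSLayoutConstraint constraintsWithVisualFormat:@"H:|-15-[title]-15-|"
                                             options:0 metrics:nil views:views]];
    [self.contentView addConstraints:
     [NSLayoutConstraint constraintsWithVisualFormat:@"H:|-15-[summary]-15-|"
                                             options:0 metrics:nil views:views]];
    // Vertical stacking: the bottom constraint lets the labels' intrinsic
    // content sizes drive the height of the content view.
    [self.contentView addConstraints:
     [NSLayoutConstraint constraintsWithVisualFormat:@"V:|-12-[title]-6-[summary]-12-|"
                                             options:0 metrics:nil views:views]];
}

Declaring the layout this way means the positions fall out automatically for any single container width. What it doesn’t give you is the subject of the rest of this post.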

It’s important to note that you can’t assume that the container width will always be equal to w1. Apps might have to handle at least two possible container widths: portrait and landscape. So what does our layout look like for a second width w2?

This layout is much different from the first one. AutoLayout makes it easier to calculate the correct positions of all these elements. But as the app developer, your problems are much bigger than just calculating a layout for a given container. You also have to deal with extrapolating that calculation across thousands of model objects and many possible containers. This problem is much more difficult and is beyond the scope of AutoLayout.

Cell Heights and Performance

Notice that the second layout above produces a different total height than the first. What are the implications now that your app has two layouts with two different heights?

When a table view is loading, it needs to know how many rows it will display and how tall each individual row will be. For designs that produce dynamic row heights (as the design above does), these metrics can be very expensive to calculate. You don’t want your app to pause for several seconds while calculating these heights. So what are your options?

One option is the estimated row height API that Apple introduced in iOS 7, but as I note below, it is still buggy. Pre-calculation is a better solution, if you can pull it off. It’s difficult to do correctly, and there are many factors to consider.
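Here’s a minimal sketch of the caching half of a pre-calculation scheme – my own illustration, not Unread’s actual code. The key insight is that the container width must be part of the cache key, because the same row has different heights at different widths:

#import <UIKit/UIKit.h>

// A lazily-created cache of computed row heights.
static NSCache *SJHeightCache(void) {
    static NSCache *cache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
    });
    return cache;
}

// Returns the row height for an article summary, computing and caching it
// on the first request for a given (article, width) pair.
static CGFloat SJRowHeightForSummary(NSString *articleID, NSString *summary, CGFloat width) {
    NSString *key = [NSString stringWithFormat:@"%@-%.0f", articleID, width];
    NSNumber *cached = [SJHeightCache() objectForKey:key];
    if (cached) {
        return (CGFloat)cached.doubleValue;
    }
    CGRect bounds = [summary boundingRectWithSize:CGSizeMake(width, CGFLOAT_MAX)
                                          options:NSStringDrawingUsesLineFragmentOrigin
                                       attributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:15.0] }
                                          context:nil];
    CGFloat height = ceil(CGRectGetHeight(bounds)) + 24.0; // text height plus vertical padding
    [SJHeightCache() setObject:@(height) forKey:key];
    return height;
}

Even with caching, the first pass through 20,000 rows still pays the full measurement cost, which is why the number of possible container widths matters so much.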

iPad Multi-tasking & Performance

In the example above, your app was already strained to achieve good performance with only two layouts of static widths. If iOS 8 allows iPad apps to enter a multi-tasking mode with dynamic widths, it would dramatically increase the difficulty of achieving good table view performance. Pre-calculation would be practically impossible.

Assuming the estimated row height API is still buggy and pre-calculation is impossible, the only other alternative would be for devs to start over from the beginning with a design plan that doesn’t allow elements with dynamic heights to also have dynamic widths. This would make good performance achievable, but it would defeat the purpose of a flexible app container. It is for this reason that I think if Apple adds iPad app multitasking, it will only be by scaling app containers without changing the underlying logical widths (768 or 1024 points for portrait or landscape respectively).

The Point

The point to remember is this: AutoLayout makes it easier to calculate a single layout for a single container, but it is irrelevant to the challenge of efficiently calculating a large number of layouts for multiple possible containers.

|  May 13 2014




Friday App Design Review – AnyList, Shared Grocery Lists

Every Friday I will post a detailed design review of an iOS app. If you’d like your app to be considered click here for more information. I am also available to consult privately on your projects.

This week’s Friday App Design Review is AnyList for iPhone, from the service of the same name. AnyList makes it easy to create grocery and shopping lists shared between you and other members of your household.

My wife and I have been using AnyList for the last week and it works as advertised with fast, reliable syncing. We really like how it automatically sorts new grocery items by category. For the most part, we’ve been very pleased. It’s probably going to be our go-to app for grocery shopping from now on. I do have some qualms about the design, however, which I’ll address in this post.

AnyList is a freemium app, so rather than spend half of this post documenting how the app works, you should just download it now and try it for yourself. There are lots of extras and features peppered throughout the app, like built-in recipe storage, but this review is going to focus on just the list screen. My comments are applicable to the app as a whole, so hopefully this narrow focus will help clarify my points.

TL;DR

  1. Avoid fuzzy implied borders.
  2. Push beyond a stock aesthetic.

I. Avoid Fuzzy Implied Borders

There are two kinds of borders in an iOS app: real and implied. A real border should be self-explanatory. An implied border is obvious in context, even though it isn’t represented by a concrete visual border.

Most iOS toolbar icons have implied borders.

Implied borders are accomplished through the visual rhythm of multiple elements, identical in size and proportion, spaced at regular intervals. Elements of different sizes or shapes, or with an irregular arrangement, result in fuzzy implied borders. In an iOS app, fuzzy borders should be avoided, especially when arranging tappable elements.

The current AnyList list screen.

AnyList’s list screens suffer from numerous fuzzy borders. Because section headers don’t span the width of the screen, and because sections with only one row don’t have any row separators, it is often difficult to tell where one tappable area ends and another begins.

Dark areas are the most visually confusing.

To sharpen these implied borders, the section headers would need to span the width of the screen:

By extending the section headers, the tappable areas of both the rows and the detail buttons become more obvious. Your eye would perceive the implied borders more easily:

While this is easier to use, it isn’t visually interesting. Perhaps there is a way to sharpen the fuzzy implied borders while also adding tasteful visual interest.

II. Push Beyond the Stock Aesthetic

AnyList adheres almost exclusively to the stock visual language of iOS 7. Buttons and icons are thin and wispy. A predominantly solid white background color is interrupted only by occasional horizontal borders. Unadorned text abounds, except where a single accent color is in use. With few exceptions, every interface element looks the way it would if you had just dragged it from the new object panel in Interface Builder.

This stock look doesn’t do AnyList any favors. As a subscription service, AnyList aspires to build a long-term relationship with its customers. Just like a dating relationship, this story will begin with visual attraction. AnyList needs a strong personality to draw in new customers and to help create an emotional bond with them as they grow familiar with the service. The current stock aesthetic feels too utilitarian.

The outliers in the current aesthetic are the AnyList logo and word mark:

The rounded, perky logo is fantastic. I love the movement it suggests. It looks almost anthropomorphic, like the Pixar lamp bouncing on a ball. What would AnyList for iPhone look like if the character of the logo were applied throughout the app? I’ve made a mockup of one possible approach:

My rough sketch of an alternate design.

Here’s the rationale for my suggested changes:

Get rid of Helvetica.

I’ve changed Helvetica to Creighton Pro. Helvetica is not only stock, it’s a poor choice for body text. AnyList is a predominantly text-based app. The font choice has the greatest influence on the look and feel of the app. Creighton Pro is just one of many possible alternatives. It echoes the rounded corners of the AnyList logo and word mark. It’s readable, stout, and casual.

Make the toolbar icons meaty.

While I’d argue that iOS 7’s wispy icons are terrible in general, in AnyList’s case they’re also not brand-appropriate. In this mockup I’ve made the toolbar iconography meatier and more rounded, like the logo.

Improve the section headers and row separators.

The pointed arrow section headers don’t fit within the aesthetic suggested by the AnyList logo. I replaced them with full-width roundrects, which echo the similar shapes found in the AnyList logo. The repetition of these shapes makes the logo feel inevitable in hindsight. It creates a strong visual association between the layout of the app and the AnyList brand. This look also has the extra benefit of solving the fuzzy implied border problem described above. I’ve added negative space to the right margin. Horizontal row separators no longer touch either edge of the screen. This subtle choice is suggested by the AnyList logo itself. It also helps visually separate the scrollable content from the navigation bar and toolbar.

|  May 09 2014




