I Hate Flying – A Summary of Our Day:

Nota Bene – If you are one of the many people expecting to meet with me in person or over Skype tomorrow, my apologies for probably missing our appointment.

|  1 July 2014

OvershareKit to be Maintenance-Only for iOS 8

Last fall, after I had finished the model controller layers of Unread, I felt a pit in my stomach. The next big task in front of me was sharing. I knew I wanted Unread to have lots of sharing options, but I dreaded writing them. I had already written tons of them in the course of making Riposte for App.net. Great sharing features are non-trivial to write. It’s even harder to make them reusable (Riposte’s were not). There are many concerns to think about:

There was no way to tackle all of these issues within the limitations of UIActivityViewController on iOS 7, which is why I decided to make OvershareKit. You can read the reasons in detail on OvershareKit’s GitHub page, but here are the highlights:

Those are just a few of the reasons why I decided to build something like OvershareKit for Unread. Justin Williams was in a similar position with his projects around the same time, so we decided to integrate his code for managing system accounts into the broader framework that I had conceived.

Bear in mind that at the time OvershareKit was being developed, it was still unclear whether Apple would ever provide better inter-app communication and sharing APIs. After seven major OS releases, it was easy to believe that Apple frankly didn’t care. I was frustrated by their lack of concern, and dissatisfied with the lack of an open-source framework to bridge the gap. It was my hope that OvershareKit would become more than just a pet project between me and Justin.

But now that the iOS 8 developer preview is here, with its powerful Extensions APIs, what should be done about OvershareKit? Extensions are Apple’s answer to the problem that OvershareKit was created to solve. It seems to me that it’s better for all involved – both developers and users – for any app that’s using OvershareKit to migrate to Extensions and the UIActivity frameworks.

OvershareKit is now an unnecessary middleman between users and services like Pinboard and Instapaper. Almost any service that users want to access via OvershareKit already has great first- or third-party apps on the App Store. The burden of responsibility should belong to them to provide great sharing features via iOS 8 extensions.

I think this should apply to all future OvershareKit development, too. For example, many of Unread’s customers ask for Evernote support. Evernote has a rich and complicated API. It doesn’t make sense for outside developers like me to spend limited resources building support for Evernote. I want Unread to have Evernote, but the costs outweigh the benefits. It’s a sacrifice I was willing to make back when Apple provided no alternative. But now that Extensions are coming, the math has changed.

I don’t plan on adding any new features to OvershareKit. I will make sure that all of its existing features continue to operate bug free on iOS 7 and iOS 8. It may take several months (or longer) for extensions to be developed for all the services that OvershareKit currently supports. In the interim, I want to be sure that apps like Unread can continue to rely on OvershareKit.

If you have an app or service you can’t live without, I urge you to write to its developers and ask them to consider adding support for UIActivity extensions in iOS 8.

|  27 June 2014

Preferred Orientation

I’m pretty sure the following generalization is true, with few exceptions: every iPad app has been designed for a preferred orientation. I think this is true whether the designer was conscious of the preference or not.

iOS lock screen, awkward in landscape.

I’m not saying that the non-preferred orientations are poorly designed, nor am I saying that they’re not useful. I mean only that every iPad app has an orientation in which it looks and works best – the way we say of a person’s appearance that he or she has “a good side.”

Here are some examples, off the top of my head:



Notable Exceptions

|  26 June 2014

Happy Birthday, Henry

The boy turns one year old today.

Happy Birthday, Duders.

|  26 June 2014

Nitpicking iOS Notification Banners

We’re all familiar by now with the iOS notification banners that appear at the top of your screen. These slide into view from offscreen in a top-down direction.

In general these are great. They’re certainly a big improvement over the full-screen alerts from iOS 4 and earlier. But the banners can get annoying when they slide over app content you need to see, especially navigation bars.

The nuclear options – the ones that turn off banners altogether – are too extreme. Luckily iOS has a simpler way. You can dismiss a banner early by swiping up, from the bottom of a banner to the top edge of your screen.

Here’s my nitpick though: why is this gesture only allowed in a vertical direction? The target region is so small that in practice I often end up triggering a tap, i.e. the exact opposite of what I intended to do.

Perhaps it’s for logical consistency. The banner appears top-to-bottom, so dismissing it occurs in reverse. But this isn’t an important enough spatial rule in my opinion.

You should be able to swipe horizontally to flick a notification off screen. We did it this way in Riposte and it was awesome.

Can you think of a good reason why horizontal dismissals shouldn’t be allowed?

|  24 June 2014

David Rönnqvist on CALayer Animations

David Rönnqvist has a new post today, "Multiple Animations", on the interplay between competing explicit and implicit CALayer animations.

David’s site is gorgeous, but it’s also a textbook case for why we need RSS. David publishes new posts infrequently, yet none of them should be missed. Subscribe to his site here.

|  23 June 2014

Healthy Skepticism – My Critique of HealthKit as Both iOS Dev and Registered Nurse

Of the many new APIs announced at WWDC this summer, HealthKit has been particularly thought-provoking for me. At the risk of sounding like that guy, I think I have a somewhat privileged perspective of HealthKit. There can’t be that many former registered nurses who’ve switched to iOS app development and tried to start a healthcare data company.

I’ve devoted the better part of the last four years to understanding the healthcare industry, both its current problems and its possible futures. Along the way I’ve learned many things – some hopeful, some downright depressing. I ought to describe how HealthKit looks from my vantage point.

Before jumping into HealthKit, let’s take a step back and look at the past and present state of healthcare information – what it is, where it’s stored, and how it’s transmitted and used. I’ll limit my description to the US since that is what I’m most familiar with.

Stacks of Paper

When I was a nurse, I worked in critical care. A typical patient at my hospital was brought in via ambulance or helicopter from an outlying urgent care facility. Though I worked at a hospital in Nashville, it was not uncommon for us to admit patients transferred to us from hospitals as far away as Kentucky. A transfer patient would be wheeled out of an ambulance and onto my ward by EMTs hired to ferry patients between hospitals. Tucked into the corner of the mattress, I’d find several fat manila envelopes filled with stacks of paper printouts from the outlying facility’s electronic health record system (EHR). There were so many pages it wasn’t possible to use them as a working reference. Instead, I primarily relied on the verbal report from the EMT to learn a patient’s past and present condition. It was only later, after our doctors had a chance to review the reams of paper printouts, that the full picture would begin to be revealed.

Though the transfer patient may have been in the outlying facility for days or weeks, as far as our EHR was concerned, today was Day Zero. Discontinuity between caregivers’ records increases the likelihood of mistakes. Doctors and nurses go through a great deal of training in order to verbally communicate patient data as efficiently and safely as possible. This training helps us offset the risks of fractured medical records. Those stacks of paper became a supplementary reference, secondary to verbal reports. It would take a day or two before our own EHR would be populated with enough of that patient’s data to become a primary reference.

From Paper to EHRs

Until relatively recently, the vast majority of medical records in the US were recorded on paper. From routine doctor visits to lengthy stays in critical care, every piece of data – lab results, medication orders, progress notes, etc. – was written or typed on paper and stored in massive warehouses. It wasn’t until the 1990s that electronic health records (EHRs) started to gain widespread traction. Doctors and hospitals were under no legal obligation to use EHRs, so the only providers to use them did so for organizational efficiency.

There have been numerous studies of the impact of EHRs on patient care, with mostly positive results. The consensus is that EHRs improve institutional logistics (billing accuracy, resource management, etc.) and help decrease medical errors, if sometimes at the expense of time spent at the bedside. They also contain latent possibilities for medical research and population health management – but only if most doctors and hospitals go fully paperless.

Though there are hundreds of EHR vendors, a mere handful of major players have dominated the market – companies like Epic, Cerner, Allscripts, and Meditech. Every vendor has its own unique software stack, from data storage to caregiver applications. There is no common database linking all these software products together. Every institution’s medical records are trapped within proprietary silos. Any interoperability with other EHRs has been made possible only on an ad hoc basis, at the whimsical discretion of EHR vendors and their customers. In practice, interoperability is virtually nonexistent. Patients are transferred between institutions with a stack of paper printouts, or nothing at all.

There are two main reasons why EHR interoperability hasn’t happened: it would be bad for business, and technical standards are lacking.

Interoperability Would Be Bad for Business

It’s disappointing but unsurprising that EHR vendors would keep medical data trapped inside their silos. If medical data were distributed via a shared database, their products would be reduced to either dumb pipes or thin client apps. Being a dumb pipe is bad for business. Selling thin clients isn’t a great option, either. EHR user interfaces are notorious for their terrible design. As a former registered nurse, I have plenty of interface design horror stories I could share with you. The reason these apps are so poorly designed is simple: they’re enterprise software. The customer is the hospital administrator, not the bedside nurse. The real money is in long-term, multi-million-dollar contracts with institutions who aren’t anyone else’s customers.

Interoperability isn’t in the interests of most healthcare providers, either. As a healthcare provider, you want the other institution to make it easy for you to see their data, so you can make your facility more efficient. But you have a neutral or negative interest in providing the same openness in return. Why would you invest in infrastructure that makes it easy for your patients to go somewhere else? Business models or legal requirements – or both – would have to change in order for EHR vendors and healthcare providers to be willing participants in a world of shared medical information.1

Interoperability Would Be Technically Challenging

There are technical obstacles to interoperability, too. Medical information is incredibly complex to model. It’s edge cases from top to bottom. Even something as simple as defining the possible values for a person’s gender raises difficult questions about biological versus preferred sex. Out of necessity, a number of protocols have been developed over the years that can encapsulate medical data in transit between subsystems within a given institution.

The most commonly used protocol is called HL7 – a gargantuan protocol with many variants. In the real world, no two institutions use the exact same implementation of HL7. Most systems in the US use one of the 2.x versions, which are pipe-delimited, prone to error, and not human-readable. Here’s a typical HL7 message for a lab result:

MSH|^~\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4
PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^
OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730|||||||||555-55-5555^PRIMARY^PATRICIA P^^^^MD^^|||||||||F||||||444-44-4444^HIPPOCRATES^HOWARD H^^^^MD
OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105|H|||F

Yeah. Right. That’s a far cry from the tidy, readable JSON response from your garden-variety social media API.

There is a newer 3.x series of HL7 that is based on XML, but few EHRs in the US are actually using it. Thus the 2.x sample above is the current state-of-the-art of medical data exchange. Since HL7 2.x is pipe-delimited, it is easy for implementers to insert data between the wrong pipes, breaking the already weak links between EHR subsystems. This happens so frequently that an entire industry exists just to solve this problem.
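To make that fragility concrete, here is a minimal Python sketch of how a pipe-delimited OBX segment gets parsed. This is purely positional indexing for illustration – nothing like real HL7 tooling – with the segment string copied from the sample above:

```python
# Illustration only: HL7 2.x field meaning is purely positional.
# In an OBX segment, field 5 is the observation value, field 6 the
# units, field 7 the reference range, field 8 the abnormal flag.

obx = ("OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN"
       "||^182|mg/dl|70_105|H|||F")

fields = obx.split("|")
value = fields[5].lstrip("^")   # "^182" -> "182"
units = fields[6]               # "mg/dl"
reference_range = fields[7]     # "70_105"
flag = fields[8]                # "H" means abnormally high

print(value, units, flag)       # 182 mg/dl H

# The failure mode: one stray pipe and every downstream field
# silently shifts into the wrong slot.
broken = obx.replace("SN|", "SN||", 1)
bad_fields = broken.split("|")
print(bad_fields[6])            # the units slot now holds "^182", not "mg/dl"
```

Nothing about the message itself tells a receiving system that the fields have shifted; the error only surfaces when a human notices glucose being reported in nonsense units.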

The deeper problem with HL7, in my opinion, is that it isn’t designed for persistence. It’s a means to encode ephemeral messages. The actual work of when and how to send messages, and where to store their contents, is left up to each EHR vendor. Linking together EHRs from two different vendors would be an enormous engineering task. A shared repository of private medical records would need something much more readable and resilient than HL7. It would need to look more like the JSON messages used by modern, RESTful web APIs.
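For contrast, here is a sketch of how that same glucose result might look as JSON. The field names here are invented for illustration – no real standard uses them – but the point stands regardless of naming: keyed fields, unlike positional pipes, survive reordering and extension without silently corrupting meaning.

```python
import json

# Hypothetical JSON rendering of the OBX glucose result above.
# Keys are named for readability; a receiving system can ignore
# unknown keys instead of miscounting pipes.
result = {
    "observation": {
        "code": "1554-5",
        "name": "GLUCOSE",
        "value": 182,
        "units": "mg/dl",
        "reference_range": {"low": 70, "high": 105},
        "abnormal_flag": "high",
    }
}

print(json.dumps(result, indent=2))
```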

HITECH and Meaningful Use

Earlier in this post I wrote that EHRs didn’t begin to gain widespread traction until the 1990s. This was an overstatement of the facts. The reality of EHR usage is that – even as late as 2009 – fifty percent of US hospitals were only halfway electronic. Most just converted the easy stuff to electronic records, like lab results. Less than one percent (!) of them had completely moved beyond paper records. Many still had no electronic records at all.

The ARRA legislation passed by the US Congress in 2009 included a landmark set of reforms aimed to drag US medical institutions kicking and screaming into the 21st Century – or at least the 20th. Not to be confused with the “Obamacare” reforms, the HITECH Act obligated US healthcare providers to demonstrate “meaningful use” of electronic health records. Meaningful Use, as the program has come to be called, ties Medicare reimbursements to EHR usage. A series of requirements, broken up into stages, will be rolled out over the next decade. Each successive stage unveils more stringent rules. Institutions that meet or exceed the current criteria in a timely fashion will earn bonuses on their Medicare reimbursements. Institutions that don’t will face penalties. Medicare reimbursements are bread-and-butter for healthcare providers, so there is strong motivation to keep up with the demands made by Meaningful Use.

The Meaningful Use criteria are still being defined, but the ones that have already been put into play are praiseworthy. Institutions must be able to electronically transmit a Continuity of Care Document (CCD) upon demand. A CCD is a brief summary of a patient’s past and present medical conditions. This requirement is aimed at solving the “stacks of paper” problem above. The CCD is a glorified PDF, but it’s the next best thing to having truly interoperable EHRs. Other Meaningful Use requirements are aimed at improving patient safety by requiring barcode scanning before administering drugs (BCMA), or requiring doctors to use specially-designed software to write orders instead of pen and paper (CPOE).

The most intriguing part of Meaningful Use is that it places the burden of proof on medical care providers, not EHR vendors. It’s up to each institution to select an EHR that supports Meaningful Use criteria. EHR vendors are in a mad rush to update all their products to meet the minimum requirements in time.

It is not yet known if Meaningful Use will ever require true interoperability between EHRs. If that happens, I would be extremely pleased, as a software developer, a former nurse, and a patient. With congressional lobbying being what it is in the US, I doubt EHR vendors or healthcare providers will ever let true interoperability become a legal obligation.

The False Promise of HealthKit

To a layperson, the introduction of HealthKit at WWDC looks like Apple might hope to provide the foundation for a future of shared medical data. The example use cases looked pretty cool at a glance. According to Apple, your doctor could conceivably have easy access to vital signs obtained by a Withings blood pressure cuff connected to your iPhone. The list of HealthKit partners, like the Mayo Clinic and Epic Systems, was particularly impressive. But I don’t think either HealthKit or Apple is in a strategic position to escape the forces that keep our medical data trapped in the status quo.

The first problem with HealthKit is that it can only model a tiny fraction of the spectrum of medical data. There is a very long list of things it can’t do: track medication doses, doctor’s orders, procedural notes, etc. But let’s assume for the sake of argument that HealthKit eventually ships with model classes for every conceivable type of medical data. It still wouldn’t be able to bring about EHR interoperability.

As I discussed above, interoperability is technically challenging no matter who attempts it. Apple clearly has the capacity to tackle the technical issues if it really wanted to. The central problem for interoperability is one of motivation. Who has the power to compel all the hospitals and EHR vendors in the US to open up read/write access to their medical records?

In my estimation, there are only two entities capable of doing so. The first and obvious one is the government. If Meaningful Use ever mandates one-hundred-percent interoperability, then the industry would have no choice but to comply.

The second entity would be a for-profit company that offers healthcare providers a mutually beneficial partnership. This company would compel hospitals to allow it access, but with a carrot instead of a stick. If there were a way that hospitals could benefit from partnering with an open EHR framework, then they might happily allow their siloed data to flow freely between competing institutions.

Unless I am misjudging Apple’s intentions, HealthKit looks like it’s another way to keep high-end customers loyal to the iPhone and other Apple products. As such, it’s against Apple’s interests to make HealthKit available on competing platforms like Android or Windows. But for stored medical data to be of any significant use to healthcare providers, it can’t be limited to just A) patients who own iPhones and use HealthKit apps and B) providers with EHRs configured to access those apps. It’s unreasonable to expect that either healthcare providers or EHR vendors would devote limited engineering resources for the sake of a handful of patients, especially when the laundry list of pending Meaningful Use requirements is still so long.2

In practice, I expect HealthKit will have little or no impact on professional healthcare delivery.3 I think the experimental partnerships between Apple and the companies listed during the WWDC Keynote will remain exactly that: experimental. It will take a lot more than HealthKit to make a dent in the universe of healthcare.

  1. Clayton Christensen’s book on the business of healthcare offers a fascinating exploration of these kinds of problems. 

  2. This logic is the same for any hypothetical Apple wearable device, too. 

  3. The personal fitness industry is another story, however. HealthKit is an excellent, well, fit there. 

|  19 June 2014

Thanking My Dad for Caring About “Getting It Right”

It’s Father’s Day. I’m relinking to this post about my dad’s lesson on always doing your best work. My dad cared enough about “getting it right” to make creative work an issue of character, not just a hobby. Thanks, Dad. I hope to teach this to my son, too.

|  15 June 2014

Maglus Stylus Review

Full Disclosure: Applydea gave me review samples of the black Maglus and interchangeable tips to try out for this article. Even so, everything I write below is what I really think.

The best iPad stylus is also the one you’ve probably never heard of: the Maglus by Applydea. There’s a lot to like about the Maglus. Its sturdy aluminum body was – to my knowledge – the first to be shaped like a carpenter’s pencil. It has strong magnets hidden under the rubber pads, which make it easy to snap onto a Smart Cover or the side of a cabinet. Most important of all, it has the best tip of any stylus I’ve used. The silicone material registers touches faster than any other stylus out there. Its nearly-spherical shape retains its form under a wide range of pressures, which helps with accuracy as well as feel.

Made with Paper and Maglus.

The Maglus’ team reached out to me to see if I’d be interested in trying out their newer anodized black model. Having been happy with the standard aluminum finish, I expected not to like the black one as much, but I was wrong. The black finish looks really nice in person. If you’re a fan of darker iPhones and iPads, you’ll appreciate it. For reasons I can’t quite express, the darker color feels more appropriate to a drawing tool than the aluminum finish, at least to me.

New anodized black model, with extras.

Applydea also included an interchangeable microfiber tip for me to try. It looks like a tiny version of the wire mesh that encloses a microphone like the Yeti from Blue, but feels like smooth cloth.

Alternate microfiber tips.

The microfiber outer layer is wrapped tightly around some kind of dense material. I was expecting it to feel spongy, but instead the tip feels stiffer than the silicone version. More force is required to get it to register a touch, but there is less overall friction between the tip and the iPad’s display. I still prefer the original silicone tip. I tend to write and draw with light pressure, so the microfiber tip posed problems for me. If you have a heavier hand than I do, you might prefer the microfiber tip.

If you’re curious about how the Maglus compares to the Pencil by FiftyThree, I wrote a comparison review last year. Everything I wrote then still applies today. The Maglus is without a doubt the best all-around stylus you can buy.

|  12 June 2014

Unread for iPad

Unread for iPad is available on the App Store today. It’s a brand new app with a clean, distraction-free reading experience. It has all the sharing features you’ve come to expect from the iPhone version, as well as the full set of syncing services: FeedWrangler, Feedly, NewsBlur, Feedbin, and Fever.

I’m proud of the way this app came together. Designing for the iPad is especially difficult compared to the iPhone. The iPad presents a challenging mixture of established interface patterns, awkward display dimensions, and a comparatively infinite canvas of pixels. Unread for iPad balances all these constraints against an overarching goal of mental and physical comfort.

You can navigate anywhere in the app from the edges of the screen. There’s no need to constantly reposition your hands. Just sit back and read your favorite online writers wherever you’re most comfortable.

Unread for iPad is $4.99 (USD) on the App Store. Also, in case you missed it: Version 1.3 of Unread for iPhone was released to the App Store last week. It has lots of bug fixes and performance improvements, especially for older iOS devices. Two new hidden themes, too.

|  9 June 2014

Smartphones, the Internet of Things, and the Death of Software

Inventions that change our lives are magical. They pry us free from physical laws. The printing press enabled the thoughts of a distant writer to multiply, spread, and live forever. The telephone stretched casual conversations – conversations that would have barely crossed a dining room table – until they spanned the globe. Remember what Steve Jobs called the personal computer? A bicycle for your mind.

For the next big thing to be the Next Big Thing, it must be magical. It must free us from some constraint that seemed immovable the day before. In what ways are we still bound to a technological or mechanical necessity?

The Internet in Your Pocket

What is it about the smartphone that has made it so influential? At a tangible level, the smartphone is a combination of technologies: a touch screen, user-friendly software, mobile chips, compact batteries. But at a more abstract level, the smartphone is The Internet in Your Pocket. Of all its contributions, I think it’s the always-on, always-connected, and always-with-you nature of the smartphone that has been its defining trait. The smartphone connects us to the teeming whole of human ideas, at all times and everywhere.

The untethered freedom of the Internet in Your Pocket has had both quantitative and qualitative effects on how we use the Internet. We spend more time on it than ever, and we also spend that time in new ways: messaging, social media, sharing photos, watching TV and movies, etc. Almost every app of consequence on my iPhone is backed by some kind of Internet-based API. My iPhone is pretty boring when it’s in Airplane Mode.

The smartphone transformed the Internet from a thing we use in one place into a thing we use anyplace. The difference between the corner of your kitchen and everywhere is hard to overstate. It’s for this reason that I respond to some people’s exuberance about the Internet of Things with a smirk. The Internet in Your Pocket is way more interesting than the Internet in Your Toaster. The latter is an incremental change that builds upon what the smartphone has begun. I don’t expect web-connected home appliances to change the lives of the people who buy them, certainly not at the magnitude that the smartphone has changed them.

The Death of Software

Rather than an Internet of Things, I like to imagine a truly intelligent, ubiquitous artificial intelligence – one that would change our lives to the same degree that the smartphone has.

Through the present day, our concept of software has been a more-or-less static arrangement of logic and design. The user has a goal (manage her tasks, be entertained, etc.). The app is built to help her meet that goal. But the user has to squeeze her life into a shape that conforms to the software. If she’s lucky, there’s at least one app that fits her well enough to get the job done. But even the best piece of software still has rough edges. It’s indirect. It has a learning curve. It’s unaware of her context, and unwilling or unable to act in concert with other apps the user needs.

A truly intelligent artificial entity, as I envision it, would turn this situation upside down. Instead of the user conforming to the software, the software would conform to the user – a deceptively simple change that would have vast implications.

Software concepts that have been with us since the beginning of the personal computer would no longer be relevant. For example, apps as discrete experiences would be obsolete. There would no longer be any need for a web browser, a messaging app, a todo list app, etc. There would only be one app: the interaction between the user and the AI. Everything else would be built on an ad-hoc basis, in real-time, then thrown away:

"What do I have to do today?"

The AI constructs a todo list, artfully typeset and formatted to complement the tastes of the user.

"My kid won’t stop crying. Can you make him a game?"

The AI constructs a simple game pitting the child’s dog as a hero versus his villainous school teachers. The levels progress according to patterns established by well-designed games of yesteryear.

"Where should we eat?"

The AI presents what amounts to a Yelp-like interface, built from scratch using everything it knows about your family, what you eat in general, food allergies, what food you haven’t had lately, how long it takes to arrive and order food, etc. It’s not a startup’s MVP. It’s just for you.

And these are just the effects that such an AI might have on a personal electronic device. One can easily imagine the huge changes that such an entity could bring to medical care documentation, scientific research, and more. For every stereotypical bit of AI science fiction, there are dozens of life-changing applications that would be too boring to put in a film, even if they’d make a fortune.

Software, instead of feeling like a sea of half-baked ideas with a few rare gems, would feel like the bicycle of the mind you’ve always wanted but never thought possible.

I like to imagine this kind of AI growing out of an industry like video games. It’s not hard to imagine a time when gaming hardware is so powerful that there aren’t enough artists to create objects at the full level of detail that the hardware is able to render. To keep pushing the level of realism, a team of game developers would undertake the task of creating an AI with intuition and taste. Level designers would interact with the AI in loose, human terms:

"Make it gloomier."

"Put a neighborhood here with two story houses. Wait, three stories. These four need flood damage."

"The guy who lives here reads comics and he’s been on vacation for a few months."

The AI level designer would respond to comments like these by assembling realistic worlds and objects – not procedurally generated stuff, which would look intentionally random, but realistically generated stuff: a tarp covering a leaky roof; dog’s nose prints on a storm door; soggy U-Haul boxes; a stack of mail. The game developers will think they’ve built a design tool, but what they’ll actually have built is the death of software as we know it.

The question that makes me uncomfortable with this idea: if this were to happen, what would happen to software developers?

|  27 May 2014

Friday App Design Review – Castro for iPhone

Every Friday I will post a detailed design review of an iOS app. If you’d like your app to be considered, click here for more information. I am also available to consult privately on your projects.

This week’s Friday App Design Review is Castro, the podcast app from Supertop. There’s a lot to like about Castro. I like how well Castro balances the constraints of iOS 7, the need for visual affordances, and Supertop’s creative impulse for originality. I especially like how thoughtfully it uses borders.

As I have said many times, few things are as important in iOS app design as borders. Borders aren’t necessarily literal borders drawn around an element. A border is any area where two or more edges meet. A border can be literal, as in the case of a one-pixel horizontal score between rows. A border can also be implied, like the invisible borders around the square margins of toolbar icon buttons.

iOS 7’s confusing visual language has made it harder for third-party apps to handle borders. There are mixed messages suggested by Apple’s stock apps. iOS 7 insists on text-only buttons, yet not for certain glaring cases. It has a general tendency toward unclear borders between logical sections, though it sometimes uses them with abandon. There isn’t yet a clear pattern for us to imitate. In the absence of best practices, each app seems to strike out into its own unique territory, often with awkward results.

Castro’s particular mixture of literal and implied borders is fantastic. It’s almost always easy to know where one tappable area ends and the next one begins. Literal borders break up the screen in logical ways, reinforcing the navigation hierarchy. Most impressively, Castro manages to do all this within the aesthetic constraints of iOS 7. Let’s look at some of the ways Castro uses borders, and explore ways to make them even better.

Episodes List

One of the biggest risks in Castro is the absence of literal borders between rows of episodes. Without careful planning, one row could easily blur into the next. Castro uses several techniques to solve this problem.

Episodes List

The bold episode titles create a strong implied border at the top of each row.

Implied top borders

The alternating rhythm of the large bold titles and small light body text helps break up the content, too.

The wide left margins are broken up only by podcast artwork, like tabs peeking out of the top of a Rolodex. These thumbnails accentuate the rhythm created by the episode titles.

Artwork folder tab effect.

Notice how the episode summaries are allowed to run into four lines. Your eyes subconsciously parse a summary paragraph as if it’s a big rectangle.

Large summary paragraphs

This suggests a strong implied border along the bottom of the row. The large paragraph also counterbalances the concentrated heaviness of the artwork on the far left. The weight of visual elements looks balanced across the width of the row. In a list like this, each cell should feel like an iPad with its center of gravity squarely in the middle.

Individually these elements might not be enough to create strong implied borders. But together the implied borders are unmistakable. The user never doubts where she can tap in order to select an episode. The strength of the implied borders has another benefit: it makes it possible for section headers to have literal bottom borders without blurring the separation of adjacent rows.

Section headers group by date.

Podcasts List

The podcasts list employs most of the same techniques as the episodes list. But notice how the absence of long summary paragraphs diminishes the strength of the implied borders.

Podcasts List

Each row also feels lopsidedly heavy on the left. It’s as if the artwork is a bowling ball near the edge of a plank.

Both the episodes list and podcasts list have variable row heights. Variable row heights can obscure the visual rhythm of implied borders. This effect is more noticeable in the podcasts list because the average row height is shorter. I would suggest adding an additional line or two of metadata to each row, perhaps the date of the latest episode. This would increase the average row height, thus strengthening the rhythm of the implied borders. It would also distribute visual weight more evenly across the row.

Navigation Bar Border

Castro’s navigation bar has a literal border separating it from the main content. It’s bolder than is typical on iOS 7, which is laudable. But I think there’s room for improvement.

Here’s a detail view of the navigation bar’s bottom border:

It’s an opaque grey color, most likely:

[UIColor colorWithWhite:0.65 alpha:1.0]

When viewed at a natural distance, it looks like a thin dark line between two white areas. But there’s a problem whenever dark content is scrolled underneath the border. Against the dark content, the border looks like a light gray color. In the detail view above, you can see this in the portion of the border that overlaps the 99% Invisible artwork. At a natural viewing distance, the border loses its crispness. An alternative that works well against any kind of content would be to use a translucent black color:

[UIColor colorWithWhite:0.0 alpha:0.33]

I would use this color and have the border overlap the scrollable content. Here’s a mocked up detail view with this alternate color:

At a natural viewing distance, this border would look crisp against any kind of content.
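A minimal sketch of that arrangement, assuming the border view can be inserted above the scroll view in the same superview (the `navigationBar` and `scrollView` names here are illustrative, not Castro’s actual code):

```objc
// A one-pixel translucent-black hairline that overlaps the scrolled
// content just below the navigation bar. Because it darkens whatever is
// beneath it, it stays crisp over both light and dark content.
UIView *borderView = [[UIView alloc] init];
borderView.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.33];
borderView.userInteractionEnabled = NO;

CGFloat hairline = 1.0 / [UIScreen mainScreen].scale; // one physical pixel
borderView.frame = CGRectMake(0,
                              CGRectGetMaxY(navigationBar.frame),
                              CGRectGetWidth(navigationBar.frame),
                              hairline);
[navigationBar.superview insertSubview:borderView aboveSubview:scrollView];
```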

Playback Toolbar Border

The playback toolbar also has a strong border. The toolbar’s background is solid black, which would otherwise disappear against the predominantly dark episode content during playback:

The toolbar has a border which, like the navigation bar’s, is an opaque gray:

[UIColor colorWithWhite:0.3 alpha:1.0]

While this border looks okay against the dark episode content, it doesn’t look crisp when the toolbar overlaps the predominant white of the episodes list:


At a natural viewing distance, this grey border looks more like misaligned pixels than a border. The toolbar would look better if the black extended all the way to the edge:


But wouldn’t this undermine the purpose of the grey border when viewing the episode details? Yes, but there’s another way to draw the border which would look crisp in both contexts. First, here’s what the existing border looks like when scrolling between the episodes list and the episode details:

Instead of the opaque grey color, I suggest using a translucent white color:

[UIColor colorWithWhite:1.0 alpha:0.12]

Using this color, I’d extend the border so it overlaps the content above the toolbar. This would both accentuate the crisp dark edge of the toolbar when set against white content and form a strong border when set against dark content.

This has the added benefit of letting the color of the episode details seep into the border, which is in keeping with the aesthetics of the rest of the details screen.
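A sketch of how that could be implemented, assuming the toolbar and the content view share a superview (the `toolbar` and `contentView` names are hypothetical):

```objc
// A one-pixel translucent-white hairline overlapping the content just
// above the toolbar. Over white content it reads as the toolbar's crisp
// dark edge; over dark content it reads as a light border, letting the
// color beneath seep through.
UIView *topBorder = [[UIView alloc] init];
topBorder.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.12];
topBorder.userInteractionEnabled = NO;

CGFloat hairline = 1.0 / [UIScreen mainScreen].scale;
topBorder.frame = CGRectMake(0,
                             CGRectGetMinY(toolbar.frame) - hairline,
                             CGRectGetWidth(toolbar.frame),
                             hairline);
[toolbar.superview insertSubview:topBorder aboveSubview:contentView];
```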

|  24 May 2014

Seeking Advice for a Right-to-Left Language Bug in Unread

This is cross-posted from this Stack Overflow question. If you know the answer I’d appreciate your help.

In Unread, I’m using the NSAttributedString UIKit Additions to draw attributed strings for article summaries in a UIView subclass. The problem I have is that despite using a value of NSWritingDirectionNatural for the baseWritingDirection property of my paragraph style, text always defaults to left-to-right.

Here’s how I form the attributed string (simplified example):

NSString *arabic = @"العاصمة الليبية لتأمينها تنفيذا لقرار المؤتمر الوطني العام. يأتي ذلك بعدما أعلن اللواء الليبي المتقاعد خليفة حفتر أنه طلب من المجلس الأعلى للقض الدولة حتى الانتخابات النيابية القادمة";

NSMutableParagraphStyle *paragraph = [[NSMutableParagraphStyle alloc] init];
paragraph.baseWritingDirection = NSWritingDirectionNatural;
paragraph.lineBreakMode = NSLineBreakByWordWrapping;

NSMutableDictionary *attributes = [[NSMutableDictionary alloc] init];
attributes[NSParagraphStyleAttributeName] = paragraph;

NSAttributedString *string = [[NSAttributedString alloc] initWithString:arabic
                                                             attributes:attributes];

And here’s how I draw the text:

- (void)drawRect:(CGRect)rect {
    [self.attributedText drawWithRect:rect
                              options:NSStringDrawingUsesLineFragmentOrigin
                              context:nil];
}

And yet it still flows from left to right:

What am I missing?

UPDATE: B.J. Titus has answered my SO post correctly. It turns out that NSWritingDirectionNatural, despite what its name suggests, doesn’t actually introspect the string to determine an appropriate writing direction. It just uses the base writing direction of the current system language. It will even apply a right-to-left margin to left-to-right runs of text. The workaround is to manually determine the appropriate writing direction and set an explicit LTR or RTL direction.
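A sketch of that workaround. The language-guessing heuristic here is my own (sampling the first couple hundred characters); the answer on Stack Overflow may differ in detail:

```objc
// Guess the dominant language of the string, then set an explicit
// writing direction instead of relying on NSWritingDirectionNatural.
NSString *sample = [text substringToIndex:MIN(text.length, (NSUInteger)200)];
NSString *language = CFBridgingRelease(
    CFStringTokenizerCopyBestStringLanguage((__bridge CFStringRef)sample,
                                            CFRangeMake(0, sample.length)));

NSLocaleLanguageDirection direction =
    [NSLocale characterDirectionForLanguage:language];

paragraph.baseWritingDirection =
    (direction == NSLocaleLanguageDirectionRightToLeft)
        ? NSWritingDirectionRightToLeft
        : NSWritingDirectionLeftToRight;
```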

|  22 May 2014

My Reasonable iPhone 6 Prediction

Since a larger iPhone is all but a given at this point, the interesting question is how Apple will do it. There are several directions Apple could take. Before I delve into speculation, let’s rally around some terms:

Logical size: the dimensions of the screen in points, the units that layout code deals in (320x568 for the iPhone 5s).

Scale: the number of pixels per point in each dimension. Retina iPhones render at @2x, i.e. two pixels per point.

PPI: pixels per inch, the physical pixel density of the display panel. Every retina iPhone to date is 326 PPI.

Now for some fun speculation.

@3x Scale, Same Logical Size

Apple could increase the iPhone’s scale from @2x to @3x, re-using an existing logical size (either 320x480 or 320x568). This would allow them to use the same display panel already in use in the iPhone 5s, but cut it into a larger shape. This is more or less what Apple did with the first iPad mini; its display panel was the same as that of the iPhone 3G, just larger. The problem with this approach is that it would result in a phone that seems comically large for an Apple product:

320x480 points at 3x scale.

This would look even more ridiculous if I’d mocked up a 320x568 point ratio at an @3x scale.

Historically, Apple’s designers opt for modest differences in the physical sizes of a given product range (for example, 13-, 15-, and 17-inch MacBooks Pro). So if Apple chose to bump the scale to @3x, I would expect them to also use a higher pixel density than 326 PPI. This would require a new LCD panel, which might be prohibitively challenging. From what I understand, shipping LCDs with a new pixel density would require significant engineering and infrastructure resources, as compared to merely cutting existing panels to a larger size.

@2x Scale, New Logical Size

Alternatively, Apple could re-use the current retina iPhone LCDs, but cut them to fit into a new logical size. I think a logical resolution of 396x656 would be an interesting choice. This would increase the home screen layout by one row and one column:

396x656 points at 2x scale.

To my eyes, this looks like a more sensible size increase and a better use of the larger display. It also has the benefit of re-using the same 326 PPI display panel technology already being manufactured.

My Bet

For all these reasons, I think Apple is much more likely to ship an iPhone 6 with a new logical resolution at an @2x scale than an existing logical resolution at an @3x scale.

|  20 May 2014

A Practical Introduction to Photoshop for iOS Developers

What follows is a crash course in Photoshop for iOS developers. I’m going to take a very nuts-and-bolts approach. I hope to demystify what it is that an iOS app designer means when she says things like “working in vector” or “pushing pixels.” Beware the following caveat: this is an article about tools, not design. If this were an article about ice sculpture, it would teach you how to turn on the chainsaw. It’s up to you to sculpt an angel without losing a limb.

I’ll give you a winter prediction: it’s gonna be cold, it’s gonna be grey, and it’s gonna last you for the rest of your life.

Photoshop is a big beast. In some places its interface design capabilities feel tacked-on as an afterthought. When mocking up an iOS app in Photoshop, you’ll find that you only need a fraction of the available features. The unused features make it hard for newcomers to know where to begin. It helps to find your bearings before opening your first document.

A Stack of Layers

A Photoshop document is a stack of layers that are composited in real time down to a single two-dimensional image. Every layer has several components:

1. Layer Content

Setting aside any other effects or styles that may be applied, a layer’s content is its most basic component. There are five main layer types, each with its own kind of content: raster, fill, shape, text, and smart object.

2. Layer Masks

Every layer has an optional set of masks, which function like stencils. An individual layer can have up to two masks: a raster mask and a vector mask (except shape layers, which can only have a raster mask, since they already have a vector mask by definition). For example, a raster layer could have a heart-shaped vector mask:

Masked Crusader

3. Layer Styles

Each layer has an array of options that apply styles to the inner and outer regions of its content. Layer styles include things like drawing a border around the visible edges of layer content (a stroke), or adding a drop shadow that casts a shadow on layers underneath.

Layer Styles Window

There are lots of layer styles, each with its own suitable purposes and range of possible effects. I’ll go into detail about some of them later on.

4. Blend Modes and Opacities

Non-opaque areas of layer content are composited with underlying layers according to the selected blend mode for that layer. The blend mode selector defaults to “Normal”, but there are many other choices. Many of the blend modes have exaggerated photographic effects, as you can see here:

Three blend modes, same shape and color.

Except for certain specific cases, you should always set each layer’s blend mode to “Normal.” When it comes time to save image slices as PNGs to use in your app, Photoshop will blend non-opaque areas with an empty translucent background, thus losing the information produced by a dynamic blend mode.

There are also two opacity sliders for each layer. The one officially dubbed opacity adjusts the opacity for the entire layer, including any layer styles that have been applied. The other opacity slider is called fill. The fill slider adjusts the opacity of the layer’s contents without affecting the opacity of the styles. The difference between opacity and fill is easier to understand with a visual example:

Opacity versus Fill

5. Layer Groups

Layers can be organized into a group, which looks like a folder in the layer panel. Since Photoshop CS6, layer groups have their own layer styles and masks, as well as opacities and blend modes. This can be difficult to wrap your head around in the beginning, but it comes in handy when mocking up complex layouts.

Working in Vector

With new device form factors always on the horizon, it’s important for iOS designers to build mockups and image resources in ways that are easy to scale up or down as needed. The recommended approach is called “working in vector,” which is not necessarily a reference to working with SVGs. Most iOS app image resources are loaded as PNG files. But that doesn’t mean the Photoshop documents that generated them aren’t vector-based.

A vector-based Photoshop document is composed entirely of shape layers and fill layers. It is trivial to scale a non-retina @1x resolution document up to an @2x document, as long as it’s entirely composed of these two layer types. It’s often as simple as using the “Image Size…” menu item.

Some designers do all their mockups at non-retina scale and then scale up to retina for final processing. I prefer the opposite. Others make large sprite sheet documents that have both normal and retina scale images for app resources, side-by-side, sliced up for easy exporting. Whatever your approach, the most important thing is to avoid using raster layers at all costs. Scaling up a raster layer to a higher resolution will make your hard work look terrible:

Vector versus Raster

Sample Project: A Classic Button

Let’s put together everything discussed above in a sample project. It may not be in-fashion these days, but a classic iOS button is a great learning project for experimenting with shape layers, layer styles, and vector-based documents.

“Gee, our old LaSalle ran great…”

1. Create a New Document

New Document Modal

A document for mocking up an iOS app should be in the RGB color space with a resolution of 72 pixels per inch. I usually work in a 16-bit color depth since it produces smoother gradients. If you plan on exporting PNGs for use in an actual app (say, for button states), be sure to have a transparent background.

2. Disable Color Management

Notice in the screenshot above that I selected “Don’t Color Manage This Document.” Photoshop processes color differently than other OS X apps. When you’re designing for mobile apps or for the web, you’ll want to disable all forms of Photoshop’s color management. In addition to disabling color management for all new documents, you’ll also want to select “Monitor RGB – Display” for “RGB Working Space” under the “Color Settings…” menu item:

Color Settings

Using the native RGB space of your Mac’s display, your Photoshop document will look similar to what you’ll see on a device. But there’s no substitute for using something like Skala Preview to preview your designs in situ on an iPhone or iPad. Marc Edwards from Bjango has an excellent article that goes into detail on color management and Photoshop.

3. Enable Pixel-Snapping

All vertical and horizontal edges in your mockup should be aligned to whole integer pixel margins. It’s possible for shape layers to have path segments that are out of alignment with whole pixel margins. When this happens it makes the edges of your shapes look fuzzy:

Pixel snapping is crucial.

There is an option in Photoshop’s general preferences screen called “Snap Vector Tools and Transforms to Pixel Grid” which, when enabled, makes this much easier to manage.

This button toggles pixel snapping.

If you’re working on a retina resolution document (1536 by 2048 pixels for a portrait iPad), try to make all horizontal and vertical edges line up with even-numbered pixel margins. That way when you scale the document down for a non-retina screen, your edges won’t fall on sub pixel boundaries (which leads to fuzzy edges).

4. Add a Solid Color Fill Layer

Using a fill layer makes it easy to non-destructively tweak the background color of the document whenever you wish. To speed up my work, I use John Gruber’s system keyboard shortcut for the Help menu search box. I just start typing N-E-W-F-I-L-L until the desired item appears in the drop-down.

New Fill Layer

After picking a color, your document will have a backdrop to go behind the button.

Fill layer in action.

5. Add the Button Shape Layer

First put up some guides to mark where you want the button to go, either with my menu item trick (N-E-W-G-U-I-D-E) or by manually dragging inward from the rulers.

Measure twice, draw once.

Next, choose the rounded rectangle shape tool from the tools panel. You may need to click and hold to switch between the shape tools from the sub-menu.

Shape Tool (click and hold)

When you’ve selected the rounded rectangle tool, the options toolbar changes to show the options for this tool, including a corner radius:

Corner radius option

If you can’t see the options menu, toggle its visibility under the “Window” menu item.

Change the corner radius to something neither too small nor too large. Since this is a big button at retina scale, I think 16 pixels looks good. Now draw in your shape:

Newly-drawn Shape Layer

Notice that there is now a layer in the layers pane called “Rounded Rectangle 1”. I recommend giving a proper name to every layer in your document.

6. Change the Button’s Color

To change the fill color of your rounded rectangle, double click in the thumbnail preview for that layer in the layer pane (can it be less obvious?). This will make the color picker appear. Pick a bright blue color, but not too saturated. Something like this:

New Shape Layer ready to go

7. Experiment With Layer Styles

With the options in the Layer Styles window, there is a shocking variety of possible effects you can achieve – even with just a single shape layer. The full breadth of each tool is outside the scope of this post, but I’ll give you a taste with the following recipe. To show the Layer Styles window, double click in the empty gray area of a layer row in the Layers Pane (yeah, I know, even less obvious).

Inner Shadow

Inner Glow

Gradient Overlay

Gradient Overlay Submenu

Outer Glow

Drop Shadow

Bonus Points: Add an Icon Shape Layer

For bonus points, switch the shape tool to the custom shape option and add a white icon to your button:

Style the icon to make it look sort of like mine does here. Remember this star is just one shape layer. You should be able to create this glossy, raised look with the Layer Styles window alone.

|  19 May 2014