Dynamic affordance: Can the iOS 8 interface be made smarter?

Depending on who you are and how you feel, iOS 7 either took the training wheels off to fully embrace digital design, or it removed so much interface as to crush usability. Both are true. iOS is used by a wide range of people, from digital and mobile immigrants — those who grew up reading newspapers or using PCs — to digital and mobile natives — those who were born to the iPhone and iPad. One single, static default choice can never properly meet the needs of everyone across that range. But what if iOS 8 could make affordance and accessibility dynamic?

Affordance — the characteristics of an interface element that help hint at the actions it can perform — is essential in human-machine interaction. One of the design principles for iOS 7 was deference, and in its service a lot of interface chrome — the bars, borders, and other structural or decorative elements — was stripped away in favor of making content more prominent. Most famously, the shapes around buttons were removed, leaving only the naked text or glyphs/icons behind. Where previously the touch-target — the area that can trigger an action — of a button was visually apparent, now only its center-point remains.

It was cleaner but it offered far less affordance. Instead of seeing something that looked like a button, you had to know or figure out it was a button. So, responding to the complaints, Apple added an Accessibility toggle to restore button shapes to some iOS navigation elements.

But what if toggles weren't necessary, or at least were a secondary, manual option? What if iOS could determine when people were struggling to use an interface element, bump up affordance and accessibility automatically, and then ease back as people got more comfortable?

For example, if someone taps near a button over and over again in a short period of time, iOS could realize they're trying to hit it and are missing, and automatically turn on button shape hinting — perhaps fading it in — and even increase the tap target size temporarily so the next touch triggers it even if it's still a tiny bit off.
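
To make that concrete, here's a rough Swift sketch of how an app could approximate the idea today with public UIKit hooks. Everything here is illustrative guesswork, not anything Apple has shipped: the class names, the three-misses-in-three-seconds trigger, and the 12-point slop are all made up for the example.

```swift
import UIKit

// Hypothetical sketch, not an Apple API: a button that temporarily grows its
// touch target and fades in a background shape after repeated near-misses.
final class ForgivingButton: UIButton {
    private var hitSlop: CGFloat = 0          // extra touchable points around the bounds
    private var recentMisses: [Date] = []

    /// Called by the containing view when a tap lands close to, but outside, the button.
    func registerNearMiss() {
        let now = Date()
        recentMisses = recentMisses.filter { now.timeIntervalSince($0) < 3 } + [now]

        // Three near-misses inside three seconds: assume the person is struggling.
        if recentMisses.count >= 3 {
            hitSlop = 12
            UIView.animate(withDuration: 0.25) {
                self.backgroundColor = UIColor(white: 0.9, alpha: 1)   // fade in a shape
                self.layer.cornerRadius = 8
            }
            // Ease back once they have presumably found their footing.
            DispatchQueue.main.asyncAfter(deadline: .now() + 10) { [weak self] in
                self?.hitSlop = 0
                self?.recentMisses.removeAll()
                UIView.animate(withDuration: 0.25) { self?.backgroundColor = .clear }
            }
        }
    }

    // Count the expanded area as "inside" the button while the slop is active.
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        bounds.insetBy(dx: -hitSlop, dy: -hitSlop).contains(point)
    }
}

// Container that notices taps landing near a ForgivingButton without hitting it.
// (Hit-testing can run more than once per touch; a real version would de-duplicate.)
final class ForgivingContainerView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let hit = super.hitTest(point, with: event)
        if hit === self || hit == nil {
            for case let button as ForgivingButton in subviews {
                let nearby = button.frame.insetBy(dx: -30, dy: -30)
                if nearby.contains(point) && !button.frame.contains(point) {
                    button.registerNearMiss()
                }
            }
        }
        return hit
    }
}
```

Hit-testing isn't a perfect proxy for intent, of course, and a real implementation would need to be smarter about what counts as a miss, but it shows how little plumbing the basic idea actually needs.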

Likewise, iOS assumes taps higher up on the screen are coming in at a greater angle, which can sometimes frustrate people — and robots — holding their iPhone in less common ways and tapping from less common angles.

Apple's multitouch display technology, however, has the ability to detect capacitance some distance from the screen and use that information to figure out things like which finger on a hand is likely the one doing the tapping. Based on that electrical guesswork, perhaps the angle and trajectory of a tap could also be estimated, and if people are tapping frequently and missing slightly, perhaps the tap target could again be dynamically adjusted to better suit their angle.
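
Apps can't see raw capacitance data, so any third-party version would have to infer that bias indirectly. Here's a hedged sketch of the weaker form: watch how far taps land from the centers of the controls people were clearly aiming for, keep a running average of that error, and nudge hit-testing by it. The class name and smoothing factor are invented for the example.

```swift
import UIKit

// Hypothetical sketch: learn a person's systematic touch offset from how far their
// taps land from the centers of the controls they were aiming for, then nudge
// hit-testing by that bias.
final class OffsetCompensatingView: UIView {
    private var averageOffset = CGVector(dx: 0, dy: 0)   // running estimate of the bias
    private let smoothing: CGFloat = 0.1                  // how quickly the estimate adapts

    /// Record one observed tap along with the center of the control it was meant for.
    func recordTap(at point: CGPoint, intendedCenter: CGPoint) {
        let error = CGVector(dx: point.x - intendedCenter.x,
                             dy: point.y - intendedCenter.y)
        // Exponential moving average keeps the estimate stable but adaptive.
        averageOffset.dx += smoothing * (error.dx - averageOffset.dx)
        averageOffset.dy += smoothing * (error.dy - averageOffset.dy)
    }

    // Shift incoming touches by the learned bias before normal hit-testing, so taps
    // that habitually land a few points low still reach the intended control.
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let corrected = CGPoint(x: point.x - averageOffset.dx,
                                y: point.y - averageOffset.dy)
        return super.hitTest(corrected, with: event)
    }
}
```

The recordTap call is an assumption about app wiring: you'd invoke it from a control's action handler, passing along the touch location and the control's center.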

As gestures become increasingly common, gesture collisions become increasingly common as well. Swipe to favorite, swipe to reply, even horizontal scrolling are all great unless, while simply trying to go down a list or a page, the angle you're using keeps misfiring those horizontal gestures. It'd be great if iOS could detect when those gestures are aborted or reversed repeatedly and temporarily increase the angle or emphasis needed to trigger them.
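
Again, purely as an illustration, here's what a dynamic gesture threshold might look like in Swift using a standard UIPanGestureRecognizer delegate. The angles and time windows are arbitrary, and noteAbortedSwipe is a hypothetical hook the app would call when a swipe snaps back without completing.

```swift
import UIKit

// Hypothetical sketch: only let a horizontal swipe begin when the drag is close enough
// to horizontal, and tighten that tolerance when swipes keep getting started and abandoned.
final class SwipeCollisionDamper: NSObject, UIGestureRecognizerDelegate {
    private var maxAngle: Double = 30        // degrees from horizontal allowed to begin
    private var recentAborts: [Date] = []

    func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
        guard let pan = gestureRecognizer as? UIPanGestureRecognizer,
              let view = pan.view else { return true }
        let t = pan.translation(in: view)
        guard t != .zero else { return true }
        // 0° is a rightward drag, 180° a leftward one; anything in between is too vertical.
        let angle = abs(atan2(Double(t.y), Double(t.x))) * 180 / .pi
        return angle < maxAngle || angle > 180 - maxAngle
    }

    /// Hypothetical hook: call this when a swipe is released without completing
    /// (for example, when the row snaps back instead of revealing its actions).
    func noteAbortedSwipe() {
        let now = Date()
        recentAborts = recentAborts.filter { now.timeIntervalSince($0) < 10 } + [now]
        // Three abandoned swipes in ten seconds: demand a much flatter drag to begin.
        if recentAborts.count >= 3 {
            maxAngle = 12
            // Relax again after a while, once the misfires presumably stop.
            DispatchQueue.main.asyncAfter(deadline: .now() + 30) { [weak self] in
                self?.maxAngle = 30
                self?.recentAborts.removeAll()
            }
        }
    }
}
```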

Thanks to their sensors, iOS devices know their position in space, so perhaps angles could be dynamically adjusted to account for how you're holding your device. Standing, walking, lying down, lounging, holding it straight up or at a slight or steep angle: compensating could be tricky, but it could also be beneficial.
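
Core Motion already exposes the device's attitude, so the raw ingredient is there. A minimal sketch, assuming a made-up PostureEstimator class and arbitrary pitch cutoffs, might look like this:

```swift
import CoreMotion

// Hypothetical sketch: classify how the device is being held using Core Motion's
// attitude data (which is public API), so other heuristics can scale their thresholds.
// The class name, posture buckets, and pitch cutoffs are all invented for the example.
final class PostureEstimator {
    enum Posture { case flat, angled, upright }

    private let motion = CMMotionManager()
    private(set) var current: Posture = .angled

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 0.5
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let pitch = data?.attitude.pitch else { return }
            // Pitch is rotation around the device's x-axis, in radians:
            // near 0 lying flat on a table, near ±π/2 held upright.
            let degrees = abs(pitch) * 180 / .pi
            switch degrees {
            case ..<20: self?.current = .flat
            case ..<60: self?.current = .angled
            default:    self?.current = .upright
            }
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```

The resulting posture could then scale the tap slop or swipe angle used in the earlier sketches.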

Hey, in some wacky future world perhaps an iPhone or iPad could use frequent pinch-to-zoom, proximity, or even PrimeSense-detected squinting to dynamically increase text size if people are having trouble reading.
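
For what it's worth, part of that is doable inside a single app today: count how often people pinch in to zoom and, past some threshold, bump the app's own font scale. This sketch is entirely hypothetical in its names and numbers, and it can't touch the system-wide text size, only the app's own.

```swift
import UIKit

// Hypothetical sketch: if people keep pinching in to zoom, assume the text is too
// small and bump the app's own font scale. None of this touches the system text size.
final class ZoomStruggleDetector {
    private var recentPinches: [Date] = []
    private(set) var fontScale: CGFloat = 1.0   // multiplier the app applies to its fonts
    var onScaleChange: ((CGFloat) -> Void)?

    @objc func handlePinch(_ pinch: UIPinchGestureRecognizer) {
        guard pinch.state == .ended, pinch.scale > 1 else { return }   // only zoom-ins
        let now = Date()
        recentPinches = recentPinches.filter { now.timeIntervalSince($0) < 60 } + [now]
        // Several zoom-ins inside a minute: nudge the scale up, within reason.
        if recentPinches.count >= 4 && fontScale < 1.6 {
            fontScale += 0.15
            onScaleChange?(fontScale)
            recentPinches.removeAll()
        }
    }
}
```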

And perhaps many of these methods, and others still, could be brought to bear to create a next-generation keyboard for iOS 8 that's as far ahead of current keyboard technologies as Apple's original iPhone keyboard was ahead of its virtual competition in 2007.

We talked a little bit about dynamic affordance on a recent Iterate podcast, and some really, really smart designers poked some really, really smart holes in the idea and the realities of its implementation, but I still can't help but long for some form of smarts to enter the iOS interface.

We're in the midst of a contextual awakening, and as much as that can make everything around us and our devices better, here's hoping it can make what's on our devices, and how we use them, better as well.

Are there any accessibility or interface issues you'd like to see Apple automate in iOS 8, or would you rather keep all of that completely under your own, manual control?
