Thursday, 31 October 2013

Samsung S3 Touchscreens Are More Accurate Than iPhone, But Only If You're A ... - Forbes

Finnish test-and-measurement automation company OptoFidelity has gotten a lot of attention for its new one-finger robot with a study headlined "OptoFidelity TPPT tester proved significant fails in Apple iPhone touch accuracy." Specifically, the company tested the accuracy of responses to very precise touches across the entire screen of Apple's flagship iPhone 5S ("the world's most advanced smartphone"), the iPhone 5C, and last year's Samsung Galaxy S3.

The results, shown in the image above, seem damning of the new iPhones to the uncritical eye, and the tabloid press has certainly gone to town in that direction. A more considered reaction, for instance from Nick Arnott on his Neglected Potential blog, reveals another story:

But something sticks out about the iPhone results.

The green area for the iPhone results, where it registered taps within 1mm of the actual tap location, fall into an area that would be easily tappable with your thumb when holding your phone with your right hand.

Yes, that is striking, isn't it? This pattern, nearly identical in the two iPhones tested, suggests an engineered difference, a compensation: not a bug, but a feature! No one outside of Apple knows for sure that this is so, Arnott admits, but it makes too much sense not to be true.

Interestingly, John Gruber of Daring Fireball writes that, "A little birdie tells me they 'don't think there's a right-thumb bias' in iOS." This "little birdie" is clearly an Apple insider, and while Gruber interprets this as attributable to an error in OptoFidelity's testing procedure, I think it is likely that your iPhone quickly figures out your handedness and adjusts accordingly. By that line of thinking, either the units that OptoFidelity tested had already been used by a right-handed person (not unlikely), or iOS does have a right-hand preference out of the box that is easily overridden by initial use. I don't happen to have a brand-new iPhone to try this on, but I'm sure someone will test it and report back.
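If iOS really does adapt in this way, the mechanism could be as simple as tracking the horizontal error of corrected taps and nudging future registrations accordingly. To be clear, the sketch below is pure speculation on my part: the offset size, the learning rule, and the error signal are all invented for illustration, and reflect nothing known about Apple's actual code.

```python
# Toy illustration of handedness-adaptive touch registration.
# All numbers and rules here are hypothetical, not Apple's.

def compensate_tap(x_mm, y_mm, handedness, max_offset_mm=1.0):
    """Shift a raw tap horizontally by a handedness-scaled offset.

    handedness runs from -1.0 (strongly left-handed) to +1.0
    (strongly right-handed); 0.0 applies no correction at all.
    """
    return (x_mm + handedness * max_offset_mm, y_mm)

def update_handedness(handedness, error_x_mm, rate=0.05):
    """Drift the handedness estimate toward observed tap errors.

    error_x_mm could come from correction events, e.g. the
    horizontal distance between a mistap and the target the
    user then hit. The estimate is clamped to [-1, 1].
    """
    estimate = handedness + rate * error_x_mm
    return max(-1.0, min(1.0, estimate))
```

On this model, a fresh unit with a small built-in right-hand prior would show exactly the bias OptoFidelity measured, while a left-handed owner's phone would drift the other way within days of use.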

What can we learn from this supposed iPhone "fail"? First, as Arnott points out, it is important to align your tests with actual use patterns. Other than certain kinds of games, most smartphone applications do not use the entire touchscreen equally. There are "sweet spots," which in fact coincide with the sensitive areas shown in the iPhone tests. Second, as Greylock partner (and former Mozilla CEO) John Lilly points out in a post on Medium, Apple is not a hardware company or a software company but a "personal computing systems company." So where Samsung (truly a hardware company) would approach the touchscreen as OptoFidelity has, as a collection of technical attributes like touch sensitivity, pixel density, and brightness, Apple sees it as an experience-delivery medium.
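Arnott's point about aligning tests with use patterns can be made concrete. Here is a minimal sketch (the coordinates, weights, and error figures are all made up for the example): instead of averaging touch error uniformly across the panel, weight each sample by how often real users actually tap that region.

```python
def weighted_error(samples, usage_weight):
    """Mean touch error weighted by regional usage.

    samples: list of (x_mm, y_mm, error_mm) measurements.
    usage_weight: function (x_mm, y_mm) -> relative tap frequency.
    """
    total = sum(usage_weight(x, y) for x, y, _ in samples)
    return sum(usage_weight(x, y) * e for x, y, e in samples) / total

def uniform(x, y):
    return 1.0

# Invented weighting: taps land 3x as often in the right-thumb zone.
def thumb_zone_weight(x, y):
    return 3.0 if x > 30 and y > 50 else 1.0

# Two fictional devices, one error sample per region: accurate in
# the sweet spot (device_a) versus uniformly mediocre (device_b).
device_a = [(50, 80, 0.5), (10, 10, 2.0)]
device_b = [(50, 80, 1.0), (10, 10, 1.0)]
```

With these made-up numbers, device_b wins under the uniform weighting (1.0 mm vs. 1.25 mm) but device_a wins under the usage weighting (0.875 mm vs. 1.0 mm): the ranking flips depending on whether the test reflects how the screen is actually used.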

This distinction is at the heart of what makes Apple Apple. Optimization requires differentiation. Certain areas of the screen matter more than others, and the user's ability to tap accurately also varies across the screen. Far from being an inaccuracy, then, I see the difference between the Apple and Samsung approaches as a disagreement about the importance of human-factors engineering. There is an awful lot of subtle engineering involved in making things "feel right," and that is what Apple is after, not just raw specs.

I would further argue that although Apple's optimizations do not always work as intended (and sometimes fail), the company's ongoing attempt to work on that level is behind the real value of its products. OptoFidelity has succeeded in garnering some publicity, but its testing may have proven the opposite of what it intended. In doing so, the Finnish company has also pointed to the engineering required to make its own testing apparatus align with actual user experience (see the impressive video below).

As with digital cameras, it's not just about the number of pixels, or even the raw accuracy of the sensor, but the way all of that data works together to make beautiful pictures. The almost complete lack of human inflection revealed by Samsung's uniform accuracy could give us a clue to why that company's products can feel impressive but soulless. The "ghost in the machine" is actually the human! 

Dangling Question: Why didn't they test the S4?

– – – – – – – – – – – – – – – – – – – –

To keep up with Quantum of Content, please subscribe to my updates on Facebook, follow me on Twitter and App.net or add me on Google+.
