Google Glass Needs A Full Audio UI

Google Glass’ UI requires the touchpad today, yet using it becomes painful after only a minute or two! Glass needs a complete voice-command UI to avoid “Glass shoulder” syndrome.

Google Glass’ touchpad is fine when a verbal command might be awkward, but an audio UI becomes essential when your hands are occupied (e.g., covered in dough while cooking, busy carrying things, or just relaxing). The Glass UI relies too heavily on the touchpad, and this is, literally, painful. Tom Chi concisely explained why in his talk at “Mind the Product 2012” (http://vimeo.com/55741515, 7m18s):

[Tom describes the first test subjects trying a prototype gesture UI]
“…and about a minute and a half in, I started seeing them do something weird: they were going like this [Tom rolls his shoulders and kneads them], and I was like, ‘What’s wrong with you?’ and they responded, ‘Well, my shoulder sort of hurts.’ And we learned from this set of experiments that if your hands are above your heart, then the blood drains back and you get exhausted from doing this in about a minute or two, and you can’t go more than five.”

That’s why using Glass for more than a minute or two just isn’t practical right now; the touchpad is above your heart, yet much of the UI requires it.

Hopefully future revisions of Glass will make the entire UI available via audio. A simple test for completeness: cover the display and try to use Glass with just your voice and ears (and possibly head movements).
