Sunday 23 February 2014

Displayless interfaces

Thanks to the tube strike the other week, I was forced to get some exercise. I walked the few miles from Liverpool St to the office where I work, and because I wasn’t familiar with the route and didn’t want to walk head-first into a lamp-post while looking at the map on my phone screen, I put an earphone in one ear and enabled voice-guidance on Google Navigation, leaving the phone in my pocket, so that I could concentrate on avoiding lamp-posts and not getting run over.

Disappointingly, the voice directions proved less than adequate. For some reason, the spoken instructions are simpler when Walk is selected instead of Drive. At all but the simplest of junctions, the terse “turn left” or “continue straight” message in my ear was ambiguous, and if I turned around to consider the options I quickly lost any sense of direction.

What I needed was more context. “Turn left into Aldgate St,” instead of just, “Turn left”, would have been a good start. Better still if I could ask questions such as “Which road should I take?”, “Describe this junction,” or the kids’ favourite - “Are we nearly there yet?”

Google Glass could have put the map into my view, but who wants to look like part of the Borg Collective? And for that matter, I don’t much want to be seen talking to myself either, so give me an earphone for one ear and some buttons I can press without looking at them, and I’ll be happy.

I think Google may have missed a trick - Google Glass is expensive, but Google Ear could be a free downloadable app.

There’s a similar problem when using SatNav in my car. If I want to detour to get fuel, then using the touchscreen to zoom out and look around the local area is tricky, not to mention dangerous and illegal. There’s a trend among car manufacturers now to replace dashboard controls with a large touchscreen, expanding this problem to even more tasks, from turning up the fan to changing the radio station. (At least someone is thinking about this, but the communication in that interface is not rich enough for what I want.)

If I were using Google Ear to navigate while driving, and the set of standard buttons I can operate without looking were mounted on the steering column, I could reroute without even looking at the screen, let alone touching it.

The key requirements are these:
  1. A set of buttons that can be operated without looking at them. Not too few, and not too many, and I’ll need a set I can use in my car, and a set I can use while walking, and perhaps a set at my desk.
  2. The set of buttons needs to be standardised, so that multiple manufacturers can supply them in various forms - a set on a Bluetooth-connected key-fob, a set for the steering column, a set on the side of a Bluetooth earpiece (killing two birds with one stone), ...
  3. An intuitive set of conventions for the use of said buttons, which remain the same in all contexts. Think of the buttons on a games console controller - the left and right buttons always mean the same thing, so their use quickly becomes second nature.
  4. Audible communication from the device. It wouldn’t all have to be spoken - short sounds could indicate status, progress, etc.
  5. Some conventions for the structure of the voice output, which fit in with the button input conventions and aim for efficient interaction between device and user. For example, if I search for something, the spoken response could start with a simple, "28 results," and then I can decide whether to have it start listing the results or to refine the search first (see the sketch after this list).
  6. Voice input commands for situations where button input would be too complex - for example, trigger voice mode with the buttons and say, “Find nearby petrol stations.”
  7. It lives in my phone, so I have it with me at all times.
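
To make points 3 and 5 a little more concrete, here is a rough sketch in Python - the names and the four-button set are entirely hypothetical, and a real app would hook into the phone's text-to-speech and Bluetooth APIs rather than printing to a console - of how a fixed set of button events might drive a terse spoken dialogue:

    # Hypothetical four-button set (requirement 2); the physical form
    # (key-fob, steering column, earpiece) doesn't matter to the app.
    BUTTONS = ("NEXT", "PREVIOUS", "SELECT", "BACK")

    class SearchSession:
        """Tracks a search so the spoken output can stay terse (requirement 5)."""
        def __init__(self, results):
            self.results = results
            self.index = -1  # nothing has been read out yet

        def summary(self):
            return f"{len(self.results)} results."

        def step(self, delta):
            # Move through the results, clamped to the first and last entries.
            self.index = max(0, min(len(self.results) - 1, self.index + delta))
            return self.results[self.index]

    def speak(text):
        # Stand-in for the device's audible output (requirement 4).
        print(f"[spoken] {text}")

    def handle_button(session, button):
        # Requirement 3: the same buttons always mean the same thing,
        # whatever the current context happens to be.
        if button == "NEXT":
            speak(session.step(+1))
        elif button == "PREVIOUS":
            speak(session.step(-1))
        elif button == "SELECT":
            speak(f"Navigating to {session.results[max(session.index, 0)]}.")
        elif button == "BACK":
            speak("Search cancelled.")

    if __name__ == "__main__":
        # e.g. after the voice command "Find nearby petrol stations" (requirement 6)
        session = SearchSession(["Shell, 0.4 miles", "BP, 0.9 miles", "Esso, 1.2 miles"])
        speak(session.summary())  # just "3 results." - then the user decides
        for press in ("NEXT", "NEXT", "SELECT"):
            handle_button(session, press)

The point of the sketch is that all the intelligence lives in the phone (requirement 7); any standards-compliant set of buttons only ever has to send one of a handful of events.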

What do you think? Would you use it?