Over my last couple of days out running, I’ve been grinding my way through Gruber’s latest episode of The Talk Show podcast with Craig Hockenberry.

And so I decided to add my ‘Medium-style think piece’. You’re welcome, in advance.

On his show, Gruber often debates the latest possible product releases from Apple and its competitors and tries to find use cases for the devices.

But I wanted to turn this around and talk for a moment about some use cases for possible wearables: use cases that could highlight some possible product directions.

Here are my frickin’ pipe dreams around ‘wearable computing’:

Wearable computing is all about extending our connection with the computer, and in many ways with the internet, beyond the main way we already access and manipulate that data.

This means we will be using any such device away from our desks, away from a ‘regular computing device’: on the road, on public transportation, while doing anything other than staring into a screen.

Devices that are used in public need to solve many challenges, but the big one is this: they can’t be distracting. They need to enhance what we already do, but can’t divert our attention away from what we are already doing.
Using the device while we’re out and about must enhance what we’re doing. The device must be light, and possibly close to invisible.


Larry Page didn’t wear Glass at TED. Why? Because it would’ve been distracting and rude.


Sitting at our desks, we can dream of maximizing our productivity by adding more screens, timelines, and workflows, but generally we can only focus on one thing well.

So, where would ‘wearables’ be used and how?

On foot

Trying to look at a screen, and possibly input data, is distracting; it pulls our attention away from where we are and doesn’t help us directly.
I might get a notification that I have emails in my inbox while I’m out running, but I certainly don’t want to stop and answer them.
The phone is for emergencies. I would stop my activity if something required immediate attention, but for that people would have to call me.
Out and about, I take pictures and might need to get directions.
To take pictures I use my hands, so I can use my phone. For directions, I could listen to them if they are preprogrammed, or if I could interact with the app in a way that doesn’t require me to look at a screen and type with my fingers.
The Walkman brought music, and later podcasts, to our ears, and the Nike+ FuelBand started tracking our every move.
Siri and Google Now are trying to get us to use our apps hands-free, but the adoption is not what we are used to in this age of rapid user adoption and millions of new daily sign-ups.

Public transportation

A person on public transportation, whether on the bus or train, in a cab, or at the airport, craves distraction and needs information. Distraction, because the commute is mainly passive. And information, because a train might be late.
The phone is perfect for this situation. Sitting in transit doesn’t require our attention most of the time and our hands are free. Using our phone one-handed or even two-handed is possible and pretty well optimized.
Perhaps the vibrator/ringer could be improved by a device that extends the phone’s reach onto our wrist or finger, so we don’t miss a notification while the phone is in our pocket. But such a device would be an extension of our main screen and input device: a satellite to our new hub, the phone, the portable and personal computer in our pocket.
The device would also need to be smart enough to automatically stop alerting us if we already have the phone in our hands.
On a train, voice control is out of the question, but headphones could extend the delivery of our content to us.

In the car

Any interaction with the device has to keep us safe or make our driving safer. Visual distraction is the most dangerous while driving, but audible signals could work. Radio has worked for decades, hands-free calling works, and Siri giving us directions helps. In fact, spoken directions given to us while driving help us concentrate on the road more: we don’t need to redirect our eyes from the Maps screen to the road.

At a job site (the few jobs left that don’t require people to sit in front of a computer all the time)

Any Google Glass-like floating screen in front of our faces will only make sense in a very limited field that is predictable and where we need additional, enhancing data.
An Amazon warehouse worker could ride his Segway down the aisles while scanning the customer order floating in front of him.
A surgeon could get information about a procedure while using both hands to perform an operation.
But in both cases the data that’s being fed is very limited, controlled, and fixed. You wouldn’t want the surgeon to update his status while messing with your heart.


The way we use computers and receive information has been the perfect union between input and output over many decades.
We receive information mainly by looking at a screen and we input information by typing on a keyboard.

Voice interfaces have been trying to get a foothold in the market for years now, and even with Siri or Google Now, consumers are struggling with the reality that dictating with our voice, without looking at a screen, is hard. We need our hands and we need our eyes. So far.


So, what’s the killer application?

What action would a killer device perform that would revolutionize the way we interact with our devices?

Passive data gathering in the form of a FuelBand or Fitbit is pretty well established already. Yes, it can be more accurate. But the key here is that the data is gathered passively. We’re not being prompted to read and respond to notifications, or to stop what we’re doing and answer an email.
One big challenge the ‘passive data gathering’ products have is that in the end they give us even more stuff to do when we’re back in front of the screen. It’s enhancing, but it also makes us work. So the product is competing, again, with our time in front of the screen.
And unless we, as a society, see a direct benefit from tracking our health and fitness, it will only attract the individuals who already want to be fit. Perhaps if we could get our health insurance costs significantly lowered by wearing a fitness tracker and proving certain results, mass adoption would occur. The NSA would sponsor that in a heartbeat.

Any device that’s essentially another screen will need battery management, and it can never replace the phone as the main personal computer. On this I 100% agree with Gruber. And if the challenges of Glass and the current smartwatch offerings are any indication, then we can assume that people don’t want another screen. And any smaller screen can’t replace our main screen, the phone.

If the ‘passive product’ market is already established, Apple certainly could release their version of that type of product. But I can’t see it revolutionizing the category. Some enhancements certainly are possible, especially with the close tie-in with iOS and the iPhone. But the device couldn’t cost more than the current products on the market, not just because of the competitors’ prices, but because we’d be approaching iPod/iPad prices.

As I was running around my neighborhood, killing kilometer after kilometer and listening to podcasts, I was thinking that I’d love a better pair of headphones.
For that I don’t need a noise-canceling, surround sound, in-ear, audiophile experience.
I take my iPhone with me when running. I’m using the Runkeeper app to track my run while listening to podcasts with the podcast app. I don’t need the phone to be smaller, to sit on my wrist, or to be attached to my pants. I actually like having the phone with me, in case I do need a map, make a phone call, or check on something important. I wouldn’t want a reduced experience that would notify me of email but wouldn’t let me respond.

But a pair of headphones that would let me bring the internet with me and let me hear what’s going on would be interesting.
Could certain notifications be read out loud, to give me an idea of what is going on while I’m away from my screen?
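As a minimal sketch of what that could look like, assuming an iOS app leaning on Apple’s existing AVSpeechSynthesizer text-to-speech API (the `speakNotification` function and the sample notification are purely illustrative, not anything Apple ships):

```swift
import AVFoundation

// Hypothetical sketch: speak a notification's text through whatever headphones are connected.
// AVSpeechSynthesizer is the real text-to-speech API in AVFoundation; everything else here
// is made up for illustration.
let synthesizer = AVSpeechSynthesizer()

func speakNotification(title: String, body: String) {
    let utterance = AVSpeechUtterance(string: "\(title). \(body)")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// Example: the phone stays in my pocket while the earpiece tells me what just arrived.
speakNotification(title: "New email", body: "The show notes are up.")
```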

If Apple could create a small, unobtrusive earpiece without cables that would deliver all of the phone’s audio, we’d have the problem of data delivery solved.
But it still wouldn’t solve input. For that, in our current world, you would still need a screen.
Perhaps voice control could improve and we could give directions to our devices that way. We’d be talking to ourselves in public like we did a few years ago, when everyone ran around talking into tiny Bluetooth earpieces. So perhaps that product area has already proven itself to be a nonstarter. But perhaps it’s worth taking another look and seeing if there is a play.

If you made it all the way down this long article, you might be disappointed that I am offering you an idea as boring as an earpiece. But I like to be practical and offer solutions. I’m not somebody who tears everything down and calls everything impossible. I like to weigh my options, be pragmatic, and pick one: an idea I would investigate further. Not invest in yet, but one to let the thoughts expand on. Perhaps it’s a stupid idea.

Would love to hear what you’re thinking. Can you think of other use cases for wearables in public?
Have you used an earpiece? Would you use one?