I was going through Humane’s TED Talk video snippets, and while most of it was a demo whose merits can only be judged once people spend time with it, some points stood out to me.
The most compelling parts are the ones showing its use as an interface for artificial intelligence.
I was really impressed with the demo where Imran asks whether he can have a particular chocolate, and the device processes the chocolate’s ingredients and tells him if it’s suitable for him based on his dietary preferences. That’s really cool.
In general, I believe this device’s ability to process the user’s environment and surface contextually helpful information is going to be its key feature.
The translation and dietary requirements demos present that most compellingly.
The projection bits of the demo were far less compelling to me. I fail to see how it’s more convenient or better than having this interface on, say, a phone.
I certainly wouldn’t buy the Humane device if I were only looking for a different surface to present my phone’s interface on.
Of course, it’s a very small video clip and how well these things work in real world environments remains to be seen.
But I did find the AI bits intriguing: given how much of a black box AI is, building an interface around it could make AI accessible to many more people.
There’s still a lot we don’t know about it.
Part of ubiquitous computing is knowing when and where to surface information. So far, the demo only presents user-initiated actions. I wonder if Humane’s device can also proactively find and display information.
What I have in mind is the device helping the user translate signs that aren’t in their native language, or going through a lengthy document and presenting a summary on their behalf, without the user having to trigger it. Essentially, the device becomes an extension of the user’s ability to process what they see.
Humane’s demo reminds me of the first Google Glass demos in many ways. Google Glass, too, was a very camera-driven device, and it too had some compelling use cases. But it was a social-acceptance disaster.
I fear the same could also be true for Humane’s product. No one takes kindly to having a camera pointed at them without their consent. Sure, you could build the most privacy-focused system, but people are wary of such devices since they look suspiciously like spy-cams.
So Humane could have the perfect device, but if people are uncomfortable around it in public settings, its use is going to be very limited.
As much as I am sceptical about its success, I am personally happy to see a new take on computers. For far too long, we’ve only seen iterations and reiterations of the touch-screen computer. New tech is exciting.