Sometimes, to give myself a mental break from the nitty-gritty of everyday web production, I like to daydream about where in the world this is all going. After all, it’s a web designer/developer’s responsibility to stay up to date on new technologies, and that requires constant diligence and a fundamental intellectual curiosity about information technology in general.
Because it can all be broken down to interaction with information. Whether we are talking about viewing a company’s portfolio website on your laptop, browsing your Facebook news feed on your iPad, or sharing the Instagram photo shoot of your family vacation to the Grand Canyon (does anyone still do this?), what we are really talking about is the consumption or creation of digital information: an abstraction of a real-world object or construct, reached through some type of computer interface.
We have seen technology take us from computers that took up entire wings of research labs to personal home computers, laptops, and now mobile devices. But it seems to me that the interface, no matter how small, is still primitive; the natural progression of this interaction with digitized information is to free it from the device and fuse it with the world around us. The answer, of course, is augmented reality. For a primer, I like this article at HowStuffWorks.com.
Two of the most promising technologies pointing the way to our augmented future are Google Glass and SixthSense: two different approaches from very different originators (the first a corporate behemoth that practically owns all information, the second an open source project from a university research team) and two alternate takes on the interaction interface. A good primer on Google Glass is this blog post from David Pogue at The New York Times (Google Glass and the Future of Technology), and a good start for SixthSense is creator Pranav Mistry’s website http://pranavmistry.com (be sure to watch the first video, a viral hit from the 2009 TED Conference that introduced the SixthSense project to the world).
From David Pogue’s blog post, Google Glass “looks like only the headband of a pair of glasses — the part that hooks on your ears and lies along your eyebrow line — with a small, transparent block positioned above and to the right of your right eye. That, of course, is a screen, and the Google Glass is actually a fairly full-blown computer. Or maybe like a smartphone that you never have to take out of your pocket.” So the concept here is a personal interface to the digital world, controlled through very subtle physical actions, that becomes hands-free. I won’t go into the whole article here, but one area where I disagree slightly with David is when he says “We’ve seen that the masses can’t even be bothered to put on special glasses to watch 3-D TV; it may take some unimagined killer app to convince them to wear Google Glass headsets all day.”
I think a more relevant parallel would be headsets or earpieces for cell phones; many people prefer to talk and listen on their mobile device hands-free, and I see people walking in the park, driving (obviously), sitting at cafés, and so on, interfacing this way. If the technology were revolutionary enough and also a pleasant experience (no dizziness or odd physical side effects), I think you could see wide adoption.
If you watch the TED video on SixthSense, you see that this technology instead projects the digital world onto the physical world around you; you interact with it through touch, but rather than needing a device interface, you use tables, walls, even your own hand as a projection surface. Both technologies use cameras to capture and interpret the physical world in the digital realm and fuse the two together, thereby augmenting our reality with information.
The challenge with this approach, I believe, is simply in designing a computer/projection device that better integrates with you physically; the hanging mobile-computer lanyard strikes me as a bit clunky. Perhaps some hybrid of the two approaches? A Google Glass that is also capable of surface projection? That could be a winner.
This is all truly the stuff of science fiction, but in the same way that mobile phones were anticipated in fantasies such as Star Trek decades before they became reality, I believe we are far closer to this information paradigm shift than we were at the advent of the computing revolution. I’m not willing to say we will end up as fully realized cyborgs anytime soon, but I believe one of these companies or innovators is going to figure out the right solution for integrating these devices into everyday use. And when that happens, we will see another leap, just as we did with simple mobile phones and later smartphones. Count me excited by the prospect!