Do you remember the computer interface of the future from Minority Report? (Here is a trailer if you haven’t seen it.) It seems like we might not be too far from that becoming a reality. Last week, I watched a researcher search a map, play a platformer, and control a computer entirely with his hands instead of a mouse. This was one of about twenty cool exhibits at Techfest, an event where Microsoft Research demos some of its projects for the public (the event is also called “the & in R&D,” since its purpose is to facilitate the transition from pure research to development). Many of the demos were quite exciting, so I wanted to share my thoughts on a small subset of the cool things I saw.
The day began with a keynote address by Microsoft’s Chief Research Officer, Rick Rashid, who talked about the history and motivation behind Techfest. He finished with a video of new MSR speech recognition software being used during a talk he gave in China. During the first part of the talk, he spoke in English while the software generated real-time subtitles of his words (English speech to English text) at a significantly lower error rate than the previous gold standard. The software then translated the English text into Chinese text for the audience (I think computers have been pretty good at this part for a while now). But at the end, the program went one step further and spoke the Chinese back to the audience in Rick Rashid’s own voice. So at the end of the day, this program could let you speak out loud to an audience in a language you don’t understand, in your own voice. If you’re interested, you can see for yourself in the keynote address; the demo starts at ~48:30.
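Conceptually, the demo chains three systems end to end: speech recognition, machine translation, and voice-preserving speech synthesis. Here is a toy sketch of that pipeline (all function names and the one-entry lookup table are hypothetical stand-ins of mine, not MSR’s actual API; each stage in the real system is a learned model):

```python
# Toy sketch of the speech-to-speech translation pipeline described above.
# Every function here is a stand-in, not MSR's real implementation.

def recognize_speech(audio: str) -> str:
    """Stage 1: English speech -> English text. Stubbed: we pretend the
    'audio' argument is already a transcript."""
    return audio

def translate(text: str, target_language: str) -> str:
    """Stage 2: English text -> target-language text. Stubbed with a
    one-entry lookup in place of a real machine translation model."""
    lookup = {"hello": "你好"}
    return lookup.get(text, text)

def synthesize(text: str, voice: str) -> str:
    """Stage 3: text -> speech in the original speaker's voice. Stubbed
    as a labeled string in place of a real voice-cloning synthesizer."""
    return f"[{voice} voice]: {text}"

# Chain the three stages, as in the demo: speak English, hear Chinese.
spoken = synthesize(translate(recognize_speech("hello"), "zh"), "Rashid")
```

The point of the sketch is just the composition: each stage’s output is the next stage’s input, which is why an error early in the chain (a misrecognized word) propagates all the way to the synthesized Chinese.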
During the keynote address, Rashid also invited researcher Bongshin Lee onstage to give a demo of a new application called SketchInsight, which is a presentation tool for a large touchscreen display. The demo was a short mock talk on energy consumption in which Lee was able to create charts and edit displays with ease to tell a story. It’s hard to do justice to how smoothly the presentation flowed, but you can check it out at ~42:00 in the keynote address. What really struck me was that it seems like such a convenient way to give presentations: you still have the same freedom as drawing on a whiteboard, but you can also incorporate computer-generated graphs and diagrams on the fly.
Afterwards the audience was turned loose to explore the exhibits on the demo floor. These included new technology for automatically generating test questions, using one’s hands instead of a mouse as a computer interface (as mentioned earlier), and about twenty others. A particularly appealing new app was one that removes the shakiness from a video while filming. As an example, the app took as input a video shot from a highway overpass, scanning passing traffic from left to right. The raw video was so shaky that watching it made me feel stressed and nauseated. After running through the app, the video looked like it had been shot from a tripod by an experienced operator smoothly changing the angle. What I found most impressive was that the app was somehow able to accurately distinguish intended movement (scanning left to right) from accidental movement (random shaking), preserving what the camera was trying to capture but removing everything else.
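I don’t know what MSR’s stabilizer actually does, but one common way to make that separation is to treat the camera’s trajectory as a signal and low-pass filter it: a slow, deliberate pan survives the filter, while high-frequency shake does not. A minimal sketch of the idea (everything here is illustrative, not their method):

```python
import random

def moving_average(path, window=5):
    """Smooth a 1-D camera trajectory with a centered moving average,
    a simple low-pass filter."""
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        lo = max(0, i - half)
        hi = min(len(path), i + half + 1)
        smoothed.append(sum(path[lo:hi]) / (hi - lo))
    return smoothed

# Simulated horizontal camera positions: a steady left-to-right pan
# (intended motion) plus random hand shake (accidental motion).
random.seed(0)
raw_path = [2.0 * t + random.uniform(-3, 3) for t in range(60)]

# The smoothed path estimates the intended pan; the per-frame correction
# is the shift needed to cancel the residual jitter.
intended = moving_average(raw_path, window=9)
corrections = [smooth - raw for raw, smooth in zip(raw_path, intended)]
```

A real stabilizer works on 2-D (or full perspective) motion estimated between frames and then warps each frame by the correction, but the core trick is the same: keep the low-frequency trajectory, cancel the high-frequency residue.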
The day concluded with a talk by principal researcher Bill Buxton on the future of computers and what constitutes great progress. If you had asked me before this talk when touch screen technology was first developed, I probably would have guessed sometime within the last five years. I might have believed you if you had told me the technology had existed since 2000. Maybe. But I would have been way off – Buxton showed us a touch screen watch (with a built-in calculator!) that was released in 1984, four years before I was even born. That’s crazy. So what took us so long to put touch screens on our phones and tablets? The point of the talk was that sometimes great products – ones we can’t seem to live without today – are born when good pre-existing ideas come together in the right way. I mean, who in 1984 needed a touch screen on their watch? Without the Internet, or fancy games, or mobile apps, we wouldn’t need touch screens on our phones either, and touch screen technology might never have become so essential. I think the idea of the talk was that the next big thing is usually a clever combination of tools we already have, rather than something revolutionary we’ve never seen before.