Microsoft’s Craig Mundie on the Future of Computing

Earlier today, Craig Mundie, our Chief Research and Strategy Officer, hosted his fifth TechForum gathering here at Microsoft’s Redmond HQ. Every year, Craig invites a small group of leading tech journalists and bloggers to share an in-depth look at the company’s strategic and technical vision for the future. It’s also an opportunity to showcase some of Microsoft’s latest ideas and prototypes.

This year, Craig was joined by colleagues Don Mattrick, Qi Lu, Ted Kummert and Rick Rashid to discuss and demonstrate how the company is investing in a number of important technologies that will help drive a new era in computing: we are moving beyond the age of personal computers to an exciting new era of personal computing. In the next hour, I’ll have more videos of the demos that accompanied the day; in the meantime, here’s the scoop on what was discussed, and my News Center colleagues have a great photo gallery.

Two key themes ran throughout the day, both of which I’ve touched on over the last year on this blog:


  1. We can realize profound insights from the combination of big data and machine learning
  2. We’re seeing an increased blending of digital and physical

Today, Craig and the team dove much deeper into both areas and explained how they relate to other key trends such as natural user interfaces (NUI) and computers becoming more like helpers. Fourteen demonstrations punctuated these topics. I’ll explain a little below, but look out for videos of some incredible new Microsoft Research (MSR) demos on this blog over the next few hours and days.

Big Data and Machine Learning

As we interact with devices, social networks and the sensors in our world (think GPS, light, heat, motion), we – as individuals and collectively – are generating massive amounts of data, more than any single computer could process. Though we’re doing a better job of capturing, storing and managing this data, we’re only at the beginning of tapping its true potential. Machine learning is the secret sauce here: it’s what transforms the world of big data (see my post with John Platt of MSR for a quick primer: Machine Learning for Dummies). This technology is integral to Bing, for example – more data leads to deeper insights and moves us beyond a search engine to a “doing” engine. Bing’s Autosuggest and flight price prediction are two examples of basic machine learning in action. The Windows Phone text entry system is another – and a testament to the expertise we have in MSR in this field. During TechForum, a number of leading MSR researchers were on hand to show some of their latest applications of machine learning.
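
To make this concrete, here’s a minimal sketch of the idea behind learned text suggestion: count which words follow which in a corpus, then rank candidate next words by frequency. The toy corpus and scoring below are illustrative stand-ins – not how Bing or the Windows Phone text entry system actually work.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def suggest(follows, prev_word, k=3):
    """Return the k words most often seen after prev_word."""
    return [word for word, _ in follows[prev_word.lower()].most_common(k)]

# Toy training data standing in for the mountains of real usage data.
corpus = [
    "machine learning turns big data into insight",
    "big data needs machine learning",
    "data drives machine learning models",
]
model = train_bigrams(corpus)
print(suggest(model, "machine"))  # ['learning']
print(suggest(model, "big"))      # ['data']
```

The real systems learn from vastly more data and far richer signals, but the principle is the same: more data, better suggestions.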


Eric Horvitz demonstrated Lifebrowser, a project that leverages machine learning and reasoning to help people navigate large stores of their own personal information – appointments, photos and activities, including their history of searching and browsing the Web over days, months and years.
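
The post doesn’t detail Lifebrowser’s models, so here’s only a sketch of the general idea: score the events in a personal timeline by weighted features and surface the likely “landmark” memories first. The features and weights below are invented for illustration.

```python
# Hypothetical events with hypothetical features; a real system would learn
# the weights from user behavior rather than hard-coding them.
EVENTS = [
    {"title": "Weekly status meeting",   "photos": 0,  "attendees": 8,   "rare_location": 0},
    {"title": "Daughter's graduation",   "photos": 42, "attendees": 120, "rare_location": 1},
    {"title": "Lunch at the usual cafe", "photos": 1,  "attendees": 2,   "rare_location": 0},
]

WEIGHTS = {"photos": 0.05, "attendees": 0.01, "rare_location": 2.0}

def landmark_score(event):
    """Weighted sum of features; higher means more likely a landmark memory."""
    return sum(WEIGHTS[f] * event[f] for f in WEIGHTS)

# Browse the timeline with the most memorable events surfaced first.
for e in sorted(EVENTS, key=landmark_score, reverse=True):
    print(f"{landmark_score(e):5.2f}  {e['title']}")
```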

Kristin Tolle demonstrated Microsoft Translator Hub, a self-service model for building a highly customized automatic translation service between any two languages. This Azure-based service enables users to upload language data for custom training, and then build and deploy custom translation models. These machine translation services are accessible using the Microsoft Translator APIs or a webpage widget.
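
As a rough idea of what consuming a deployed custom model over HTTP might look like, here’s a hedged sketch. The endpoint URL, parameter names and the id identifying the custom model are placeholders I’ve made up for illustration – consult the Microsoft Translator API documentation for the real contract and authentication.

```python
import urllib.parse
import urllib.request

def translate(text, source, target, model_id, token):
    """Call a (placeholder) translation endpoint with a custom model id."""
    params = urllib.parse.urlencode({
        "text": text,
        "from": source,
        "to": target,
        "model": model_id,  # hypothetical id of the deployed custom model
    })
    url = "https://api.example.com/translate?" + params  # placeholder endpoint
    req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Example call (commented out because the endpoint above is a placeholder):
# print(translate("Hello, world", "en", "mi", "my-custom-model", "<token>"))
```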


Blending of Digital and Physical

As we collect more and more data, it becomes possible to build a digital representation of the real world. Don Mattrick was on hand today, talking about the journey to remove barriers between the digital and physical worlds. We’ll no longer simply sit back and watch television; instead, we will have the option to become part of the show. Beyond the living room, we’re seeing amazing momentum with Kinect: there are over 300 participants in Microsoft’s technology-adoption program using Kinect for Windows. Demonstrations of how Nissan and Whole Foods envision using Kinect in their businesses were shown today, and novel applications have already emerged in hospitals, schools, patient and child therapy, and more.

NUI capabilities such as gesture, touch, and speech will play a pivotal role in this blending of digital and physical – along with an array of other technologies, including machine learning. Again, a number of demonstrations given today showcased our explorations.


Steven Bathiche and his Applied Sciences lab team are no strangers to this blog, and I’m personally an unabashed fan of their quest to build the “magic window.” Today, we saw a few more steps on that journey as the team demonstrated how they’re combining technology such as Kinect, transparent OLED displays and a technology known as “the wedge” to get us closer to the dream.

Andy Wilson of LightSpace fame demonstrated a project called Holoflector, a unique, interactive augmented-reality mirror. Graphics are superimposed correctly on your own reflection to enable a blended-reality experience unlike anything I’ve seen before. He used the combined abilities of Kinect and Windows Phone to infer the position of a phone and render graphics that seem to hover above it.
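
The post doesn’t say how the two sensor streams are combined, but one common way to fuse a coarse absolute fix (the Kinect’s estimate of where your hand is) with a fast relative signal (the phone’s accelerometer) is a complementary filter. This toy version is my assumption for illustration, not the actual Holoflector pipeline.

```python
ALPHA = 0.9  # trust the dead-reckoned phone motion most, nudged by Kinect fixes

def predict(last_pos, velocity, accel, dt):
    """Dead-reckon position from the last estimate using phone acceleration."""
    vel = tuple(v + a * dt for v, a in zip(velocity, accel))
    pos = tuple(p + v * dt for p, v in zip(last_pos, vel))
    return pos, vel

def fuse(kinect_pos, predicted_pos, alpha=ALPHA):
    """Blend the dead-reckoned estimate with the absolute Kinect fix."""
    return tuple(alpha * p + (1 - alpha) * k
                 for p, k in zip(predicted_pos, kinect_pos))

pos, vel = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
# Two made-up frames of (Kinect hand position, phone acceleration), in meters.
for kinect_fix, accel in [((0.01, 0.0, 0.50), (0.1, 0.0, 0.0)),
                          ((0.02, 0.0, 0.50), (0.1, 0.0, 0.0))]:
    pos, vel = predict(pos, vel, accel, dt=1 / 30)
    pos = fuse(kinect_fix, pos)
    print(pos)  # estimated phone position used to render the hovering graphics
```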

A personal favorite is Sasa Junuzovic’s IllumiShare. At first glance, it seemed such an obvious application of technology – but then I realized how tricky it is to achieve. IllumiShare enables remote people to share any physical or digital object on any surface. It’s a low-cost peripheral device that looks and lights like a desk lamp, except IllumiShare shares the lighted surface with someone who may be in a remote location. To do this, it uses a camera-projector pair: the camera captures video of the local workspace and sends it to the remote space, while the projector displays video of the remote workspace onto the local space. It’s incredibly fun and intuitive to use – people can sketch back and forth using real ink and paper, for example. Other potential applications include remote meetings, where attendees can interact with a shared conference room whiteboard regardless of location, or remote play dates, where children can even share the same toys.
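
For the curious, here’s a minimal sketch of that camera-projector loop: capture the local surface, send the video to the peer, and project whatever arrives from the remote surface. The networking is stubbed out with a loopback queue, and a real IllumiShare device also has to align the camera and projector optically – this only illustrates the data flow. Requires OpenCV (pip install opencv-python) and a webcam.

```python
import queue
import cv2

to_remote = queue.Queue()    # stands in for the outbound video stream
from_remote = queue.Queue()  # stands in for the inbound video stream

camera = cv2.VideoCapture(0)  # the camera looking down at the local surface
try:
    while True:
        ok, local_frame = camera.read()
        if not ok:
            break
        to_remote.put(local_frame)        # "send" the local workspace video
        from_remote.put(to_remote.get())  # loopback standing in for the peer
        remote_frame = from_remote.get()
        cv2.imshow("projector", remote_frame)  # stands in for projector output
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    camera.release()
    cv2.destroyAllWindows()
```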



TechForum is a chance to glimpse over the horizon and hear from some of the brightest minds at Microsoft on where technology is headed. There is another, more subtle theme across Microsoft: the progress being made on making the interaction between all of these experiences seamless. Craig talks about this as orchestrated ecosystems. Each device we use today carries a whole ecosystem that supports it – the cloud, a steady stream of applications and updates, storage, operating system updates, user interface, peripherals and data services – all connecting with different aspects of your life and the things you want to get done. Each screen you interact with has its own ecosystem that it needs in order to function. In the future, you won’t have to stitch these together yourself: Microsoft’s goal is to create an orchestrated experience out of these formerly disparate ecosystems. Metro style is one example of this work today, and it will be the hallmark of future computing experiences – truly personal computing.


Now that I’ve whetted your appetite for some of the cool and transformative work taking place at Microsoft, I hope you’ll check out the demonstrations in action. And next week is Microsoft Research TechFest, where I’ll have even more amazing new technology to report on.