Next at Microsoft
http://blogs.microsoft.com/next

How Microsoft computer scientists and researchers are working to “solve” cancer
Tue, 20 Sep 2016
http://blogs.microsoft.com/next/2016/09/19/microsoft-computer-scientists-researchers-working-solve-cancer/

Scientists at Microsoft’s research labs are trying to use computer science to solve one of the most complex and deadly challenges humans face: Cancer.

And, for the most part, they are doing so with algorithms and computers instead of test tubes and beakers.

One team of researchers is using machine learning and natural language processing to help oncologists figure out the most effective, individualized cancer treatments for their patients.

Another is pairing machine learning with computer vision to give radiologists a more detailed understanding of how their patients’ tumors are progressing.

Yet another group of researchers has created powerful algorithms that help scientists understand how cancers develop and what treatments will work best to fight them.

And another team is working on moonshot efforts that could one day allow scientists to program cells to fight diseases, including cancer.

While the individual projects vary widely, they share the core philosophy that success depends on both biologists and computer scientists bringing their expertise to the problem.

“The collaboration between biologists and computer scientists is actually key to making this work,” said Jeannette M. Wing, Microsoft’s corporate vice president in charge of the company’s basic research labs.

To learn about these efforts to solve cancer with the help of algorithms and computers, read the full story.

The post How Microsoft computer scientists and researchers are working to “solve” cancer appeared first on Next at Microsoft.

Microsoft researchers achieve speech recognition milestone
Tue, 13 Sep 2016
http://blogs.microsoft.com/next/2016/09/13/microsoft-researchers-achieve-speech-recognition-milestone/

Microsoft researchers have reached a milestone in the quest for computers to understand speech as well as humans.

Xuedong Huang, the company’s chief speech scientist, reports that in a recent benchmark evaluation against the industry standard Switchboard speech recognition task, Microsoft researchers achieved a word error rate (WER) of 6.3 percent, the lowest in the industry.

In a research paper published Tuesday, the scientists said: “Our best single system achieves an error rate of 6.9% on the NIST 2000 Switchboard set. We believe this is the best performance reported to date for a recognition system not based on system combination. An ensemble of acoustic models advances the state of the art to 6.3% on the Switchboard test data.”

This past weekend at Interspeech, an international conference on speech communication and technology held in San Francisco, IBM said it had achieved a WER of 6.6 percent. Twenty years ago, the best published research system had a WER of greater than 43 percent.
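Word error rate, the metric behind these milestones, is the minimum number of word substitutions, insertions and deletions needed to turn the recognizer's output into the reference transcript, divided by the number of reference words. A minimal sketch of the standard edit-distance computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed with Levenshtein edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
# 1 deletion over 6 reference words, roughly 0.167
```

A 6.3 percent WER means about one word in sixteen is wrong relative to the reference transcript.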

“This new milestone benefited from a wide range of new technologies developed by the AI community from many different organizations over the past 20 years,” Huang said.

Some researchers now believe these technologies could soon reach a point where computers can understand the words people are saying about as well as another person would. That prospect aligns with Microsoft’s strategy to provide more personal computing experiences through technologies such as its Cortana personal assistant, Skype Translator and speech- and language-related cognitive services. The speech research is also significant to Microsoft’s overall artificial intelligence (AI) strategy of providing systems that can anticipate users’ needs instead of merely responding to their commands, and to the company’s broader ambition of providing intelligent systems that can see, hear, speak and even understand, augmenting how humans work today.

Both IBM and Microsoft cite the advent of deep neural networks, which are inspired by the biological processes of the brain, as a key reason for advances in speech recognition. Computer scientists have for decades been trying to train computer systems to do things like recognize images and comprehend speech, but until recently those systems were plagued with inaccuracies.

Neural networks are built in a series of layers. Earlier this year, Microsoft researchers won the ImageNet computer vision challenge with a deep residual neural net system that used a new kind of cross-layer network connection.

Another critical component of the Microsoft researchers’ recent success is the Computational Network Toolkit (CNTK). CNTK implements sophisticated optimizations that enable deep learning algorithms to run an order of magnitude faster than before. A key step forward was a breakthrough in parallel training on graphics processing units, or GPUs.

Although GPUs were designed for computer graphics, researchers have in recent years found that they also can be ideal for processing complex algorithms like the ones used to understand speech. CNTK is already used by the team behind Microsoft’s virtual assistant, Cortana. By combining CNTK and GPU clusters, Cortana’s speech training can now ingest 10 times more data in the same amount of time.

Geoffrey Zweig, principal researcher and manager of Microsoft’s Speech & Dialog research group, led the Switchboard speech recognition effort. He attributes the company’s industry-leading speech recognition results to the skills of its researchers, whose work produced new training algorithms, highly optimized convolutional and recurrent neural net models, and tools like CNTK.

“The research team we’ve assembled brings to bear a century of industrial speech R&D experience to push the state of the art in speech recognition technologies,” Zweig said.

Xuedong Huang (Photography by Scott Eklund/Red Box Pictures)

Huang adds that the speech recognition milestone is a significant marker on Microsoft’s journey to deliver the best AI solutions for its customers. One component of that AI strategy is conversation as a platform (CaaP); Microsoft outlined its CaaP strategy at the company’s annual developer conference earlier this year. At that event, CEO Satya Nadella said CaaP could have as profound an impact on our computing experiences as previous shifts, such as graphical user interfaces, the web or mobile.

“It’s a simple concept, yet it’s very powerful in its impact.  It is about taking the power of human language and applying it more pervasively to all of our computing,” Nadella said.

Related:

Follow Richard Eckel on Twitter

The post Microsoft researchers achieve speech recognition milestone appeared first on Next at Microsoft.

Microsoft researcher translates defense intelligence to business intelligence
Mon, 15 Aug 2016
http://blogs.microsoft.com/next/2016/08/15/microsoft-researcher-translates-defense-intelligence-business-intelligence/

On June 7, 2010, Christopher White attended a kickoff meeting in suburban Washington, D.C., for a project to rapidly develop and deploy big data analytics and visualization tools to aid the war effort in Afghanistan.

“In Chris’s mind, he was going to come to D.C. for two weeks during the summer, work on this program he literally didn’t know anything about, and that’s it,” says Randy Garrett, who was the program manager for Project Nexus 7 at the Defense Advanced Research Projects Agency (DARPA).

“Well, it didn’t quite turn out that way.”

White, an expert in training computers to extract information from troves of digitally processed data, had just finished his first year as a postdoctoral fellow at Harvard University. His advisor was a DARPA contractor, which gave White a sought-after opportunity to transfer computer science research into real-world applications. The process just happened much more quickly than he anticipated.

A mere three months after the kickoff meeting, White was on a plane to Afghanistan to brief the top U.S. military commander along with the general’s senior staff and one of the most senior intelligence officers in the world on the tools he was developing.

“He was able to show things about Afghanistan that no one had ever seen before,” says Garrett, who is now senior vice president of technology at IronNet Cybersecurity.

Connecting past to present
White is hesitant to discuss his time in Afghanistan. Much of it remains classified, he says, and is only tangentially related to the research he is doing now on Microsoft’s business intelligence platform, Power BI.

But when nudged, he leans back from a laptop running a demonstration of Power BI’s new brand and campaign management solution template for Twitter, released Monday, and reluctantly agrees to provide a few details connecting his past to the present.

“The challenge of the work in Afghanistan was like the big data problem in general – there are a lot of data coming in from different places: from the air, from people wearing sensors, from vehicles, from the news. And the challenge was making that data useful to the warfighter in context,” says White, now a principal researcher within Microsoft’s research organization.

Those contexts range from an Army general wanting to understand how the war is affecting countrywide economic development to a soldier on patrol wanting to know whether a roadside bomb is likely in a specific quadrant of a city.

While the contexts vary across space and time, the data used to understand them are similar, White says. He and his collaborators built tools to exploit the myriad flows of data in ways that provide decision makers a sense of what is going on – from their point of view.

White is now helping teams do the same for business intelligence through Power BI. Whether an executive is projecting quarterly earnings or a store manager wants to know how yesterday’s news will affect foot traffic today, they can use similar tools to assess data and make decisions.

“Those tools include interfaces to data, artificial intelligence services that transform data, and infrastructures that can serve data,” White says.

DARPA-style management
By 2012, White was a program manager at DARPA back in suburban D.C. There he created and ran the agency’s big data program, XDATA, to develop computational techniques and software tools for processing and analyzing large, imperfect and incomplete datasets for defense activities.

He also created the Open Catalog for dissemination of publicly funded fundamental research including papers, source code, software and data.

The second program he developed, Memex, was a suite of tools to help local law enforcement agencies extract and visualize information about illicit activities such as human trafficking, drug smuggling and arms shipments from the deepest, darkest corners of the internet.

“You can start to put together a mosaic of the activities and see where the money flows and where other goods flow,” says Norman Whitaker, a former deputy office director at DARPA who is currently a distinguished scientist and managing director of Special Projects for Microsoft Research NExT.

The Memex work was featured in national media ranging from 60 Minutes and the Wall Street Journal to a TEDx talk that White gave at Oklahoma State University, where he earned an undergraduate degree in electrical engineering before earning his master’s degree and PhD from Johns Hopkins University.

The leadership qualities White exhibited in making Memex successful caught Whitaker’s attention, and he helped recruit his former DARPA colleague to join Microsoft’s research organization, which was pivoting to focus more resources on projects with the potential for profound impact on the company, its products and customers.

The project-focused approach within Whitaker’s Microsoft research organization is similar to how DARPA operates – program managers oversee budgets, negotiate contracts and talk to customers in addition to applying their technical expertise in highly advanced fields.

Democratization of technology
Microsoft’s global reach appeals to White’s interest in translating advanced computer science research into applications that don’t require a PhD to understand and use.

He joined Whitaker’s special projects team in early 2015 and set out to identify the applications best suited for his expertise – where he could have the greatest impact.

The team elected to focus on bringing artificial intelligence to users through Power BI, helping businesses make sense of their own business data, much of it private and closely guarded.

“Every business has its own data,” White says. “We want to give them tools so they can do things with their own data that they just couldn’t do before.”

The team has already released seven interfaces that help users visualize and interact with their data. The interfaces can be used individually to solve particular business problems, or in combination to solve broader ones. The solution templates are designed for users who want a plug-and-play option.

The transition from DARPA to Microsoft, White says, has required a different way to think about and approach solving big data problems.

At DARPA, he explains, the data processing and visualization tools he built were narrowly focused and precise like a laser cutter. Now, at Microsoft, he is building tableware – knives and forks. “Although they may not be as sharp,” he says, “everyone can use them and they allow everyone to eat.”

Related:

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

The post Microsoft researcher translates defense intelligence to business intelligence appeared first on Next at Microsoft.

Microsoft Pix gives the iPhone camera an artificial brain
Wed, 27 Jul 2016
http://blogs.microsoft.com/next/2016/07/27/microsoft-pix-gives-iphone-camera-artificial-brain/

Microsoft released an iPhone app on Wednesday that puts the skill of a professional photographer in your pocket.

Microsoft Pix captures a burst of 10 frames with each shutter click – some from before the tap – and uses artificial intelligence to select up to three of the best, most distinctive shots. Before the remaining frames are deleted, the app uses data from the entire burst to remove noise, and then intelligently brightens faces, beautifies skin and adjusts the color and tone.
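Microsoft has not published how Pix ranks frames. As a rough illustration of the general idea, a burst selector can score each frame with a simple focus measure and keep the top few; the scoring heuristic below is an assumption for illustration, not Pix's actual algorithm:

```python
def sharpness(frame):
    """Crude focus measure: mean squared difference between horizontally
    adjacent pixels. Sharper frames have stronger local contrast."""
    total, count = 0, 0
    for row in frame:                    # frame = list of rows of pixel values
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count

def pick_best(burst, keep=3):
    """Return up to `keep` frames of the burst, sharpest first."""
    return sorted(burst, key=sharpness, reverse=True)[:keep]

blurry = [[5, 5, 5], [5, 5, 5]]          # flat grayscale patch: no contrast
sharp = [[0, 9, 0], [9, 0, 9]]           # high-contrast patch
print(pick_best([blurry, sharp], keep=1) == [sharp])
```

A production selector would combine many such signals (faces, eyes open, exposure) rather than sharpness alone.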

While the app is selecting and enhancing the best of the burst, another set of algorithms sorts through the frames to determine whether any motion – such as a person’s hair tousled by the wind or the cascade of a waterfall in the background – can be looped for a Harry Potter-esque effect called Live Image.

To learn more about the artificial intelligence Microsoft Pix puts in the iPhone camera, read the full story.

The post Microsoft Pix gives the iPhone camera an artificial brain appeared first on Next at Microsoft.

Project Malmo, which lets researchers use Minecraft for AI research, makes public debut
Fri, 08 Jul 2016
http://blogs.microsoft.com/next/2016/07/07/project-malmo-lets-researchers-use-minecraft-ai-research-makes-public-debut/

Microsoft has made Project Malmo, a platform that uses the world of Minecraft as a testing ground for advanced artificial intelligence research, available for novice to experienced programmers on GitHub via an open-source license.

The system, which had until now only been open to a small group of computer scientists in private preview, is primarily designed to help researchers develop sophisticated, more general artificial intelligence, or AI, that can do things like learn, hold conversations, make decisions and complete complex tasks.

That’s key to creating systems that can augment human intelligence — and eventually help us with everything from cooking and doing laundry to driving and performing lifesaving tasks in an operating room.

Katja Hofmann, a researcher in Microsoft’s Cambridge, UK, research lab, who leads the development of Project Malmo, said the system will help researchers develop new techniques and approaches to reinforcement learning. That’s an area of AI in which agents learn how to complete a task by being given a lot of room for trial and error and then being rewarded when they make the right decision.
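The trial-and-error, reward-driven loop Hofmann describes can be made concrete with tabular Q-learning, one of the simplest reinforcement learning algorithms. This sketch uses a hypothetical five-cell corridor world rather than Project Malmo's actual API; the agent is rewarded only when it reaches the goal cell:

```python
import random

# Toy corridor: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.3       # learning rate, discount, exploration

random.seed(0)
for _ in range(1000):                       # episodes of trial and error
    s = 0
    for _ in range(100):                    # cap episode length
        # Explore randomly sometimes; otherwise act greedily on current estimates
        a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge estimate toward reward + discounted best next value
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
        if done:
            break

policy = [q[s].index(max(q[s])) for s in range(N_STATES)]
print(policy)  # states 0..3 should come to prefer action 1 (move right)
```

The agent is never told the goal's location; the preference for moving right emerges purely from rewarded experience, which is exactly the learning dynamic Project Malmo is built to study at much larger scale.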

“We’re trying to put out the tools that will allow people to make progress on those really, really hard research questions,” Hofmann said.

For example, computer scientists have gotten exceptionally good at creating tools that can understand the words we say, whether we’re asking a gadget for directions or navigating an automated customer service line.

But when it comes to actually comprehending the meaning of those audio waves – well, in most cases a baby could do better.

“We’ve trained the artificial intelligence to identify patterns in the dictation, but the underlying technology doesn’t have any understanding of what those words mean,” Hofmann said. “They’re just statistical patterns, and there’s no connection to any experience.”

Microsoft researchers working on Project Malmo include, from top left, Fernando Diaz, Evelyne Viegas, David Bignell, Alekh Agarwal, Matthew Johnson, Akshay Krishnamurthy, Katja Hofmann and Tim Hutton. (Photography by Scott Eklund/Red Box Pictures)

Beyond understanding to comprehension
Teaching AI agents to comprehend humans in the same way we comprehend each other is one of the core goals of advanced artificial intelligence research. With Project Malmo’s public launch, the team has added functionality that will let computer scientists create bots that can learn to talk to each other, and to people.

Project Malmo also can be used to teach AI agents to do crafting – using tools and resources to build things like a table or a sword – and to learn how to get around on their own without falling down a hill or into a lava pit. They also can learn to build with blocks, navigate mazes and do any number of other tasks that mimic the types of things we might want AI to one day do in real life.

The researchers who have been part of Project Malmo’s private preview say Minecraft, with its rich, immersive world and endless possibilities for collaboration and exploration, is ideally suited for general AI research.

“Minecraft is very close to the real world in many ways,” said Jose Hernandez-Orallo, a professor at the Technical University of Valencia, Spain, who has been part of the private preview. “There are so many possibilities.”

Doing this kind of research requires a lot of trial and error, with small and incremental victories along the way. That’s why Project Malmo’s public launch also includes another new feature: overclocking, the ability to run experiments faster than the usual pace of Minecraft’s world.

Evelyne Viegas, director of AI outreach at Microsoft Research, said that will allow researchers to get results, and make adjustments, more quickly.

“It’s accelerating the pace of those experiments,” she said.

A standard for measuring progress
The AI researchers who have gotten a sneak peek at Project Malmo say another key advantage of the system is that it lets researchers compare their progress against the work of others, by seeing how well their theories perform in the same environment.

Hernandez-Orallo said AI researchers are often developing their own systems for testing their theories and algorithms. That allows them to solve isolated problems, but it can be tough to know how those results compare to, or would complement, the work of others.

With a system like Project Malmo, he said researchers can test their systems in the same Minecraft setting. The ability to use the same testing ground “is music to my ears,” said Hernandez-Orallo, who has a particular interest in AI evaluation and is spending the summer at Microsoft’s UK lab so he can work directly with the Project Malmo researchers.

The open-source environment also allows researchers to much more easily collaborate, sharing research insights and bringing their findings together.

“There’s no question that it vastly speeds up the research process,” said Matthew Johnson, the development lead on Project Malmo, who also works in Microsoft’s Cambridge, UK, lab.

All coders welcome
Hofmann and her team created Project Malmo to help seasoned AI researchers conduct their research. But they’ve been pleasantly surprised to find that everyone from tweens with an early passion for programming to professors trying to train the next generation of AI researchers wants to work with it as well.

Viegas said even novice coders can use the system.

“You need to know how to program, but you don’t need to be an advanced programmer,” she said.

The Project Malmo platform consists of a mod for the Java version of Minecraft and code that helps AI agents sense and act within the Minecraft environment. The two components can run on Windows, Linux or Mac OS, and programmers can use most popular programming languages.

The team also has heard from several professors who want to incorporate Project Malmo into their lesson plans.

That makes sense. Hernandez-Orallo said his students – who may well spend their free time playing Minecraft – are going to be a lot more excited by an assignment using Project Malmo than by one that asks them to work with a more generic algorithm pulled from a research paper.

“This is going to have an impact in education, at least at the university level,” he said.

Johnson said they are already seeing people produce academic research based on Project Malmo, and that’s the core reason for doing a project like this. But he concedes that it’s also fun to imagine that a more mainstream audience might want to check it out.

“If I come across some YouTube video showing off some exciting new functionality enabled by our mod, that would make my day,” he said.

Related:

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

The post Project Malmo, which lets researchers use Minecraft for AI research, makes public debut appeared first on Next at Microsoft.

Microsoft and University of Washington researchers set record for DNA storage
Thu, 07 Jul 2016
http://blogs.microsoft.com/next/2016/07/07/microsoft-university-washington-researchers-set-record-dna-storage/

Researchers at Microsoft and the University of Washington have reached an early but important milestone in DNA storage by storing a record 200 megabytes of data on the molecular strands.

The impressive part is not just how much data they were able to encode onto synthetic DNA and then decode. It’s also the space they were able to store it in.

Once encoded, the data occupied a spot in a test tube “much smaller than the tip of a pencil,” said Douglas Carmean, the partner architect at Microsoft overseeing the project.

Think of the amount of data in a big data center compressed into a few sugar cubes. Or all the publicly accessible data on the Internet slipped into a shoebox. That is the promise of DNA storage – once scientists are able to scale the technology and overcome a series of technical hurdles.

Digital data from more than 600 basic smartphones can be stored in the faint pink smear of DNA at the end of this test tube. Photo by Tara Brown Photography/University of Washington.

The Microsoft-UW team stored digital versions of works of art (including a high-definition video by the band OK Go), the Universal Declaration of Human Rights in more than 100 languages, the top 100 books of Project Gutenberg and the nonprofit Crop Trust’s seed database on DNA strands.

Demand for data storage is growing exponentially, and the capacity of existing storage media is not keeping pace. That’s making it hard for organizations that need to store a lot of data – such as hospitals with vast databases of patient data or companies with lots of video footage – to keep up. It means information is being lost, a problem that will only worsen without a new solution.

DNA could be the answer.

It has several advantages as a storage medium. It’s compact and durable – capable of lasting for a very long time if kept in good conditions (DNA from woolly mammoths has been recovered several thousand years after they went extinct, for instance) – and will always be current, the researchers believe.

“As long as there is DNA-based life on the planet, we’ll be interested in reading it,” said Karin Strauss, the principal Microsoft researcher on the project. “So it’s eternally relevant.”

This explains why the Microsoft-UW team is just one of a number of research groups around the globe pursuing the potential of DNA as a vast digital attic.

The researchers acknowledge they have a long way to go.

Luis Henrique Ceze, a UW associate professor of computer science and engineering and the university’s principal researcher on the project, said the biotechnology industry has made big advances in both “synthesizing” (encoding) and “sequencing” (decoding) DNA in recent years. Even so, he said, the team still has a long way to go to make DNA viable as an archival technology.

But the researchers are upbeat.

They note that their diverse team of computer scientists, computer architects and molecular biologists has already increased storage capacity a thousandfold in the past year. And they believe they can make big advances in speed by applying computer science principles like error correction to the process.

Carmean, who was involved in the development of Intel’s microprocessor architecture beginning in 1989, puts it this way:

“It’s one of those serendipitous partnerships where a strong understanding of processors and computation married with molecular biology experts has the potential of producing major breakthroughs.”

To get an idea of how the Microsoft-UW team does its work, flash back to high school biology and recall that DNA – or deoxyribonucleic acid – is a molecule that contains the biological instructions used in the growth, development, functioning and reproduction of all known living organisms.

“DNA is an amazing information storage molecule that encodes data about how a living system works. We’re repurposing that capacity to store digital data — pictures, videos, documents,” said Ceze, who is conducting research in the team’s Molecular Information Systems Lab (MISL), which is housed in a basement on the University of Washington campus. “This is one important example of the potential of borrowing from nature to build better computer systems.”

Storing digital data on DNA works like this:

First the data is translated from 1s and 0s into the “letters” of the four nucleotide bases of a DNA strand — (A)denine, (C)ytosine, (G)uanine and (T)hymine.
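At its simplest, each base can carry two bits. The actual Microsoft-UW encoding is more elaborate, adding redundancy and avoiding problematic sequences such as long runs of a single base, so the following is a simplified illustration of the core translation, not the team's scheme:

```python
# Simplest possible bits-to-bases mapping: 2 bits per nucleotide.
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA letter sequence, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping: bases back to bits, bits back to bytes."""
    bits = "".join(BITS_FOR[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                 # CAGACGGC
print(decode(strand))         # b'Hi'
```

At this density every byte becomes four bases, which is why even large archives collapse into a barely visible smear at the bottom of a test tube.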

Karin Strauss. Photo by Scott Eklund/Red Box Pictures

Then they have vendor Twist Bioscience “translate those letters, which are still in electronic form, into the molecules themselves, and send them back,” Strauss said. “It’s essentially a test tube and you can barely see what’s in it. It looks like a little bit of salt was dried in the bottom.”

Reading the data uses a biotech tweak to random access memory (RAM), another concept borrowed from computer science. The team uses polymerase chain reaction (PCR), a technique molecular biologists routinely use to manipulate DNA, to multiply or “amplify” the strands it wants to recover. Once the concentration of the desired snippets has been sharply increased, the researchers take a sample, sequence the DNA to decode it and then run error correction computations.
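As a loose software analogy for that random-access step (not the actual biochemistry), think of each strand carrying an address tag and of amplification as a filter that massively copies only the matching strands; the tags and payloads here are made up:

```python
# Toy model of PCR-style random access: every stored strand begins with a
# file-specific primer tag; "amplifying" with that primer selects, and
# massively copies, only the matching strands.
pool = [
    ("PRIMER_A", "ACGTACGT"),   # fragment of file A
    ("PRIMER_B", "TTGGCCAA"),   # fragment of file B
    ("PRIMER_A", "GGGGTTTT"),   # another fragment of file A
]

def amplify(pool, primer, copies=1000):
    """Return many copies of just the strands whose tag matches the primer."""
    selected = [payload for tag, payload in pool if tag == primer]
    return selected * copies

sample = amplify(pool, "PRIMER_A")
print(len(sample))            # 2000: only file A's two fragments, multiplied
```

After amplification, sampling the tube overwhelmingly yields the requested file's strands, which is what makes selective sequencing practical.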

The lab tour complete, one question needed asking: Why an OK Go video?

“We like that a lot because there are many parallels with the work,” Strauss said with a laugh. “They’re very innovative and are bringing different things from different areas into their field and we feel we are doing something very similar.”

Related:

Learn more about Microsoft’s DNA storage project

Read the University of Washington story and Q&A on the project

Read the Twist Bioscience press release

Follow Karin Strauss on Twitter

New York Times: Data storage on DNA can keep it safe for centuries

Mike Brunker is a freelance writer and editor. Follow him on Twitter.

 

The post Microsoft and University of Washington researchers set record for DNA storage appeared first on Next at Microsoft.

]]>
http://blogs.microsoft.com/next/2016/07/07/microsoft-university-washington-researchers-set-record-dna-storage/feed/ 0
Talking with your hands: How Microsoft researchers are moving beyond keyboard and mouse http://blogs.microsoft.com/next/2016/06/26/talking-hands-microsoft-researchers-moving-beyond-keyboard-mouse/ http://blogs.microsoft.com/next/2016/06/26/talking-hands-microsoft-researchers-moving-beyond-keyboard-mouse/#respond Mon, 27 Jun 2016 04:00:09 +0000 http://blogs.microsoft.com/next/?p=57052 Kfir Karmon imagines a world in which a person putting together a presentation can add a quote or move an image with a flick of the wrist instead of a … Read more »

The post Talking with your hands: How Microsoft researchers are moving beyond keyboard and mouse appeared first on Next at Microsoft.

]]>
Kfir Karmon imagines a world in which a person putting together a presentation can add a quote or move an image with a flick of the wrist instead of a click of a mouse.

Jamie Shotton envisions a future in which we can easily interact in virtual reality much like we do in actual reality, using our hands for small, sophisticated movements like picking up a tool, pushing a button or squeezing a soft object in front of us.

And Hrvoje Benko sees a way in which those types of advances could be combined with simple physical objects, such as a few buttons on a piece of wood, to recreate complex, immersive simulators – replacing expensive hardware that people use today for those purposes.

Microsoft researchers are looking at a number of ways in which technology can start to recognize detailed hand motion — and engineers can put those breakthroughs to use in a wide variety of fields.

The ultimate goal: Allowing us to interact with technology in more natural ways than ever before.

“How do we interact with things in the real world? Well, we pick them up, we touch them with our fingers, we manipulate them,” said Shotton, a principal researcher in computer vision at Microsoft’s Cambridge, UK, research lab. “We should be able to do exactly the same thing with virtual objects. We should be able to reach out and touch them.”

This kind of technology is still evolving. But the computer scientists and engineers who are working on these projects say they believe they are on the cusp of making hand and gesture recognition tools practical enough for mainstream use, much like many people now use speech recognition to dictate texts or computer vision to recognize faces in photos.

That’s a key step in Microsoft’s broader goal to provide more personal computing experiences by creating technology that can adapt to how people move, speak and see, rather than asking people to adapt to how computers work.

“If we can make vision work reliably, speech work reliably and gesture work reliably, then people designing things like TVs, coffee machines or any of the Internet of Things gadgets will have a range of interaction possibilities,” said Andrew Fitzgibbon, a principal researcher with the computer vision group at the UK lab.

That will be especially important as computing becomes more ubiquitous and increasingly anticipates our needs, as opposed to responding to our commands. To make these kinds of ambient computing systems truly work well, experts say, they must be able to combine all our senses, allowing us to easily communicate with gadgets using speech, vision and body language together – just like we do when communicating with each other.


The team working on hand tracking in Microsoft’s UK lab includes Tom Cashman (top left, standing), Andrew Fitzgibbon, Lucas Bordeaux, John Bronskill, (bottom row) David Sweeney, Jamie Shotton, Federica Bogo. Photo by Jonathan Banks.

Smooth, accurate and easy

To accomplish a component of that vision, Fitzgibbon and other researchers believe the technology must track hand motion precisely and accurately, using as little computing power as possible. That would allow people to use their hands naturally and with ease, and consumer gadgets to respond accordingly.

It’s easier said than done, in large part because the hand itself is so complex. Hands can rotate completely around, and they can do things like ball up into a fist, which means the fingers disappear and the tool needs to make its best guess as to where they’ve gone and what they are doing. Also, a hand is obviously smaller than an entire body, so there’s more detailed motion to track.

The computer vision team’s latest advances in detailed hand tracking, which are being unveiled at two prestigious academic research conferences this summer, combine new breakthroughs in methods for tracking hand movement with an algorithm dating back to the 1940s – when computing power was less available and a lot more expensive. Together, they create a system that can track hands smoothly, quickly and accurately – in real time – but can run on a regular consumer gadget.
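The 1940s-era workhorse is likely damped least-squares fitting of the kind introduced by Kenneth Levenberg in 1944 – an assumption on our part, since the article doesn’t name the algorithm. The idea, in a deliberately scalar sketch with synthetic data, is to fit a model to observations while adaptively blending cheap gradient steps with aggressive Gauss-Newton steps:

```python
import math

# Minimal scalar damped least-squares (Levenberg-style) sketch.
# Fit y = exp(a * x) to synthetic data by minimizing squared residuals;
# real hand trackers fit dozens of joint parameters to depth pixels.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]  # synthetic data, true a = 0.7

def residuals(a):
    return [math.exp(a * x) - y for x, y in zip(xs, ys)]

def fit(a=0.0, lam=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(a)
        J = [x * math.exp(a * x) for x in xs]        # d(residual)/da
        g = sum(j * ri for j, ri in zip(J, r))       # gradient
        h = sum(j * j for j in J) + lam              # damped curvature
        step = -g / h
        if sum(ri ** 2 for ri in residuals(a + step)) < sum(ri ** 2 for ri in r):
            a += step
            lam *= 0.5   # good step: behave more like fast Gauss-Newton
        else:
            lam *= 2.0   # bad step: fall back toward cautious gradient descent
    return a

a = fit()  # converges to roughly 0.7
```

The damping term `lam` is what makes the method cheap and robust enough, scaled up, to run at frame rate on consumer hardware.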

“We’re getting to the point that the accuracy is such that the user can start to feel like the avatar hand is their real hand,” Shotton said.

The system, still a research project for now, can track detailed hand motion with a virtual reality headset or without it, allowing the user to poke a soft, stuffed bunny, turn a knob or move a dial.

What’s more, the system lets you see what your hands are doing, fixing a common and befuddling disconnect that happens when people are interacting with virtual reality but can’t see their own hands.

From dolphins to detailed hand motion

The project, called Handpose, relies on a wealth of basic computer vision research. For example, a research project that Fitzgibbon and his colleague Tom Cashman worked on years earlier, looking at how to make 2D images of dolphins into 3D virtual objects, proved useful in developing the Handpose technology.

The researchers say that’s an example of how a long-term commitment to this kind of research can pay off in unexpected ways.

Although hand movement recognition isn’t being used broadly by consumers yet, Shotton said that he thinks the technology is now getting good enough that people will start to integrate it into mainstream experiences.

“This has been a research topic for many, many years, but I think now is the time where we’re going to see real, usable, deployable solutions for this,” Shotton said.

A virtual sense of touch

The researchers behind Handpose say they have been surprised to find that the lack of haptics – the sense of actually touching something – isn’t as big a barrier as they expected when people test systems like theirs, which let users manipulate virtual objects with their hands.

That’s partly because of how they are designing the virtual world. For example, the researchers created virtual controls that are thin enough that you can touch your fingers together to get an experience of touching something hard. They also developed sensory experiences that allow people to push against something soft and pliant rather than hard and unforgiving, which appears to feel more authentic.

The researchers say they also notice that other senses, such as sight and sound, can convince people they are touching something real when they are not – especially once the systems are good enough to work in real time.


Andy Wilson, left, and Hrvoje Benko are among the researchers working on haptic retargeting. Photo by Jeremy Mashburn.

Still, Benko, a senior researcher in the natural interaction group at Microsoft’s Redmond, Washington, lab, noted that as virtual reality gets more sophisticated, it may become harder to trick the body into immersing itself in the experience without having anything at all to touch.

Benko said he and his lab colleagues have been working on ways to use limited real-world objects to make immersive virtual reality experiences seem more like what humans expect from the real world.

“There’s some value in haptics and so we’re trying to understand what that is,” said Andy Wilson, a principal researcher who directs Microsoft Research’s natural interaction group.

But that doesn’t mean the entire virtual world needs to be recreated. Eyal Ofek, a senior researcher in the natural interaction group, said people can be fooled into believing things about a virtual world if that world is presented with enough cues to mimic reality.

For example, let’s say you want to build a structure using toy blocks in a virtual environment. Using the haptic retargeting research project the Microsoft team created, one building block could be used over and over again, with the virtual environment shifting to give the impression you are stacking those blocks higher and higher even as, in reality, you are placing the same one on the same plane.
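A one-dimensional sketch of that haptic-retargeting warp (with invented positions and a hypothetical function name, not the team’s actual implementation) shows how simple the trick can be: the rendered hand is offset a little more with every bit of real travel, so the real hand meets the physical block at exactly the moment the virtual hand meets the virtual one:

```python
# Toy 1-D haptic retargeting: as the real hand travels from its start
# toward the single physical block, the rendered virtual hand is smoothly
# offset so it arrives at the *virtual* target instead -- both "touch"
# at the same instant. Positions are in meters along one axis.
def virtual_hand(real_pos, start, physical_target, virtual_target):
    span = physical_target - start
    progress = max(0.0, min(1.0, (real_pos - start) / span))  # 0 at start, 1 at contact
    offset = (virtual_target - physical_target) * progress     # blend in the warp
    return real_pos + offset

# Real block at 0.5 m; this stack level's virtual block rendered at 0.8 m.
virtual_hand(0.0, 0.0, 0.5, 0.8)   # at the start: no warp yet
virtual_hand(0.25, 0.0, 0.5, 0.8)  # halfway: half the offset applied
virtual_hand(0.5, 0.0, 0.5, 0.8)   # contact: virtual hand reaches 0.8 m
```

Because the offset grows gradually, the mismatch between seen and felt motion stays below what people notice, which is why one block can stand in for a whole stack.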

The same logic could be applied to a more complex simulator, using just a couple of simple knobs and buttons to recreate a complex system for practicing landing an airplane or other complex maneuvers.

“A single physical object can now simulate multiple instances in the virtual world,” Ofek said.

The language of gesture

Let’s say you’re talking to a colleague over Skype and you’re ready to end the call. What if, instead of using your mouse or keyboard to click a button, you could simply make the movement of hanging up the phone?

Need to lock your computer screen quickly? What if, instead of scrambling to close windows and hit keyboard shortcuts, you simply reach out and mimic the gesture of turning a key in a lock?

Researchers and engineers in Microsoft’s Advanced Technologies Lab in Israel are investigating ways in which developers could create tools that would allow people to communicate with their computer utilizing the same kind of hand gestures they use in everyday life.

The goal of the research project, called Project Prague, is to provide developers with a set of basic, ready-made hand gestures, such as the one that switches a computer off. It also makes it easy for developers to create customized gestures for their own apps or other products, with very little additional programming or expertise.

The system, which uses machine learning to recognize motions, runs on an ordinary retail 3D camera.

“It’s a super easy experience for the developers and for the end user,” said Karmon, a principal engineering manager who is the project’s lead.

To build the system, the researchers recorded millions of hand images and then used that data set to train the technology to recognize every possible hand pose and motion.

Eyal Krupka, a principal applied researcher and head of the lab’s computer vision and machine learning research, said the technology then uses hundreds of micro artificial intelligence units, each analyzing a single aspect of the user’s hand, to accurately interpret each gesture.
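As a loose illustration of that ensemble idea – every name, feature and threshold below is invented for the sketch, not Microsoft’s actual architecture – many tiny per-aspect scorers can be averaged into a single gesture decision:

```python
# Hypothetical sketch: each "micro unit" examines one aspect of the
# tracked hand and votes; a gesture fires only when enough of its
# units agree. Features and thresholds are invented for illustration.

def thumb_extended(hand):  return 1.0 if hand["thumb_angle"] > 45 else 0.0
def fingers_curled(hand):  return 1.0 if hand["avg_finger_curl"] > 0.8 else 0.0
def palm_sideways(hand):   return 1.0 if hand["palm_yaw"] < 15 else 0.0

MICRO_UNITS = {
    "thumbs_up": [thumb_extended, fingers_curled, palm_sideways],
}

def classify(hand, threshold=0.66):
    scores = {gesture: sum(unit(hand) for unit in units) / len(units)
              for gesture, units in MICRO_UNITS.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score >= threshold else None

hand = {"thumb_angle": 60, "avg_finger_curl": 0.9, "palm_yaw": 5}
classify(hand)  # -> "thumbs_up"
```

Splitting the decision across many small, specialized units is one way to keep per-frame inference fast enough for interactive use.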

The end result is a system that doesn’t just recognize a person’s hand, but also understands that person’s intent.

Adi Diamant, who directs the Advanced Technologies Lab, said that when people think about hand and gesture recognition, they often think about ways it can be used for gaming or entertainment. But he also sees great potential for using gesture for everyday work tasks, like designing and giving presentations, flipping through spreadsheets, editing e-mails and browsing the web.

People also could use them for more creative tasks, like creating art or making music.

Diamant said these types of experiences are only possible because of advances in fields including machine learning and computer vision, which have allowed his team to create a system that gives people a more natural way of interacting with technology.

“We chose a project that we knew was a tough challenge because we knew there was a huge demand for hand gesture,” he said.

Related:

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

 


]]>
http://blogs.microsoft.com/next/2016/06/26/talking-hands-microsoft-researchers-moving-beyond-keyboard-mouse/feed/ 0
How web search data might help diagnose serious illness earlier http://blogs.microsoft.com/next/2016/06/07/how-web-search-data-might-help-diagnose-serious-illness-earlier/ http://blogs.microsoft.com/next/2016/06/07/how-web-search-data-might-help-diagnose-serious-illness-earlier/#respond Tue, 07 Jun 2016 20:27:57 +0000 http://blogs.microsoft.com/next/?p=56890 Early diagnosis is key to gaining the upper hand against a wide range of diseases. Now Microsoft researchers are suggesting that records of the topics that people search for on … Read more »

The post How web search data might help diagnose serious illness earlier appeared first on Next at Microsoft.

]]>
Early diagnosis is key to gaining the upper hand against a wide range of diseases. Now Microsoft researchers are suggesting that records of the topics that people search for on the Internet could one day prove as useful as an X-ray or MRI in detecting some illnesses before it’s too late.

The potential of using engagement with search engines to predict an eventual diagnosis – and possibly buy critical time for a medical response – is demonstrated in a new study by Microsoft researchers Eric Horvitz and Ryen White, along with former Microsoft intern and Columbia University doctoral candidate John Paparrizos.

In a paper published Tuesday in the Journal of Oncology Practice, the trio detailed how they used anonymized Bing search logs to identify people whose queries provided strong evidence that they had recently been diagnosed with pancreatic cancer – a particularly deadly and fast-spreading cancer that is frequently caught too late to cure. Then they retroactively analyzed searches for symptoms of the disease over many months prior to identify patterns of queries most likely to signal an eventual diagnosis.

“We find that signals about patterns of queries in search logs can predict the future appearance of queries that are highly suggestive of a diagnosis of pancreatic adenocarcinoma” – the medical term for pancreatic cancer – the authors wrote. “We show specifically that we can identify 5 to 15 percent of cases while preserving extremely low false positive rates,” as low as 1 in 100,000.
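Conceptually – with entirely invented weights and thresholds, not the statistical model in the paper – the screening step reduces to scoring a user’s earlier symptom queries and alerting only above a bar set high enough to keep false positives rare:

```python
# Illustrative sketch (all numbers invented): score users by the
# symptom-query patterns seen months before any diagnostic query,
# then flag only those above a deliberately high threshold.

SYMPTOM_WEIGHTS = {   # hypothetical weights for pancreatic-cancer symptoms
    "itchy skin": 0.4,
    "unexplained weight loss": 1.2,
    "light colored stool": 1.5,
    "yellowing eyes": 2.0,
    "back pain": 0.6,
}

def risk_score(query_log):
    return sum(SYMPTOM_WEIGHTS.get(q, 0.0) for q in query_log)

def screen(users, threshold):
    # A high threshold trades recall (the study's 5 to 15 percent of
    # cases) for a very low false-positive rate.
    return [uid for uid, log in users.items() if risk_score(log) >= threshold]

users = {
    "u1": ["yellowing eyes", "unexplained weight loss", "light colored stool"],
    "u2": ["back pain"],
}
screen(users, threshold=4.0)  # -> ["u1"]
```

The actual study works with far richer temporal features, but the trade-off it reports is exactly this one: how many true cases can be caught at a chosen false-positive rate.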

The researchers used large-scale anonymized data and complied with best practices in ethics and privacy for the study.


Eric Horvitz, a technical fellow and managing director of Microsoft’s Redmond, Washington, research lab (Photography by Scott Eklund/Red Box Pictures)

Horvitz, a technical fellow and managing director of Microsoft’s research lab in Redmond, Washington, said the method shows the feasibility of a new form of screening that could ultimately allow patients and their physicians to diagnose pancreatic cancer and begin treatment weeks or months earlier than they otherwise would have. That’s an important advantage in fighting a disease with a very low survival rate if it isn’t caught early.

Pancreatic cancer — the fourth leading cause of cancer death in the United States – was in many ways the ideal subject for the study because it typically produces a series of subtle symptoms, like itchy skin, weight loss, light-colored stools, patterns of back pain and a slight yellowing of the eyes and skin that often don’t prompt a patient to seek medical attention.

Horvitz, an artificial intelligence expert who holds both a Ph.D. and an MD from Stanford University, said the researchers found that queries entered to seek answers about that set of symptoms can serve as an early warning for the onset of illness.

But Horvitz said that he and White, chief technology officer for Microsoft Health and an information retrieval expert, believe that analysis of search queries could have broad applications.

“We are excited about applying this analytical pipeline to other devastating and hard-to-detect diseases,” Horvitz said.

Horvitz and White emphasize that the research was done as a proof of concept that such a “different kind of sensor network or monitoring system” is possible. The researchers said Microsoft has no plans to develop any products linked to the discovery.

Instead, the authors said, they hope the positive results from the feasibility study will excite the broader medical community and generate discussion about how such a screening methodology might be used. They suggest it would likely involve analyzing anonymized data and giving people who opt in some sort of notification about health risks, either directly or through their doctors, in the event algorithms detected a pattern of search queries that could signal a health concern.

But White said the search analysis would not be a medical opinion.

“The goal is not to perform the diagnosis,” he said. “The goal is to help those at highest risk to engage with medical professionals who can actually make the true diagnosis.”

White and Horvitz said they wanted to take the results of the pancreatic cancer study directly to those in a position to do something with the results, which is why they chose to first publish in a medical journal.

“I guess I’m at a point now in my career where I’m not interested in the potential for impact,” White said of the decision. “I actually want to have impact. I would like to see the medical community pick this up and take it as a technology, and work with us to enable this type of screening.”

And Horvitz, who said he lost his best childhood friend and, soon after, a close colleague in computer science to pancreatic cancer, said the stakes are too high to delay getting the word out.

“People are being diagnosed too late,” he said. “We believe that these results frame a new approach to pre-screening or screening, but there’s work to do to go from the feasibility study to real-world fielding.”

Horvitz and White have previously teamed up on other search-related medical studies – notably a 2008 analysis of “cyberchondria” – or “medical anxiety that is stimulated by symptom searches on the web,” as Horvitz puts it – and analyses of search logs that identify adverse effects of medications.

Related:

Decades of computer vision research, one ‘Swiss Army knife’

From gaming system to medical breakthrough

Eric Horvitz receives AAAI-Allen Newell Award

Follow Eric Horvitz on Twitter

Article on data, privacy, and the greater good

Mike Brunker is a freelance writer and editor. Follow him on Twitter.


]]>
http://blogs.microsoft.com/next/2016/06/07/how-web-search-data-might-help-diagnose-serious-illness-earlier/feed/ 0
Eric Horvitz receives ACM-AAAI Allen Newell Award for groundbreaking artificial intelligence work http://blogs.microsoft.com/next/2016/04/27/eric-horvitz-receives-acm-aaai-allen-newell-award-groundbreaking-artificial-intelligence-work/ http://blogs.microsoft.com/next/2016/04/27/eric-horvitz-receives-acm-aaai-allen-newell-award-groundbreaking-artificial-intelligence-work/#respond Wed, 27 Apr 2016 13:00:12 +0000 http://blogs.microsoft.com/next/?p=56836 In his many years as an artificial intelligence researcher, Eric Horvitz has worked on everything from systems that help determine what’s funny or surprising to those that know when to … Read more »

The post Eric Horvitz receives ACM-AAAI Allen Newell Award for groundbreaking artificial intelligence work appeared first on Next at Microsoft.

]]>
In his many years as an artificial intelligence researcher, Eric Horvitz has worked on everything from systems that help determine what’s funny or surprising to those that know when to help us remember what we need to do at work.

On Wednesday, Horvitz, a technical fellow and managing director of Microsoft’s Redmond, Washington, research lab, received the ACM-AAAI Allen Newell Award for groundbreaking contributions in artificial intelligence and human-computer interaction. The award honors Horvitz’s substantial theoretical work as well as his persistent focus on using those discoveries as the basis for practical applications that make our lives easier and more productive.

Harry Shum, the executive vice president of Microsoft’s technology and research group, said Horvitz epitomizes a style of research that is unique to places like Microsoft because it is focused on having an impact in both the research and industry domains.

“People talk about basic research and applied research. What we are doing here is Microsoft research,” Shum said. “It’s not just about doing theoretical research and writing more papers. It’s also about applying those technologies in Microsoft products.”

Jeannette M. Wing, the corporate vice president overseeing Microsoft’s core research labs, said that Horvitz’s research has had an impact on countless research projects and commercial products, ranging from systems that help make our commutes easier to ones that seek to prevent hospital readmissions.

“His impact is immeasurable,” she said.

But Wing noted that Horvitz also has been able to step back and see the big picture, becoming a visionary and a thought leader in a field that is growing increasingly complex.

“He asks big questions: How do our minds work? What computational principles and architectures underlie thinking and intelligent behavior? How can computational models perform amidst real-world complexities such as sustainability and development? How can we deploy computation systems that deliver value to people and society?” Wing said.

The Newell award is given to a researcher whose work has breadth within computer science or spans multiple disciplines. Horvitz’s work has combined multiple computer science disciplines and he has been a leader in exploring the interrelationships between artificial intelligence and fields like decision science, cognitive science and neuroscience.

The award comes at a time when the artificial intelligence field is exploding.

Until a few years ago, artificial intelligence wasn’t often part of the public consciousness, except when it came up in a science fiction novel or blockbuster movie.

Now, thanks to breakthroughs in the availability of data and our ability to process it, artificial intelligence applications are suddenly everywhere, including systems that can understand and translate language, recognize and caption photos and do increasingly smart and useful things for us.

During a time often referred to as the “AI winter,” Horvitz was among the nation’s hard-charging researchers plugging away at the difficult work of laying the groundwork for these systems and thinking about how they would work in the real world. Although artificial intelligence was out of the spotlight during that time, researchers were making major breakthroughs in bringing together the logical methods of traditional artificial intelligence work with research in fields such as decision science. This led to new applications that used both logic and probability.

Horvitz said that many of his research projects over the last fifteen years – which have looked at things like what we are most likely to remember or forget and when it’s worth it to interrupt someone while working – foreshadow practical applications that he expects to see in the future.

“To me, Eric is such an epic example of those brilliant researchers who have this huge confidence — not over-confidence, but just confidence — to keep pushing forward,” Shum said.

Horvitz’s attention to both research advances and practical applications of artificial intelligence research began while he was pursuing his Ph.D. on principles of bounded rationality. That’s the idea that when people or computers make decisions, they are limited by time, available information and their reasoning abilities.

Horvitz said he was interested in how computing systems immersed in the real world could make the best decisions in time-critical situations. His research looked at the value of continuing to think about a problem versus stopping early with a good enough answer.
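A toy version of that stopping calculation (all numbers invented) captures the trade-off: an anytime algorithm whose answer improves with diminishing returns should stop as soon as the marginal gain from more thinking no longer covers the cost of the delay:

```python
# Bounded-rationality sketch: answer quality improves with diminishing
# returns as thinking time grows, while delay carries a fixed per-step
# cost. The best policy stops where net value peaks, well before the
# answer is perfect. All curves and costs are invented for illustration.

def quality(t):
    """Answer quality after t units of computation (diminishing returns)."""
    return 1.0 - 0.5 ** t

def net_value(t, time_cost=0.05):
    """Value of acting at time t: quality earned minus delay paid."""
    return quality(t) - time_cost * t

def best_stopping_time(horizon=20):
    return max(range(horizon + 1), key=net_value)

best_stopping_time()  # -> 4: past this point, more thinking costs more than it gains
```

In an emergency room or a mission control center, the `time_cost` term is what turns “keep computing” into a decision with real stakes.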

His research considered emergency room scenarios, in which artificial intelligence systems could help doctors with timely recommendations. The work foreshadowed his later research on using similar ideas to guide solutions to some of the hardest challenges known in artificial intelligence, in the realm of theorem proving.

Horvitz also showed how artificial intelligence systems could be used to better understand people’s goals and intentions and provide the best information to decision makers. He collaborated with NASA’s Mission Control Center on how to provide flight engineers with the most valuable information about space shuttle systems when the engineers are under intense time pressure.

To solve these problems — and many more after — Horvitz brought together artificial intelligence methods with ideas drawn from disciplines like probability theory, decision theory and studies of bounded rationality.

In the future, Horvitz said he sees vast possibilities for how artificial intelligence can help to augment human intelligence.

“There’s a huge opportunity ahead in building systems that work closely with people to help them to achieve their goals,” Horvitz said.

Related:


]]>
http://blogs.microsoft.com/next/2016/04/27/eric-horvitz-receives-acm-aaai-allen-newell-award-groundbreaking-artificial-intelligence-work/feed/ 0
You might not see the next wave of breakthrough tech, but it’s all around you http://blogs.microsoft.com/next/2016/04/18/you-might-not-see-the-next-wave-of-breakthrough-tech-but-its-all-around-you/ http://blogs.microsoft.com/next/2016/04/18/you-might-not-see-the-next-wave-of-breakthrough-tech-but-its-all-around-you/#respond Mon, 18 Apr 2016 15:05:51 +0000 http://blogs.microsoft.com/next/?p=56797 Think of your favorite pieces of technology. These are the things that you use every day for work and play, and pretty much can’t live without. Chances are, at least one … Read more »

The post You might not see the next wave of breakthrough tech, but it’s all around you appeared first on Next at Microsoft.

]]>

Think of your favorite pieces of technology. These are the things that you use every day for work and play, and pretty much can’t live without.

Chances are, at least one of them is a gadget – your phone, maybe, or your gaming console.

But if you really think about it, chances also are good that many of your most beloved technologies are no longer made of plastic, metal and glass.

Maybe it’s a streaming video service you use to binge-watch “Game of Thrones,” or an app that lets you track your steps and calories so you can fit into those jeans you wore back in high school. Maybe it’s a virtual assistant that helps you remember where your meetings are and when you need to take your medicine, or an e-reader that lets you get lost in your favorite book via your phone, tablet or even car speakers.

Perhaps, quietly and without even realizing it, your most beloved technologies have gone from being things you hold to services you rely on, and that exist everywhere and nowhere. Instead of the gadgets themselves, they are tools that you expect to be able to use on any type of gadget: Your phone, your PC, maybe even your TV.

They are part of what Harry Shum, executive vice president in charge of Microsoft’s Technology and Research division, refers to as an “invisible revolution.”

“We are on the cusp of creating a world in which technology is increasingly pervasive but is also increasingly invisible,” Shum said.

Read the full story.


]]>
http://blogs.microsoft.com/next/2016/04/18/you-might-not-see-the-next-wave-of-breakthrough-tech-but-its-all-around-you/feed/ 0