Next at Microsoft

As machine learning breakthroughs abound, researchers look to democratize benefits

When Robert Schapire started studying theoretical machine learning in graduate school three decades ago, the field was so obscure that what is today a major international conference was just a tiny workshop, so small that even graduate students were routinely excluded.

Machine learning still isn’t exactly a topic of discussion at most family dinner tables. But it has become one of the hottest fields in computer science, turning once-obscure academic gatherings like the upcoming Annual Conference on Neural Information Processing Systems in Barcelona, Spain, into a sold-out affair attended by thousands of computer scientists from top corporations and academic institutions.

“It’s been really something to see this field develop, and to see things that seemed impossible become possible in my lifetime,” said Schapire, a principal researcher in Microsoft’s New York City research lab whose machine learning research is widely used in the field.

The NIPS conference, which starts Monday, is so popular because machine learning has quickly become an indispensable tool for developing technology that consumers and businesses want, need and love. Machine learning is the basis for technology that can translate speech in real time, help doctors read radiology scans and even recognize emotions on people’s faces. Machine learning also helps you sort the spam out of your inbox and remember your day’s tasks.

Robert Schapire is a prominent AI researcher. Photo by John Brecher.

It’s a far cry from Schapire’s early days in the field, when he said some of the hard problems were things like getting a computer to accurately read handwritten digits.

“Bit by bit, we’ve really been building this field from the bottom up, starting with basic problems,” Schapire said. “Machine learning has become applicable to such a huge array of problems. It’s really amazing.”

Related: Here’s a look at how Microsoft is participating in NIPS

Along the way, researchers say the field has benefited from people who dreamed about big breakthroughs with real-world benefits, such as the ability to create technology that can recognize words in a conversation as well as a person.

“Somehow the field of machine learning has been very fortunate in that we’ve had brilliant theorists who had a very practical outlook on things,” said Alekh Agarwal, a researcher in Microsoft’s New York lab.

Democratizing machine learning
Schapire, Agarwal and their colleagues at Microsoft and elsewhere say this is just the beginning.  With the work they are presenting at NIPS and beyond, they are investigating ways to make machine learning even more useful for – and accessible to – a broader array of people.

The Microsoft researchers say they are at the forefront of efforts to democratize machine learning by making it easier for developers and engineers without a machine learning background to take advantage of these breakthroughs. That puts them on the cutting edge of finding ways to share the benefits of these systems widely with the rest of us.

“Machine learning has traditionally been a field where if you didn’t have a Ph.D. you’d be at a loss – and if you did have a Ph.D. you might still be at a loss,” said John Langford, a principal researcher in Microsoft’s New York lab. “We’re trying to make these things useful to someone who’s a programmer without a lot of machine learning expertise.”

John Langford is working on ways to democratize AI. Photo by John Brecher.

Machine learning is useful in part because it can help people make predictions about anything from how many servers they’ll need to deploy for a certain task to what news article a person might want to read. One of Langford’s recent projects looks at ways to make multiple predictions less burdensome, by creating systems that systematically eliminate common data errors in applications that use reinforcement learning and structured learning.

With reinforcement learning, researchers aim to get systems to use trial and error to figure out how to achieve a task. For example, a program could learn how to win at backgammon by playing against itself over and over again, picking up on what worked and what didn’t over the course of those many games. The system is given very little outside guidance to make those decisions. Instead, decisions it makes early in the process can then affect how it succeeds later on.

Reinforcement learning is a counterpart to supervised learning, in which systems get better at doing things as they are fed more relevant data. For example, a supervised machine learning tool may learn to recognize faces in pictures after being shown a training set containing a huge array of faces.
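To make the trial-and-error idea concrete, here is a minimal, self-contained sketch of reinforcement learning on a toy problem. None of it comes from Microsoft’s systems; the corridor task, the epsilon-greedy exploration and the tabular Q-learning update are generic textbook choices, used purely to illustrate how an agent learns from delayed reward with little outside guidance.

```python
import random

# Toy corridor: states 0..4, start at 0, reward only upon reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # step left or right

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    for step in range(50):
        # Explore occasionally; otherwise pick the action that looks best so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if state == GOAL:
            break

# After training, the greedy policy should step right from every non-goal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

Early moves earn no reward at all; the agent only discovers their value once later moves start paying off, which is exactly the point about early decisions affecting later success.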

Helping with decision making
In the more recent reinforcement approach Langford has been working on, the system also gets partial credit for choosing actions that are partially correct, making it easier to winnow down to the right answer.

Microsoft researchers say the resulting decision service, a cloud-based system built on that approach, is such an exciting breakthrough because it can help systems make decisions using context.

“When you make a decision, you usually have some idea of how good it was,” said Siddhartha Sen, a researcher in the New York lab. “Here’s an opportunity to use machine learning to optimize those decisions.”

The researchers say the cloud-based system, which is available in preview, is groundbreaking in part because it can be applied to so many different situations.

For example, it could be used by a news service that wanted to personalize content recommendations, a mobile health app that could personalize fitness activities or a cloud provider looking to optimize server resources.
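A rough sketch of that kind of contextual decision loop appears below. It is not the decision service’s actual API; it is a hypothetical epsilon-greedy contextual bandit in which the system picks an article for a given reader context, observes feedback only for the article it actually showed, and updates its estimate for that choice. The contexts, articles and click rates are invented for the example.

```python
import random

CONTEXTS = ["sports_fan", "tech_fan"]
ARTICLES = ["sports", "tech", "politics"]

# Hypothetical click probabilities, unknown to the learner; used only to simulate feedback.
TRUE_CLICK_RATE = {
    ("sports_fan", "sports"): 0.6, ("sports_fan", "tech"): 0.2, ("sports_fan", "politics"): 0.1,
    ("tech_fan", "sports"): 0.1, ("tech_fan", "tech"): 0.7, ("tech_fan", "politics"): 0.2,
}

# Running reward estimate for each (context, article) pair, and how often it was shown.
estimate = {k: 0.0 for k in TRUE_CLICK_RATE}
shown = {k: 0 for k in TRUE_CLICK_RATE}
epsilon = 0.1

for t in range(20000):
    context = random.choice(CONTEXTS)
    # Explore sometimes; otherwise exploit the current best estimate for this context.
    if random.random() < epsilon:
        article = random.choice(ARTICLES)
    else:
        article = max(ARTICLES, key=lambda a: estimate[(context, a)])
    # Partial (bandit) feedback: we only observe the outcome of the article we showed.
    reward = 1.0 if random.random() < TRUE_CLICK_RATE[(context, article)] else 0.0
    shown[(context, article)] += 1
    estimate[(context, article)] += (reward - estimate[(context, article)]) / shown[(context, article)]

for context in CONTEXTS:
    best = max(ARTICLES, key=lambda a: estimate[(context, a)])
    print(context, "->", best)
```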

Sen said one key goal for the testing service is to make it easy and accessible for people who may not be able to build these kinds of machine learning techniques on their own.

“The way it’s democratizing machine learning is by making it very easy to interface with the system,” said Sen, who will help run a workshop on the intersection of machine learning and systems design at NIPS. “We tried to hide all the difficult steps.”

Microsoft has been developing the building blocks for a system like the decision service for years. But the system’s current abilities would not have been possible even a few years ago, said Sarah Bird, who began working on it as a postdoctoral researcher in Microsoft’s New York lab.

Bird, who is now a technical advisor in Microsoft’s Azure division, said systems like these are improving rapidly because all the elements needed for machine learning – the computing power of the cloud, the algorithms and the data – are improving quickly, and at the same time.

“It’s really amazing to watch all the pieces we need mature in parallel,” she said. “It’s a fun time for consumers and developers and researchers.”

Fast pace of change
Many researchers say reinforcement learning holds a lot of promise because it could be used to create artificial intelligence systems that would be able to make the type of independent and complex decisions that could truly augment and complement human abilities.

Researchers caution that they are still in the early stages of finding success with reinforcement learning, but they say what they are seeing so far is promising.

“The sense of what’s achievable is constantly changing, and that’s what makes it so exciting to me,” said Katja Hofmann, a researcher in Microsoft’s Cambridge, UK, research lab. Hofmann has led development of Project Malmo, which uses Minecraft as the testing ground for reinforcement learning, and which will be demonstrated at NIPS.

Together with her colleagues, Hofmann has most recently been looking at ways that artificial intelligence agents can learn to do several tasks, rather than just one, and can apply the experience of how they completed one task to another. For example, an artificial intelligence navigating one Minecraft space may learn to recognize lava, and then use that knowledge to avoid lava in another place. Some of this research is being presented at the European Workshop on Reinforcement Learning, which is co-located with NIPS.

Related:

Learn more about Microsoft’s presence at NIPS

Chris Bishop: Substance, not hype, powers AI excitement at premier machine learning conference

Jennifer Wortman Vaughan: Making better use of the crowd

Follow Sarah Bird and Katja Hofmann on Twitter

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

 

Microsoft doubles down on quantum computing bet

Microsoft is doubling down on its commitment to the tantalizing field of quantum computing, making a strong bet that it is possible to create a scalable quantum computer using what is called a topological qubit.

Longtime Microsoft executive Todd Holmdahl – who has a history of successfully bringing seemingly magical research projects to life as products – will lead the scientific and engineering effort to create scalable quantum hardware and software.

“I think we’re at an inflection point in which we are ready to go from research to engineering,” said Holmdahl, who is corporate vice president of Microsoft’s quantum program.

Holmdahl, who previously played a key role in the development of the Xbox, Kinect and HoloLens, noted that success is never guaranteed. But, he said, he thinks the company’s long investment in quantum research has been fruitful enough that there’s a clear roadmap to a scalable quantum computer.

“None of these things are a given,” Holmdahl said. “But you have to take some amount of risk in order to make a big impact in the world, and I think we’re at the point now that we have the opportunity to do that.”

Microsoft has hired two leaders in the field of quantum computing, Leo Kouwenhoven and Charles Marcus. The company also will soon bring on two other leaders in the field, Matthias Troyer and David Reilly.

Marcus is the Villum Kann Rasmussen Professor at the Niels Bohr Institute at the University of Copenhagen and director of the Danish National Research Foundation-sponsored Center for Quantum Devices.

Kouwenhoven is a distinguished professor at Delft University of Technology in the Netherlands and was founding director of QuTech, the Advanced Research Center on Quantum Technologies.

From left, Leo Kouwenhoven and Charles Marcus attend Microsoft’s 2014 Station Q conference in Santa Barbara, California. (Photo by Brian Smale)

Marcus and Kouwenhoven have been collaborating with Microsoft’s quantum team for years, with Microsoft funding an increasing share of the topological qubit research in their labs.  After they join Microsoft, they will retain their academic titles and affiliation to their host universities, continue to run their university research groups and contribute to building dedicated Microsoft quantum labs at their respective universities.

Both researchers say that joining Microsoft is the best path to ensuring that their breakthroughs can help create a scalable quantum computer.

“It’s very exciting,” Kouwenhoven said. “I started working on this as a student way back, and at that time we had not a clue that this could ever be used for anything practical.”

Kouwenhoven’s collaboration with Microsoft began casually enough, after a visit to the company’s Santa Barbara, California, lab and a “nice walk along the beach” with Michael Freedman, the lab’s director and a specialist in topological mathematics.

After years of scientific collaboration, Kouwenhoven said, they’ve reached a point where they can benefit from an engineer’s perspective on how to bring the work to reality.

“The engineering will also help move the science forward,” Kouwenhoven said.

That’s important because Microsoft isn’t just interested in creating one qubit that can work in one perfect lab environment – what Marcus calls “a demonstration of quantum information.”

Instead, the company hopes to create dependable tools that scientists without a quantum background can use to solve some of the world’s most difficult problems. By doing that, they believe they will help usher in a “quantum economy” that could revolutionize industries such as medicine and materials science.

Marcus – whose collaboration with Microsoft began almost by happenstance when he happened to be seated next to Microsoft’s Freedman at a dinner some years ago – said he came to realize that a quantum economy would never be realized unless the scientists and the engineers began partnering more closely.

“I knew that to get over the hump and get to the point where you started to be able to create machines that have never existed before, it was necessary to change the way we did business,” Marcus said. “We need scientists, engineers of all sorts, technicians, programmers, all working on the same team.”

That effort includes bringing other longtime collaborators on board.

Troyer is currently a professor of computational physics at ETH Zurich in Switzerland, one of the leading universities in the world.  Among his areas of expertise are simulations of quantum materials, the testing of quantum devices, optimization of quantum algorithms and the development of software for quantum computers.

Reilly, an experimental physicist, is a professor and director of the Centre for Quantum Machines at the University of Sydney in Australia. He leads a team of physicists and engineers working on the challenges of scaling up quantum systems.

Making the building blocks of a quantum computer
Microsoft’s approach to building a quantum computer is based on a type of qubit – or unit of quantum information – called a topological qubit.

Qubits are the key building block to a quantum computer. Using qubits, researchers believe that quantum computers could very quickly process multiple solutions to a problem at the same time, rather than sequentially.

One of the biggest challenges to building a working quantum computer is how picky qubits can be. A quantum system can only remain in a quantum state when it’s not being disturbed, so quantum computers are built to operate in extremely cold, carefully isolated environments.

The Microsoft team believes that topological qubits are better able to withstand challenges such as heat or electrical noise, allowing them to remain in a quantum state longer. That, in turn, makes them much more practical and effective.

“A topological design is less impacted by changes in its environment,” Holmdahl said.
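For readers who want a feel for why qubits are powerful at all, the toy classical simulation below tracks an idealized three-qubit register as a vector of 2³ complex amplitudes and puts it into an equal superposition with Hadamard gates. It is only an illustration of ordinary qubit math on a conventional computer; it says nothing about topological qubits or Microsoft’s hardware, and the exponential size of the state vector is exactly why such simulations do not scale.

```python
import numpy as np

# A classical (and exponentially expensive) simulation of an idealized n-qubit register.
# The state is a vector of 2**n complex amplitudes; measurement yields one basis state
# with probability equal to the squared magnitude of its amplitude.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # start in |000>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    # Build the full operator as a Kronecker product with the gate at position `target`.
    op = np.array([[1]], dtype=complex)
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2, dtype=complex))
    return op @ state

# Put every qubit into superposition: all 8 basis states now carry equal amplitude.
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

print(np.round(np.abs(state)**2, 3))   # eight equal probabilities of 0.125
```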

At the same time as Microsoft is working to build a quantum computer, it’s also creating the software that could run on it. The goal is to have a system that can begin to efficiently solve complex problems from day one.

“Similar to classical high-performance computing, we need not just hardware but also optimized software,” Troyer said.

To the team, that makes sense: The two systems can work together to solve certain problems, and the research from each can help the other side.

“A quantum computer is much more than the qubits,” Reilly said. “It includes all of the classical hardware systems, interfaces and connections to the outside world.”

An even smarter cloud, and the ability to solve seemingly intractable problems
With effective quantum hardware and software, quantum experts say they could create vast computing power that could address some of the world’s most pressing problems, from climate change and hunger to a multitude of medical challenges.

That’s partly because the computers could emulate physical systems, speeding up things like drug development or our understanding of plant life. Researchers say the intelligent cloud could be exponentially more powerful, similar to how cell phones evolved into smart phones.

“There is a real opportunity to apply these computers to things that I’ll call material sciences of physical systems,” Holmdahl said. “A lot of these problems are intractable on a classical computer, but on a quantum computer we believe that they are tractable in a reasonable period of time.”

Kouwenhoven said that applies to the field of quantum physics itself, such as research into dark matter and other fundamental questions about our understanding of the universe itself.

“I would find it interesting to go back to my science background and use the quantum computer to solve quantum problems,” he said.

The transistor and the ash sucker
Then there’s the vast unknown. Computer scientists will often point out that when scientists invented the very first transistor, they had no way of conceiving of an application like a smart phone.

“My guess is that back in the 40s and 50s, when they were thinking about the first transistor, they didn’t necessarily know how this thing was going to be used. And I think we’re a little bit like that,” Holmdahl said.

One of those inventors was Walter Brattain. He grew up in the same small town of Tonasket, Washington, as Holmdahl. Being a technology history buff, Holmdahl has long been fascinated by Brattain’s life.

With quantum computing, Holmdahl said he sees an opportunity to be among the people who are following in Brattain’s footsteps.

“The opportunity to be at the beginning of the next transistor is not lost on me,” Holmdahl said.

When he took this role, Holmdahl also was thinking about another man who’s had a great influence on his life: his 20-year-old son, who told Holmdahl that if you think you’re one of the smartest guys at the table, you need to find a new table.

“This is definitely a new table for me,” said Holmdahl, a Stanford-educated engineer who now spends his free time reading about things like quantum physics and entanglement.

When Marcus thinks about what a quantum computer could do, he often thinks about an old car his family once had. It was a top-of-the-line car of its day, with all the latest technologies — including a dashboard gadget designed to suck ash directly from a cigarette.

At the time, Marcus has often thought, someone must have believed that was as good as it was ever going to get in car technology.

“Nobody, when they were designing ash suckers, was thinking about self-driving cars,” he said.

The same thing could easily apply to computational power.

“People who think of computation as being completed are in the ash sucker phase,” he said.

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

Microsoft researchers detect lung-cancer risks in web search logs

Smoking cigarettes is the leading cause of lung cancer, the most common cause of cancer death in the world. But nearly 20 percent of lung-cancer diagnoses are made in people who are non-smokers. That means in addition to smoking, geographic, demographic and genetic factors play a role in the devastating disease.

A project from Microsoft’s research labs is exploring the feasibility of using anonymized web search data to learn more about lung-cancer risk factors and provide early warning to people who are candidates for disease screening.

The findings, published Thursday in JAMA Oncology, extend research that team members published last June on the feasibility of using the text of questions people ask search engines to predict diagnoses of pancreatic cancer. The machine-learning method builds on patterns found in the search queries.

“Here, we are not just looking at the text of the queries; we also consider the locations that people are in when they issue these queries and we tie that back to contextual risk factors linked to those locations,” says study co-author Ryen White, chief technology officer for health intelligence at Microsoft Health in Redmond, Washington.

For example, the model developed by the researchers determines the ZIP code where the search was issued and correlates the location data with maps from the U.S. Geological Survey to determine environmental levels of radon gas, a known lung-cancer risk. Census data reveal the average age of homes in each region, which is relevant as older homes are poorly ventilated and thus can trap radon.

Knowing ZIP codes also helps the researchers infer users’ socioeconomic status and race, providing additional clues on cancer risk. According to the Centers for Disease Control and Prevention, people living below the poverty level have higher rates of smoking than the general population, and death rates for people with cancer are highest among black Americans.

In addition, the model uses algorithms to determine searchers’ likely gender and age from patterns of queries. Searches from the same mobile device within hours of each other from ZIP codes separated by hundreds, or thousands, of miles could indicate air travel.

Taken together, these data “allow us to discover new risk factors, things that might not have been thought of in the past that might actually be important,” White says. “We looked at air travel, for example, as one of the factors that might be tied to a higher likelihood.”
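As an illustration of the kind of signal the researchers describe, the sketch below flags possible air travel when two queries from the same anonymized device arrive within a few hours of each other from ZIP codes hundreds of miles apart. The ZIP-code coordinates, thresholds and log entries here are invented for the example; they are not the study’s actual features or data.

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

# Hypothetical ZIP-code centroids (latitude, longitude); a real system would use a lookup table.
ZIP_COORDS = {"98052": (47.68, -122.12),   # Redmond, WA
              "10011": (40.74, -74.00)}    # New York, NY

def distance_miles(a, b):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))

def likely_air_travel(log, min_miles=500, max_hours=12):
    """Flag consecutive queries from one device that are far apart in space but close in time."""
    flags = []
    for (t1, zip1), (t2, zip2) in zip(log, log[1:]):
        hours = (t2 - t1).total_seconds() / 3600
        miles = distance_miles(ZIP_COORDS[zip1], ZIP_COORDS[zip2])
        if miles >= min_miles and hours <= max_hours:
            flags.append((t1, zip1, t2, zip2, round(miles)))
    return flags

# Illustrative log for one anonymized device: two queries six hours apart, coasts apart.
log = [(datetime(2016, 11, 1, 8, 0), "98052"),
       (datetime(2016, 11, 1, 14, 0), "10011")]
print(likely_air_travel(log))
```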

Ryen White, chief technology officer for Microsoft Health and an information retrieval expert. (Photography by Scott Eklund/Red Box Pictures)

The findings are associations, not evidence of a cause, emphasizes study co-author Eric Horvitz, technical fellow and managing director of Microsoft’s research lab in Redmond. But, he adds, they can suggest directions for future clinical studies on lung cancer.

Take plane travel, for example. Horvitz says that although it was useful in their predictive models, the researchers have yet to confirm a causal connection between plane travel and lung cancer. “However, the result frames a hypothesis that can be pursued and studied. Same with how radon gas and older homes link up,” he says.

To develop the model, Horvitz and White identified so-called experiential queries such as “I have just been diagnosed with lung cancer,” which are then followed up with behaviors that provide evidence of a recent diagnosis, such as multiple queries on treatment options and side effects.

The model then looks back in time at the anonymized logs for searches that might signal a pending diagnosis. These include searches about symptoms such as hoarseness and others that provide evidence of known and potential risk factors such as cigarette use, locations linked to elevated radon levels and frequent long-distance travel.

The researchers ran the model on the anonymized logs of nearly 5 million searchers and found that it can identify 1.5 percent to nearly 40 percent of searchers a year in advance of when they will input queries consistent with a lung cancer diagnosis. The percentages vary as the sensitivity of the model is shifted to limit false positive rates from 1 in 100,000 to 1 in 1,000. The approach performs more effectively for searchers identified as high risk, such as living in a ZIP code with elevated radon levels.
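In spirit, the two-step procedure looks something like the sketch below: find an “experiential” diagnosis query with a simple pattern, then look back through that searcher’s earlier anonymized queries for symptom and risk-factor signals and measure the lead time. The pattern, signal list and example log are illustrative stand-ins, not the study’s curated phrases or real data.

```python
import re
from datetime import datetime, timedelta

# Illustrative patterns only; the study used curated experiential phrases and richer evidence.
DIAGNOSIS_PATTERN = re.compile(r"(i was|i've been|just been) diagnosed with lung cancer", re.I)
EARLY_SIGNALS = ["hoarseness", "coughing up blood", "quit smoking", "radon test kit"]

def analyze(queries):
    """queries: list of (timestamp, text) for one anonymized searcher, oldest first."""
    diagnosis_time = next((t for t, q in queries if DIAGNOSIS_PATTERN.search(q)), None)
    if diagnosis_time is None:
        return None
    # Look back in time for queries that hint at symptoms or known risk factors.
    evidence = [(t, q) for t, q in queries
                if t < diagnosis_time and any(s in q.lower() for s in EARLY_SIGNALS)]
    lead_time = diagnosis_time - evidence[0][0] if evidence else timedelta(0)
    return {"evidence": evidence, "lead_time_days": lead_time.days}

example = [
    (datetime(2015, 9, 3), "radon test kit reviews"),
    (datetime(2016, 2, 10), "persistent hoarseness causes"),
    (datetime(2016, 8, 20), "I have just been diagnosed with lung cancer"),
]
print(analyze(example))
```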

The research, explains Horvitz, who holds both a Ph.D. and MD from Stanford University, is part of a broader and ongoing effort to use the vast aggregations of data compiled from human interactions with the web to help advance clinical medicine.

“People tend to whisper their health concerns into search engines on a regular basis,” he says. “This kind of data can serve as a complement to more formal clinical information.”

The research, he adds, “shows promise for identifying new clinically relevant findings in multiple areas of healthcare.”

The researchers are still discussing how the research might eventually be used. For example, White says, at some point in the future people might consent to having relevant information and inferences from web search logs and other data streams shared with their doctor.

At this point, Horvitz notes, this is purely research. But with publication of the findings in the medical literature, the work could stimulate interest by clinical researchers and inform the development of future screening systems that can catch cancers earlier in their progression.

“The first step,” he says, “is to see if these kinds of things are feasible.”

Related:

Follow Eric Horvitz on Twitter

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

Microsoft researchers release graph that helps machines conceptualize

“Jaguar.”

To most computers, that word printed on an otherwise blank screen is simply a string of characters.

It’s different for people. You see a word associated with a big cat, a large mammal. Given the context of valet parking, it might also bring to mind a luxury brand that is similar to Mercedes and BMW.

Put another way, you have a collection of ideas, or concepts, of what “Jaguar” means and the mental agility to use context to infer which concept the writer of the word intended to convey.

On Tuesday, a team of scientists from Microsoft Research Asia, Microsoft’s research lab in Beijing, China, announced the public release of technology designed to help computers conceptualize in a humanlike fashion.

From left, Lei Ji, Jun Yan and Dawei Zhang of Microsoft Research Asia were key players in the development of Microsoft Concept Graph. (Photo credit: Microsoft.)

The Microsoft Concept Graph, as it is known, is a massive graph of concepts – more than 5.4 million and growing – that machine-learning algorithms are culling from billions of web pages and years’ worth of anonymized search queries.

“We want to provide machines some commonsense, high-level concepts” so that they can better understand, and process, human communication, says Jun Yan, a senior research manager at Microsoft Research Asia, who is working on the project.

Knowledge graphs such as this one are a major component of ongoing efforts in industry and academia to computationally simulate human thinking, which computer scientists argue is a hallmark of true artificial intelligence.

“The limitation of computers is that they do not have commonsense knowledge or semantics. They can only understand the characters of words,” Yan explains. “But with humans it is different. Humans have a lot of background knowledge to understand things.”

Conceptual computing
The research behind the Microsoft Concept Graph has been ongoing for six years. The technology has potential applications that range from keyword advertising and search enhancement to the development of human-like chatbots.

For example, in traditional search advertising, a luxury car company buys a list of keywords related to products it wants to sell, such as various models of sport utility vehicles, or SUVs, Yan explains. When those models are queried, the engine surfaces an ad for the car company.

Using data from the Microsoft Concept Graph, the keyword sales team can also suggest that the car company buy related keywords, such as “upmarket SUV,” “top crossover” and potentially hundreds more.

“This is an opportunity to earn more revenue from the advertiser, and for the advertiser to reach a larger audience,” Yan says.
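Conceptually, the keyword-expansion step can be sketched with a tiny in-memory graph, as below. The handful of hand-written instance-to-concept pairs stands in for the millions of pairs in the real Microsoft Concept Graph, and the lookup logic is only an illustration of the idea, not the production system.

```python
# Toy instance -> concepts mapping standing in for the real Microsoft Concept Graph.
INSTANCE_TO_CONCEPTS = {
    "luxury suv": {"upmarket suv", "premium vehicle"},
    "top crossover": {"upmarket suv"},
    "premium sedan": {"premium vehicle"},
    "apple pie": {"dessert"},
}

def suggest_keywords(seed):
    """Suggest related keywords: the seed's concepts plus other instances of those concepts."""
    seed_concepts = INSTANCE_TO_CONCEPTS.get(seed, set())
    suggestions = set(seed_concepts)
    for instance, concepts in INSTANCE_TO_CONCEPTS.items():
        if instance != seed and concepts & seed_concepts:
            suggestions.add(instance)
    return sorted(suggestions)

print(suggest_keywords("luxury suv"))
# ['premium sedan', 'premium vehicle', 'top crossover', 'upmarket suv']
```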

Daxin Jiang, a China-based principal development manager with Microsoft’s search engine Bing, has collaborated with the Concept Graph team for three years to incorporate conceptualization techniques to improve the ranking and relevance of search results.

For example, the graph recognizes certain phrases as single entities. When “Microsoft Research Asia” is queried, Bing ranks documents with the phrase “Microsoft Research Asia” higher than documents where “Microsoft,” “Research” and “Asia” are separated by additional words or punctuation.

His group is also leveraging the Concept Graph for question answering. For example, the graph can answer the question “What are the Asian developing countries?”

“The Concept Graph scans through web pages and extracts instances that belong to concepts,” Jiang explains. “’Asian developing countries’ is a concept and China, India, etc., are all instances for this concept.”

Learning conceptualization
To create the Microsoft Concept Graph, Yan and colleagues trained a machine-learning algorithm to search through the database of indexed web pages and search queries for word associations linked together by basic, common speech patterns including the phrases “such as” and “is a.”

For example, if a web page contains the text “an animal, such as a dog,” the algorithm selects “animal” as a candidate concept for the instance “dog,” Yan explains. The text “Microsoft is a technology company” results in the instance “Microsoft” paired with the concept “technology company.”

The algorithm also performs a statistical analysis to weed out rare or incorrect instance-concept pairs that arise from semantic ambiguity.

For example, on the first pass, the sentence “domestic animals other than dogs such as cats” produces two results: “cat is a dog” and “cat is a domestic animal,” which are both derived from the pattern “such as.”

As the algorithm processes more and more pages of text, it learns that “cat is a domestic animal” is more frequent than “cat is a dog.” When the frequency difference between the two ambiguous meanings crosses a defined threshold, the algorithm weeds out “cat is a dog.”
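A toy version of this mining-and-filtering step is sketched below: two simple patterns (“such as” and “is a”) pull candidate instance-concept pairs out of a handful of sentences, and a frequency threshold then drops pairs seen only once. The corpus, regular expressions and threshold are illustrative; the real system works over billions of pages with far more careful extraction and statistics.

```python
import re
from collections import Counter

# Tiny toy corpus; the real graph is mined from billions of web pages and search queries.
corpus = [
    "an animal, such as a dog, needs fresh water",
    "domestic animals other than dogs such as cats are popular pets",
    "animals such as cats and dogs live alongside people",
    "many house animals such as cats sleep all day",
    "Microsoft is a technology company",
    "Microsoft is a technology company based in Redmond",
]

pair_counts = Counter()

for sentence in corpus:
    # Pattern 1: "<concept> such as <instance>"
    for concept, instance in re.findall(r"(\w+),? such as (?:a )?(\w+)", sentence):
        pair_counts[(instance, concept)] += 1
    # Pattern 2: "<instance> is a <concept>"
    for instance, concept in re.findall(r"(\w+) is a (\w+(?: \w+)?)", sentence):
        pair_counts[(instance, concept)] += 1

# Keep only pairs seen at least `threshold` times to weed out spurious ones like ("cats", "dogs").
threshold = 2
graph = {pair: n for pair, n in pair_counts.items() if n >= threshold}
print(graph)
```

On such a tiny corpus the threshold also discards rare but correct pairs; at web scale, correct pairs recur often enough to survive while spurious ones do not.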

“We only keep the frequently mentioned things by different people on different webpages,” Yan says. “That way we have confidence in the instance and concept pair.”

Humans, too, are recruited to look over segments of the data for erroneous pairs, which helps improve the quality of the graph.

The result is millions of concepts, ranging from the common “cities” and “musicians” to the rare “wedding dress designers” and “acid blocking heartburn drugs.”

Each concept is linked to a set of instances and described by attributes such as person, thing and object as well as relationships such as located in, friend of and president of.

Tagging model
Along with the Microsoft Concept Graph, the researchers released a related technology called the Microsoft Concept Tagging Model, which automatically maps instances to concepts with a probability score, enabling machines to conceptualize in a humanlike way.

The model is based on a machine-learning algorithm that weights, or scores, matches for a given instance-concept pair. In this way, the most computationally useful concept, a so-called basic-level concept, is ranked highest.

For example, the instance “Microsoft” automatically maps to the concepts “company,” “software company” and “largest OS vendor.” Both “company” and “largest OS vendor” are highly related to Microsoft, but “software company” is the most useful, and thus highest ranked, concept.

Why?

Microsoft is certainly a “company,” but so too are ExxonMobil and McDonald’s, which have little else in common with Microsoft. “Largest OS vendor,” on the other hand, applies only to Microsoft. “Software company” is a concept that relates to Microsoft as well as to similar companies such as IBM, Adobe and Oracle.

In other words, “software company” is specific without being too specific; it is general enough to be related to several other instances, which makes it useful for semantic computation such as performing searches or answering questions.

The accuracy of the model increases as it incorporates the context of surrounding words.

For example, for the sentence, “I want to eat an apple,” the tagging model gives the “fruit” concept more weight, as a person is unlikely to eat the well-known technology company. The weighting is reversed for “I want to visit Apple” since “visit” is more likely associated with “technology company.”

“Based on the context of previous terms, we can distinguish the detail of the concept to further filter out irrelevant concepts,” Yan explains. “When you see ‘eat apple’ we know the high probability thing is the fruit.”
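The effect of context can be sketched with a toy scorer like the one below, which re-weights the candidate concepts for “apple” using hypothetical votes from surrounding words such as “eat” and “visit.” The candidate lists and weights are made up for illustration and are not the tagging model’s real probabilities.

```python
# Toy conceptualization with context, in the spirit of the tagging model described above.
# The candidate concepts and co-occurrence weights below are invented for illustration.
CANDIDATE_CONCEPTS = {"apple": {"fruit": 0.6, "technology company": 0.4}}

# How strongly a context word votes for each concept (hypothetical values).
CONTEXT_VOTES = {
    "eat":   {"fruit": 0.9, "technology company": 0.05},
    "visit": {"fruit": 0.1, "technology company": 0.8},
}

def conceptualize(instance, context_words):
    """Re-weight an instance's candidate concepts using surrounding words, then normalize."""
    scores = dict(CANDIDATE_CONCEPTS[instance])
    for word in context_words:
        for concept, vote in CONTEXT_VOTES.get(word, {}).items():
            scores[concept] *= vote
    total = sum(scores.values())
    return {concept: round(score / total, 3) for concept, score in scores.items()}

print(conceptualize("apple", ["i", "want", "to", "eat", "an"]))    # "fruit" dominates
print(conceptualize("apple", ["i", "want", "to", "visit"]))        # "technology company" dominates
```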

Model release
The public release of the Microsoft Concept Graph and Microsoft Concept Tagging Model are intended to support research on natural language understanding for technologies such as search engines, chatbots and other artificial intelligence systems, according to Yan.

“We want to encourage more people to utilize our fundamental service,” he says.

Yanghua Xiao, an associate professor of computer science at Fudan University in Shanghai, China, for example, is using the graph in his research on enabling machines to understand human language, including natural language questions.

Take, for example, the question: “How many people are there in New York?” which is about the population of a city.

“Whatever the city is, say Shanghai or London, they share the same semantic template,” he notes. “The Concept Graph, which contains facts like ‘New York is a city’ can help us build the template so that the machine can understand the question with the template and answer the question with exact answers.”

The Microsoft Concept Graph and Microsoft Concept Tagging Model are available to download for research purposes. The current release includes the core version of concept data in English mined from billions of web pages and search queries.

Future releases will include conceptualization with context for understanding short and long texts as well as support for Chinese.

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

Microsoft releases beta of Microsoft Cognitive Toolkit for deep learning advances

Microsoft has released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and NVIDIA® GPUs.

The toolkit, previously known as CNTK, was initially developed by computer scientists at Microsoft who wanted a tool to do their own research more quickly and effectively. It quickly moved beyond speech and morphed into an offering that customers, including a leading international appliance maker and Microsoft’s flagship product groups, depend on for a wide variety of deep learning tasks.

“We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

Frank Seide (Photography by Scott Eklund/Red Box Pictures)

The latest version of the toolkit, which is available on GitHub via an open source license, includes new functionality that lets developers use Python or C++ programming languages in working with the toolkit.  With the new version, researchers also can do a type of artificial intelligence work called reinforcement learning.

Finally, the toolkit is able to deliver better performance than previous versions. It’s also faster than other toolkits, especially when working on big datasets across multiple machines. That kind of large-scale deployment is necessary to do the type of deep learning across multiple GPUs that is needed to develop consumer products and professional offerings.

It’s also key to speeding up research breakthroughs. Last week, Microsoft Artificial Intelligence and Research announced that they had, for the first time, created a technology that recognizes words in a conversation as well as a person does. The team credited Microsoft Cognitive Toolkit for vastly improving the speed at which they could reach this milestone.

The team that developed the Microsoft toolkit says the ability to work across multiple servers is a key advantage over other deep learning toolkits, which can see suboptimal performance and accuracy when they start tackling bigger datasets. Microsoft Cognitive Toolkit has built-in algorithms to minimize such degradation of computation.

“One key reason to use Microsoft Cognitive Toolkit is its ability to scale efficiently across multiple GPUs and multiple machines on massive data sets,” said Chris Basoglu, a partner engineering manager at Microsoft who has played a key role in developing the toolkit.

Chris Basoglu (Photography by Scott Eklund/Red Box Pictures)

Microsoft Cognitive Toolkit can easily handle anything from relatively small datasets to very, very large ones, using just one laptop or a series of computers in a data center. It can run on computers that use traditional CPUs or GPUs, which were once mainly associated with graphics-heavy gaming but have proven to be very effective for running the algorithms needed for deep learning.

“Microsoft Cognitive Toolkit represents tight collaboration between Microsoft and NVIDIA to bring advances to the deep learning community,” said Ian Buck, general manager of the Accelerated Computing Group at NVIDIA.  “Compared to the previous version, it delivers almost two times performance boost in scaling to eight Pascal GPUs in an NVIDIA DGX-1™.”

Microsoft Cognitive Toolkit is designed to run on multiple GPUs, including Azure’s GPU offering, which is currently in preview. The toolkit has been optimized to best take advantage of the NVIDIA hardware and Azure networking capabilities that are part of the Azure offering.

Democratizing AI, and its tools
The toolkit is being released at a time when everyone from small startups to major technology companies are seeing the possibilities for using deep learning for things like speech understanding  and image recognition.

Broadly speaking, deep learning is an artificial intelligence technique in which developers and researchers use large amounts of data – called training sets – to teach computer systems to recognize patterns from inputs such as images or sounds.

For example, a deep learning system can be given a training set showing all sorts of pictures of fruits and vegetables, after which it learns to recognize images of fruits and vegetables on its own. It gets better as it gets more data, so each time it encounters a new, weird-looking eggplant or odd-shaped apple, it can refine the algorithm to become even more accurate.

In this example of using Microsoft Cognitive Toolkit to train a speech acoustic model, the model converges to better accuracy as more data is applied.
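That same “more data, better accuracy” pattern can be reproduced with almost any learner. The sketch below is not Microsoft Cognitive Toolkit and not a deep network at all, just a plain logistic-regression classifier trained by gradient descent on synthetic data, but it shows the effect the chart describes: held-out accuracy generally climbs as the training set grows.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 20
true_w = rng.normal(size=dim)   # hidden "ground truth" used only to generate labels

def make_data(n):
    """Synthetic stand-in for a labeled training set (e.g. fruit vs. vegetable images)."""
    x = rng.normal(size=(n, dim))
    noisy_logits = x @ true_w + rng.normal(scale=2.0, size=n)
    y = (noisy_logits > 0).astype(float)
    return x, y

def train_logistic(x, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent (a stand-in for a deep model)."""
    w, b = np.zeros(dim), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(x @ w + b)))
        grad_w, grad_b = x.T @ (p - y) / len(y), np.mean(p - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

x_test, y_test = make_data(5000)
for n in [20, 200, 2000, 20000]:
    x, y = make_data(n)
    w, b = train_logistic(x, y)
    acc = np.mean(((x_test @ w + b) > 0) == y_test)
    print(f"trained on {n:>5} examples -> test accuracy {acc:.3f}")
```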

These types of achievements aren’t just research milestones. Thanks to advances in deep learning, fueled in part by big jumps in computing horsepower, we now have consumer products like Skype Translator, which recognizes speech and provides real-time voice translation, and the Cortana digital assistant, which can understand your voice and help you do everything from search for plane tickets to remember appointments.

“This is an example of democratizing AI using Microsoft Cognitive Toolkit,” said Xuedong Huang, Microsoft distinguished engineer.

More flexibility for more sophisticated work
When they first developed the toolkit, Basoglu said they figured many developers couldn’t, or wouldn’t, want to write a lot of code. So, they created a custom system that made it easy for developers to configure their systems for deep learning without any extra coding.

As the system grew more popular, however, they heard from developers who wanted to combine their own Python or C++ code with the toolkit’s deep learning capabilities.

They also heard from researchers who wanted to use the toolkit to enable reinforcement learning research. That’s a research area in which an agent learns the right way to do something – like find their way around a room or form a sentence – through lots of trial and error. That’s the kind of research that could eventually lead to true artificial intelligence, in which systems can make complex decisions on their own. The new version gives developers that ability as well.

Using Microsoft Cognitive Toolkit to avoid wasting food and live a healthier life
Although Microsoft Cognitive Toolkit was originally developed by speech researchers, it can now be used for a much wider variety of purposes.

Liebherr, a specialist in cooling appliances, is using it to simplify daily life.

The company has installed cameras in its refrigerators that do more than just display images — they will actually recognize individual food items in the refrigerator and automatically incorporate this information into an inventory shopping list.

In the future, this technology will help in shopping and meal planning. The stored groceries can be recorded and monitored by using cameras with object recognition.

“People know at any time, and from anywhere, what is still in the fridge and what should be on the shopping list,” said Andreas Giesa, the ebusiness manager for Liebherr.

This will help customers avoid having food spoil and make daily life more comfortable.

The Bing relevance team uses the toolkit as part of its effort to find better ways to discover latent, or hidden, connections in search terms in order to give users better results.

For example, with deep learning a system can be trained to automatically figure out that when a user types in, “How do you make an apple pie?” they are looking for a recipe, even though the word “recipe” doesn’t appear in the search query. Without such a system, that type of rule would have to be engineered manually.

Clemens Marschner, a principal software development engineer who works on Bing relevance, said the team worked very closely with the toolkit’s creators to make it work well for developers doing other types of deep learning beyond speech. For them, the payoff was a system that lets them use massive computing power to quickly get results.

“No other solution allows us to scale learning to large data sets in GPU clusters as easily,” he said.

Microsoft also is continuing to use the Microsoft Cognitive Toolkit to improve speech recognition. Yifan Gong, a principal applied science manager in speech services, said they have been using the toolkit to develop more accurate acoustic models for speech recognition in Microsoft products including Windows and Skype Translator.

Gong said his team relied on the toolkit to develop new deep learning architectures, including using a technique called long short-term memory, to deliver customers more accurate results.

Those improvements will make it easier for Microsoft systems to better understand what users are trying to say even when they are giving voice commands or interacting with Cortana in noisy environments such as at a party, driving on the highway or in an open floor plan office.

For the user, the benefits of this type of improvement are obvious.

“If you have higher recognition accuracy, you don’t have to repeat yourself as often,” Gong said.

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

Microsoft computing method makes key aspect of genomic sequencing seven times faster

Microsoft has come up with a way to significantly reduce the time it takes to do the major computational aspects of sequencing a genome.

Microsoft’s method of running the Burrows-Wheeler Aligner (BWA) and the Broad Institute’s Genome Analysis Toolkit (GATK) on its Azure cloud computing system is seven times faster than the previous version, allowing researchers and medical professionals to get results in just four hours instead of 28.  BWA and GATK are two of the most common computational tools used in combination for genome sequencing.
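As a rough picture of how the two tools fit together, the sketch below drives a bare-bones alignment-and-variant-calling pipeline from Python: BWA-MEM aligns the reads, samtools sorts and indexes them, and GATK’s HaplotypeCaller calls variants. It is heavily simplified: real pipelines add read groups, duplicate marking and base-quality recalibration; exact GATK flags vary by version; and the file paths here are placeholders, not anything from Microsoft’s Azure offering.

```python
import subprocess

# Placeholder inputs; a real run needs an indexed reference and properly prepared reads.
REF = "reference.fa"
READS_1, READS_2 = "sample_R1.fastq", "sample_R2.fastq"

def run(cmd, **kwargs):
    """Run one pipeline step, stopping on failure."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

# 1) Align paired-end reads to the reference with BWA-MEM, writing a SAM file.
with open("aligned.sam", "w") as sam:
    run(["bwa", "mem", "-t", "8", REF, READS_1, READS_2], stdout=sam)

# 2) Coordinate-sort and index the alignments with samtools.
run(["samtools", "sort", "-o", "aligned.sorted.bam", "aligned.sam"])
run(["samtools", "index", "aligned.sorted.bam"])

# 3) Call variants with GATK's HaplotypeCaller (GATK 3-style invocation).
run(["java", "-jar", "GenomeAnalysisTK.jar", "-T", "HaplotypeCaller",
     "-R", REF, "-I", "aligned.sorted.bam", "-o", "variants.vcf"])
```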

The time savings is critical for a number of reasons. For example, it could allow doctors to diagnose rare and dangerous genetic conditions 24 hours earlier, getting the patient lifesaving treatment faster.

“There’s a lot of actionable information in which speed is really important,” said Ravi Pandya, a principal software architect in Microsoft’s genomics group who has been key to this acceleration work.

Ravi Pandya

Over time, experts say the ability to sequence genomic data of plants and animals also could hasten important breakthroughs in other research fields, such as renewable energy and efficient food production.

A ‘genomics revolution’
The quicker Azure-based offering comes as the ability to analyze genomic data is becoming much more affordable, making it available to more people who need it and fueling a genomics revolution.

David Heckerman, who directs Microsoft’s genomics group, said the requests from hospitals, clinics and research institutions to process genomics data is growing at an extremely high rate.

“It’s getting to the point where tens of thousands of genomes are being sequenced, so efficiency really matters,” Pandya said.

That wasn’t always the focus.

Geraldine Van der Auwera, who works for the Broad Institute on the GATK platform and directs its 36,000-user online support forum, said that for a long time, genomic analysis was mainly used for research purposes instead of medical care. That meant there wasn’t as much of an urgency to shave hours or minutes off the time it took to do the computations.

In addition, she said, researchers were primarily focused on making sure their methods were right.

“For a long time we focused on accuracy at the expense of speed,” she said.

As the tools have matured and researchers have become more confident in the accuracy, that’s changed, she said.

Geraldine Van der Auwera

“As this type of information is used more often in the clinical setting, the emphasis on speed becomes much stronger,” Van der Auwera said.

That’s where computer scientists can help.

Many of the tools used for genomic analysis were written by biologists who developed an interest in computer science because computation was becoming so valuable to their work.

Meanwhile, Pandya said, computer scientists such as himself started developing an interest in biological sciences because they saw so many possibilities. Now, those computer scientists are augmenting the work of biologists.

With BWA and GATK, the Microsoft team scoured the code for places where they could make the algorithms run more smoothly, efficiently and reliably, without compromising the attention to accuracy.

“We took Microsoft’s keen expertise in software development and applied it to the algorithms, making them faster,” Heckerman said.

Microsoft holds a nonexclusive license from the Broad Institute to provide GATK on Azure. It plans to work with the Broad Institute to incorporate these performance improvements into future versions of GATK. Broad Institute would then make these improvements available to researchers.

Heng Li, a research scientist at Broad who initially developed the BWA tool and worked with Microsoft to make it faster, said the collaborative nature of the work made for better results.

“They have knowledge that I don’t possess, but on the other hand I know what’s important and what’s not on the biological analysis side,” Li said.

The cloud for storage and computation
As genomic analysis becomes more critical for health and other applications, Broad has started working with Microsoft and other technology companies to move tools like GATK and BWA to cloud computing platforms.

Cloud computing is ideal for this type of computational work, because it takes a lot of computing power, requires a lot of data storage and requests can come in fits and bursts. For most hospitals, research labs and other biological sciences facilities, it would be too expensive to invest in the necessary computing capability, and impractical to take on the job of hosting all that data on their own, if only because the sheer volume of data is growing exponentially.

As these tools become more useful, most researchers and clinicians also want to focus on getting the results they need, rather than worrying about the technical side of things.

“When you get to this next level you just want answers,” Pandya said. “You want it to be really simple.”

Eventually, the Microsoft team hopes to use another company strength – developing an ecosystem around a technology – to help hospitals and other institutions implement these systems. Microsoft’s genomics team is talking to independent software vendors about ways to make that happen.

The tool is part of Microsoft’s broader health-related efforts. On Monday, as part of an update to its Cancer Moonshot initiative, the White House announced that Microsoft had joined an effort to maintain cancer genomic data in the cloud. The effort is a partnership between the public and private sector.

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition

Microsoft has made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation as well as a person does.

In a paper published Monday, a team of researchers and engineers in Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists. The researchers measured a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month.

The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task.
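Word error rate itself is simple to compute: it is the word-level edit distance (substitutions, insertions and deletions) between the recognizer’s transcript and a human reference, divided by the number of reference words. The short implementation below uses a made-up sentence pair for illustration.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length, via edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between the first i reference words and first j hypothesis words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dist[i - 1][j] + 1
            insertion = dist[i][j - 1] + 1
            dist[i][j] = min(substitution, deletion, insertion)
    return dist[len(ref)][len(hyp)] / len(ref)

reference = "we have reached human parity on the switchboard task"
hypothesis = "we have reached human parity on a switchboard task"
print(f"WER: {word_error_rate(reference, hypothesis):.1%}")   # one substitution -> 11.1%
```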

“We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”

The milestone means that, for the first time, a computer can recognize the words in a conversation as well as a person would. In doing so, the team has beaten a goal they set less than a year ago — and greatly exceeded everyone else’s expectations as well.

“Even five years ago, I wouldn’t have thought we could have achieved this. I just wouldn’t have thought it would be possible,” said Harry Shum, the executive vice president who heads the Microsoft Artificial Intelligence and Research group.

The research milestone comes after decades of research in speech recognition, beginning in the early 1970s with DARPA, the U.S. agency tasked with making technology breakthroughs in the interest of national security. Over the decades, most major technology companies and many research organizations joined in the pursuit.

“This accomplishment is the culmination of over twenty years of effort,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

The milestone will have broad implications for consumer and business products that can be significantly augmented by speech recognition. That includes consumer entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription and personal digital assistants such as Cortana.

“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

Parity, not perfection
The research milestone doesn’t mean the computer recognized every word perfectly. In fact, humans don’t do that, either. Instead, it means that the error rate – or the rate at which the computer misheard a word like “have” for “is” or “a” for “the” – is the same as you’d expect from a person hearing the same conversation.

Zweig attributed the accomplishment to the systematic use of the latest neural network technology in all aspects of the system.

The push that got the researchers over the top was the use of neural language models in which words are represented as continuous vectors in space, and words like “fast” and “quick” are close together.

“This lets the models generalize very well from word to word,” Zweig said.
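To illustrate what “close together” means for word vectors, here is a small sketch. The three-dimensional vectors below are made up purely for demonstration; real neural language models learn much larger vectors from data.

```python
import math

# Hypothetical 3-dimensional word vectors; real embeddings are learned from
# text and typically have hundreds of dimensions.
vectors = {
    "fast":  [0.90, 0.80, 0.10],
    "quick": [0.88, 0.79, 0.15],
    "slow":  [-0.70, -0.60, 0.20],
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["fast"], vectors["quick"]))  # close to 1: similar words
print(cosine(vectors["fast"], vectors["slow"]))   # negative: dissimilar words
```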

‘A dream come true’
Deep neural networks use large amounts of data – called training sets – to teach computer systems to recognize patterns from inputs such as images or sounds.

To reach the human parity milestone, the team used Microsoft Cognitive Toolkit, a homegrown system for deep learning that the research team has made available on GitHub via an open source license.

Huang said Microsoft Cognitive Toolkit’s ability to quickly process deep learning algorithms across multiple computers equipped with specialized chips called graphics processing units (GPUs) vastly improved the speed at which they were able to do their research and, ultimately, reach human parity.
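The speed-up comes from data parallelism: each GPU or machine computes gradients on its own slice of the training data, and the results are combined into a single model update. The sketch below illustrates that idea on a toy linear model with NumPy; it is not the Cognitive Toolkit’s actual API, and the worker loop stands in for computation that would really run in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 10))                 # toy features
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=1024)   # toy targets

w = np.zeros(10)                                # shared model parameters
num_workers, lr = 4, 0.1
shards = np.array_split(np.arange(len(X)), num_workers)

for step in range(200):
    grads = []
    for shard in shards:                        # in practice, these run in parallel
        Xs, ys = X[shard], y[shard]
        err = Xs @ w - ys
        grads.append(Xs.T @ err / len(shard))   # least-squares gradient on this shard
    w -= lr * np.mean(grads, axis=0)            # average the gradients, update once

print(np.allclose(w, true_w, atol=0.05))        # True: the workers converge on one model
```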

The gains were quick, but once the team realized they were on to something, it was hard to stop working on it. Huang said the milestone was reached around 3:30 a.m.; he found out about it when he woke up a few hours later and saw a victorious post on a private social network.

“It was a dream come true for me,” said Huang, who has been working on speech recognition for more than three decades.

The news came the same week that another group of Microsoft researchers, who are focused on computer vision, reached a milestone of their own. The team won first place in the COCO image segmentation challenge, which judges how well a technology can determine where certain objects are in an image.

Baining Guo, the assistant managing director of Microsoft Research Asia, said segmentation is particularly difficult because the technology must precisely delineate the boundary of where an object appears in a picture.

“That’s the hardest part of the picture to figure out,” he said.

The team’s results, which built on the award-winning very deep neural network system Microsoft’s computer vision experts designed last year, were 11 percent better than the second-place finisher’s and a significant improvement over Microsoft’s first-place win last year.

“We continue to be a leader in the field of image recognition,” Guo said.

From recognition to true understanding
Despite huge strides in recent years in both vision and speech recognition, the researchers caution there is still much work to be done.

Moving forward, Zweig said the researchers are working on ways to make sure that speech recognition works well in more real-life settings. That includes places where there is a lot of background noise, such as at a party or while driving on the highway. They’ll also focus on better ways to help the technology assign names to individual speakers when multiple people are talking, and on making sure that it works well with a wide variety of voices, regardless of age, accent or ability.

In the longer term, researchers will focus on ways to teach computers not just to transcribe the acoustic signals that come out of people’s mouths, but instead to understand the words they are saying. That would give the technology the ability to answer questions or take action based on what it is told.

“The next frontier is to move from recognition to understanding,” Zweig said.

Shum has noted that we are moving away from a world where people must understand computers to a world in which computers must understand us. Still, he cautioned, true artificial intelligence is still on the distant horizon.

“It will be much longer, much further down the road until computers can understand the real meaning of what’s being said or shown,” Shum said.

Related:

Paper: Achieving Human Parity in Conversational Speech Recognition

Microsoft researchers achieve speech recognition milestone

Speak, hear talk: The quest to create technology that understands speech as well as a human

Follow Harry Shum and Xuedong Huang on Twitter

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

The post Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition appeared first on Next at Microsoft.

The moonshot that succeeded: How Bing and Azure are using an AI supercomputer in the cloud
http://blogs.microsoft.com/next/2016/10/17/the_moonshot_that_succeeded/
Mon, 17 Oct 2016 13:01:53 +0000

When we type in a search query, access our email via the cloud or stream a viral video, chances are we don’t spend any time thinking about the technological plumbing that is behind that instant gratification.

Sitaram Lanka and Derek Chiou are two exceptions. They are engineers who spend their days thinking about ever-better and faster ways to get you all that information with the tap of a finger, as you’ve come to expect.

Now, they have a new superpower to help them out.

A team of Microsoft engineers and researchers has created a system that uses a reprogrammable computer chip, called a field programmable gate array, or FPGA, to accelerate Bing and Azure.


With the FPGA chips, Lanka and Chiou’s teams can write their algorithms directly onto the hardware they are using, instead of relying on potentially less efficient software as the middleman. What’s more, an FPGA can be reprogrammed at a moment’s notice to respond to new advances in artificial intelligence or meet another type of unexpected need in a datacenter.

Traditionally, engineers might wait two years or longer for hardware with different specifications to be designed and deployed.

“This was a moonshot project that succeeded,” said Lanka, who runs the ranking platform for Bing and has been a key collaborator on the project, called Catapult, since its inception about five years ago.

Sitaram Lanka (Photography by Scott Eklund/Red Box Pictures)

The end of Moore’s Law, and the beginning of Catapult
FPGAs aren’t new, but until recently no one had ever seriously tried to use them at large scale for cloud computing. That changed when Doug Burger, a distinguished engineer with Microsoft’s research division, and a team including James Larus and Andrew Putnam hit upon the idea of using the chips to solve a huge problem in the technology industry: The slow but eventual end of Moore’s Law.

Moore’s Law has long held that computing power would steadily become both faster and more affordable, allowing everyone from computer manufacturers to datacenter managers to comfortably assume that they could deliver better results at lower cost.

Burger wasn’t interested in figuring out incremental ways to counteract the slowing rates of improvement in silicon chips. He was looking for a radical change, and he found it in FPGAs.

“This is an industry shift,” he said.

Already, Catapult is being used to fuel gains in how quickly and accurately the Bing search engine can process search requests. In addition, it is being used to make Microsoft’s Azure the fastest cloud computing platform available. That allows the company to use fewer servers to deliver better results.

By the end of 2016, an artificial intelligence technique called deep neural networks will be deployed on Catapult to help Bing improve its search results. This AI supercomputer in the cloud will increase the speed and efficiency of Microsoft’s data centers – and anyone who uses Bing should notice the difference, too.

“The net effect is you get much more relevant results,” Lanka said.

New research gains
On Monday, the Catapult team released an academic paper providing more detail on how FPGAs are being deployed in Microsoft’s datacenters, including those supporting the Azure cloud, to accelerate processing and networking speeds.

To make data flow faster, they’ve inserted an FPGA between the network and the servers. That can be used to manage traffic going back and forth between the network and server, to communicate directly to other FPGAs or servers or to speed up computation on the local server.

Chiou, who led the Bing FPGA team and now heads up Microsoft Azure’s Cloud Silicon team, said FPGAs used to be relegated to the back room, performing tasks sent to them. Now, the FPGAs are the first to see every message going into the server, enabling them to both make decisions on how to handle each message and perform the work, often without the processor’s involvement.

“What we’ve done now is we’ve made the FPGA the front door,” Chiou said.

Derek Chiou (Photography by Scott Eklund/Red Box Pictures)

The Azure team sees this as the first of many ways they’ll use FPGAs, both to make the company’s cloud computing more efficient and to deliver better, more sophisticated services to customers.

“Microsoft is uniquely positioned to deliver innovation like this, in the cloud,” said Mark Russinovich, the chief technology officer for Azure.

Russinovich notes that’s partly because Azure engineers can build on the work that Bing and research engineers are doing. In fact, it was the Bing team’s success that gave him the confidence to jump on the Microsoft FPGA bandwagon.

“We started deploying them in every server knowing that when we were ready to use them, we wouldn’t have to wait,” he said.

That gamble paid off. Microsoft Azure is now leapfrogging its cloud competition with both speed and efficiency gains.

Real-world application
From the beginning, the team also wanted to build something that could immediately be used in the real world – more specifically, in Microsoft products – rather than in the more utopian setting of a research lab.

“We weren’t building a hammer and looking for a nail to use it with,” Lanka said.

Burger felt lucky to find a partner in Lanka, who he said both understood the long-term vision and was willing to bet on it.

That led to lots of trial and error as the team worked to make their ideas work in a real-world setting – and one that was constantly changing.

Doug Burger (Photography by Scott Eklund/Red Box Pictures)

For example, Lanka said, six years ago no one could have predicted how big a role deep learning would come to play in everything from sending texts to searching the web.

As the advances in artificial intelligence became more apparent, the team started to see how well suited FPGAs were for that kind of work. That’s because FPGAs are especially good at efficiently doing parallel computing, which is when many computations are carried out simultaneously.

“I can do much better computation in the same or less time,” Lanka said.

The ability to do deep learning more quickly – using that AI supercomputer in the cloud – has broad implications. It could vastly speed up advances in automatic translation, accelerate medical breakthroughs and create automated productivity tools that better anticipate our needs and solve our workday problems.

With FPGAs, Burger said another key advantage is that you can quickly adapt to whatever the next technological breakthrough is, without having to worry too much about whether you anticipated it or not. That’s because you can easily reprogram the FPGAs directly, instead of using less efficient software or waiting as long as a few years to get new hardware.

Even now, Chiou said he thinks people may be underestimating how much potential these systems could have.

“I think a lot of people don’t know what FPGAs are capable of,” Chiou said.

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

The post The moonshot that succeeded: How Bing and Azure are using an AI supercomputer in the cloud appeared first on Next at Microsoft.

How Microsoft is using prediction and polling tools to forecast the election
http://blogs.microsoft.com/next/2016/10/07/microsoft-using-prediction-polling-tools-forecast-election/
Fri, 07 Oct 2016 13:00:14 +0000

Who will win the U.S. presidential election?

For David Rothschild, an economist with Microsoft’s research organization in New York, providing continuously updated odds for the major party candidates is a prominent part of his daily job.

Rothschild runs his own website, PredictWise, which hosts his probability forecasts about the outcomes of public contests including political elections, reality shows and sporting events. The predictions stem from his academic research on digital betting markets as well as an evolving and growing trove of online polling, web search and social media data.

“These are different ways that individuals engage with an online system and either implicitly or explicitly answer a question and provide a data point,” Rothschild explains. For PredictWise, he employs a “method of aggregating that data to create forecasts.”


The method includes publishing the forecasts in real time to help ensure that he gathers the most relevant data available. In addition, he says, the website keeps him honest by showing he can do what he says he can do.

“And, of course, a lot of people enjoy it,” he adds. “It is a good way to spread what I am doing.”

Market and business intelligence
While Rothschild’s PredictWise is a personal website, his research helps inform several other efforts across Microsoft product groups. All the efforts analyze aggregated, anonymized data based on online searches, polls and other publicly available information to gauge public opinion and predict the outcome of future events. They are on full display this election season.

When Bing users search for information related to political campaigns, for instance, Bing Predicts, a prediction engine embedded in the search tool, drives several features associated with Bing’s election experience.

For example, the prediction engine uses anonymized, aggregated web search data and other information to predict the outcome of the presidential race. A search on “Ohio state prediction” reveals which candidate is predicted to win the battleground state and how the prediction has changed over time.

The same data is also used to make forecasts about everything from professional sporting events to reality television shows.

“How people act on the web correlates to how they vote,” explains Walter Sun, team lead for Bing Predicts.

The prediction platform leans heavily on signals coming from aggregations of web data, known as the wisdom of the crowd, to capture and incorporate real-time information in the predictions, such as the impact of a candidate dropping out of a race or an injury to a star player on an NFL team.

Walter Sun

Over on MSN, interactive polling widgets are being used to engage visitors with the website’s election coverage.

And Microsoft Pulse, an audience-response technology, is being used during the presidential debates by major news broadcasters to engage their audiences on second screens with real-time questions that probe sentiment and feelings.

“From the broadcasters’ perspective, it forces people to actually pay attention to what is being said versus being drawn out into a whole other conversation,” says Lee Brenner, who leads market development for Microsoft’s Technology and Civic Engagement team. Audiences, he adds, “feel like they can respond and have their voice heard.”

All of these initiatives come at a time when traditional polling methods are becoming less effective because people have abandoned, or don’t answer, landline telephones. Rothschild says Microsoft, with its built-in expertise and background online, can help fill that gap with market and business intelligence tools and services.

“This is a Wild West field right now,” he says. “You are talking about a massive industry that has reached a great disruption with the demise of its standard tool in the telephone and the rise of a new and innovative one in online, a place that we happen to be in.”

Intelligence from betting markets
Rothschild came to the field via a fascination with prediction markets, which are typically online exchanges where people buy and sell contracts on upcoming events. Contract values rise and fall in line with the market’s expectation of a certain outcome, such as whether a candidate will win an election.

For example, on PredictIt, a legal real-money prediction market that Rothschild closely monitors, traders buy and sell contracts on events such as the upcoming presidential election. Here’s how it works:

Trader “A” thinks Hillary Clinton has a 70 percent chance of winning, so offers to buy a “yes” contract for 70 cents. The website matches the offer with a seller who thinks Clinton’s odds of winning are lower. Trader “A” can sell the “yes” contract up until the election results are tallied. If Clinton wins, the “yes” contract will pay out $1. If Trump wins, the “yes” contract is worthless.
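As a rough worked example of the arithmetic behind that hypothetical 70-cent contract (ignoring fees and bid/ask spreads):

```python
# Toy arithmetic for a prediction-market "yes" contract, using the hypothetical
# 70-cent example above. Fees and bid/ask spreads are ignored.
price = 0.70      # cost of one "yes" share, in dollars
payout = 1.00     # what the share pays if the event happens

implied_probability = price / payout   # 0.70: the market's roughly 70 percent estimate
profit_if_yes = payout - price         # $0.30 gain per share if the event happens
loss_if_no = -price                    # $0.70 loss per share if it does not

# The purchase only has positive expected value if the buyer believes the true
# probability is higher than the price implies, say 75 percent.
my_probability = 0.75
expected_value = my_probability * profit_if_yes + (1 - my_probability) * loss_if_no
print(round(expected_value, 2))        # 0.05: about a nickel per share, in expectation
```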

“Ultimately, prediction markets are a more transparent futures market where people are buying and selling contracts on whether or not a company will be put at a certain valuation at some point,” Rothschild says.

Like other markets and exchanges, each prediction market has quirks. So, for PredictWise, Rothschild aggregates data streams from multiple markets in a way that provides a consistent probability forecast. His model also pulls in polling data and other information, weighting the data streams according to his research on how the individual markets and polls function and interact with each other.
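His actual model is considerably more sophisticated, but a minimal sketch of the weighted-aggregation idea might look like the following. The sources, probabilities and weights are entirely hypothetical; PredictWise calibrates its real weights from research on how each market and poll behaves.

```python
# Blend several probability estimates into a single forecast using fixed weights.
estimates = {
    "prediction_market_a": (0.72, 0.5),   # (probability estimate, weight)
    "prediction_market_b": (0.68, 0.3),
    "polling_average":     (0.64, 0.2),
}

weighted_sum = sum(p * w for p, w in estimates.values())
total_weight = sum(w for _, w in estimates.values())
forecast = weighted_sum / total_weight

print(round(forecast, 3))   # 0.692: one blended probability for the outcome
```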

Making the fruits of the academic research publicly available in real time, he notes, also provides a window onto how real-time data flows influence real-time decisions and feed off each other.

“This is the tip of the iceberg for market intelligence,” he notes, “and where market intelligence is going.”

Research flows across Microsoft
The teams that do prediction modeling work together closely. For example, Rothschild collaborates with the MSN team to develop the questions asked in its polling widget. The goal is to generate a high response rate and, in turn, translate the response data into projections of the voting population.

Microsoft Pulse, which is already being implemented by several television networks to retain audience attention during the presidential debates, also has been adopted by market researchers as a low-cost replacement for hardware commonly used to collect preference data from focus groups, notes Brenner.

Lee Brenner

The Bing Predicts team recently partnered with Cortana Intelligence to create an enterprise offering of predictive analytics uniquely bundled with web, search and social data anonymously collected and aggregated in the cloud.

“For example, we can look at how people are searching for Xbox around announcements of new release dates, new games, and correlate that to how well they will sell in a given quarter,” explains Sun. “We use that signal for sales forecasting.”
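A toy version of that kind of correlation can be sketched as a simple regression of past sales on a search-interest signal. Every number below is invented for illustration; the real Bing Predicts models draw on far richer anonymized, aggregated web, search and social data.

```python
import numpy as np

# Invented quarterly data: an index of search interest and units sold that quarter.
search_index = np.array([0.8, 1.1, 1.5, 2.3, 1.9, 2.8])
units_sold   = np.array([95, 120, 160, 240, 205, 290])   # in thousands

# Fit a least-squares line: sales ~ slope * search_index + intercept.
slope, intercept = np.polyfit(search_index, units_sold, 1)

# Forecast the next quarter from a newly observed search-index value.
next_quarter_search = 3.1
forecast = slope * next_quarter_search + intercept
print(round(forecast, 1))   # a rough sales estimate, in thousands of units
```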

Validation for the enterprise offering, Sun adds, comes from Bing Predicts’ record on forecasting everything from reality shows and NFL matchups to the primary races in the current election cycle.

Elections, the ultimate advertising campaign
“Ultimately,” notes Rothschild, “a political election is a large-scale advertising campaign for a product.”

The current U.S. presidential election, he adds, is providing researchers an unusual look at the impact of advertising and get-out-the-vote efforts. Leading up to the televised debates, the Clinton team had earmarked a much larger budget for both than the Trump team had.

“There is a real possibility that it is going to be very lopsided in spending on the ground,” Rothschild says. “It will be very interesting to see what happens.”

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

The post How Microsoft is using prediction and polling tools to forecast the election appeared first on Next at Microsoft.

Microsoft previews Project Springfield, a cloud-based bug detector
http://blogs.microsoft.com/next/2016/09/26/microsoft-previews-project-springfield-cloud-based-bug-detector/
Mon, 26 Sep 2016 15:00:10 +0000

Microsoft is making available to its customers one of the most sophisticated tools it has for rooting out potential security vulnerabilities in software including Windows, Office and other products.

The offering is code-named Project Springfield, and up until now, the team that built it has thought of it as the million-dollar bug detector.

That’s because every time the system finds a potentially serious bug proactively, before a piece of software is released, it saves a developer the costly effort of having to release a patch reactively, once the product is already public. With widely used software such as an operating system or productivity suite, deploying those patches can cost as much as $1 million, the researchers say.

Patrice Godefroid (Photography by Scott Eklund/Red Box Pictures)

“Those are the bugs that hackers will try to use,” said Patrice Godefroid, a principal researcher at Microsoft who invented a key technology behind Project Springfield and is the project’s chief scientist. “The more we can find those bugs ourselves, the more we can fix them before we ship the software.”

Microsoft announced a preview of Project Springfield on Monday at its Ignite technology conference in Atlanta. It has previously been testing the new cloud security service with a small number of customers and collaborators using software on a smaller scale than Windows and Office.

The company itself has been using a key component of Project Springfield, called SAGE, since the mid-2000s, testing products including Windows 7 prior to release.

Although the Windows 7 operating system code had already been checked by other, similar security tools, Godefroid said SAGE unearthed a number of additional vulnerabilities, eventually accounting for one-third of all the bugs found prior to the release by this kind of security testing, which is called fuzz testing.

The team overseeing the fuzz testing was impressed.

“There aren’t a lot of tools that can do what SAGE does,” said Mark Wodrich, a senior security engineer with Windows Defender Advanced Threat Protection.

One tool in the security toolbox

Fuzz testing is far from the only security measure developers use, but security experts say it’s an important one in the security development lifecycle.

David Molnar, the Microsoft researcher who leads Project Springfield, said fuzz testing is ideal for software that regularly incorporates inputs such as documents, images, videos or other pieces of information that may not be trustworthy. Fuzz testing looks for vulnerabilities that could open the door for bad actors to launch malicious attacks or simply crash the system, causing delays and other problems.

“These are the serious bugs that it’s worth investing to prevent,” Molnar said.

Broadly speaking, fuzz testing works like this: The system throws random, unexpected inputs at a piece of software to look for instances in which those unforeseen actions cause the software to crash, signaling a security vulnerability.
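To make the basic technique concrete, here is a minimal sketch of a “black box” fuzzing loop in Python: random bytes are thrown at a toy parser with a planted defect until it crashes. It only illustrates the general idea described above; Project Springfield’s white-box approach, described next, chooses its inputs far more intelligently.

```python
import random

def toy_parser(data: bytes):
    """A stand-in for the software under test, with a planted defect:
    inputs longer than three bytes that start with 0x42 raise an error."""
    if len(data) > 3 and data[0] == 0x42:
        raise RuntimeError("parser crashed on malformed input")
    return len(data)

random.seed(1)
for attempt in range(1, 1_000_001):
    # Throw random, unexpected inputs at the target and watch for crashes.
    fuzz_input = bytes(random.getrandbits(8) for _ in range(random.randint(0, 8)))
    try:
        toy_parser(fuzz_input)
    except Exception as crash:
        print(f"attempt {attempt}: {crash} with input {fuzz_input!r}")
        break
```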

Project Springfield builds on that idea with what it calls “white box fuzz testing.” It uses artificial intelligence to ask a series of “what if” questions and make more sophisticated decisions about what might trigger a crash and signal a security concern. Each time it runs, it gathers data to home in on the areas that are most critical. This more focused, intelligent approach makes it more likely that Project Springfield will find vulnerabilities other fuzzing tools might miss.

David Molnar (Photography by Scott Eklund/Red Box Pictures)

From software research to security product

SAGE grew out of years of Microsoft’s basic research into formal methods, which are systems for reasoning about code to look for imperfections.

As SAGE developed, the researchers were regularly publishing research papers detailing the advantages of their approach. That, in turn, drew the interest of security experts and other researchers who wanted to use the tool as well.

“Customers had asked about it for years, but we’d never been able to offer it to them,” Molnar said.

In order to make the software security tool available to a broader group of people with fewer resources and less security expertise than the Windows and Office organizations, the researchers built Project Springfield. It bundles SAGE with other tools for fuzz testing and adds an easy-to-use dashboard and other interfaces that make it accessible for people without an extensive security background.

Then, it runs its tests using an Azure cloud-based system, so individual clients don’t need to have data centers of their own. Finally, the results are delivered securely to the customers, so they can fix the bugs and test the code again.

“It’s very simple to use – it’s ‘fire and forget,’” said Gavin Thomas, a principal security software engineering manager with the Microsoft Security Response Center. “You set it up and you walk away.”

Thomas first used Project Springfield when a Microsoft customer came to him for help in looking for security vulnerabilities. Thomas said Project Springfield proved as easy to use as any app, and it was so effective at finding bugs that Thomas is in the process of implementing it in his own labs. That will save his expert security engineers the time of manually creating similar tools, allowing them to focus on other issues.

The team behind Project Springfield includes, from left, Stas Tishkin, William Blum, Marc Greisen, Cheick Omar Keita, Dave Tamasi, David Molnar (seated), Theresa Pacheco, Marina Polishchuk, Patrice Godefroid and Ram Nagaraja. (Photography by Scott Eklund/Red Box Pictures)

Too many bugs, not enough security experts

It turns out that Microsoft customer’s challenge wasn’t unusual.

Project Springfield is being released at a time when many companies are facing a tough conundrum: Serious attacks on software are going up, but the supply of security engineers trained to fight those attacks is staying steady. That means plenty of companies can’t afford, or can’t find, the staff they need to do fuzz testing. They need an easier, more automated solution.

“Most companies may not have a security engineer and wouldn’t even know what a fuzzer is,” Thomas said.

It’s also coming at a time when many companies are revamping their systems to appeal to new digital tastes, adding mobile offerings, online sales or cloud-based services. Chad A. Holmes, a principal and cyber strategy, technology and growth leader for the professional services firm Ernst & Young LLP, said that means many companies need a system like Project Springfield, which has the cloud-based capacity to run a very high volume of security tests at the same time and root out the most critical concerns.

“That’s one of the largest challenges they run into, the scale of testing these applications,” Holmes said. “That’s where a tool like Springfield comes in.”

EY may offer Project Springfield as part of the security offerings it has for customers.

Making beer and finding bugs

For many companies, finding bugs is important not just because it can protect a company against hackers but also because it can save time and money.

Take the craft beer brewer Deschutes Brewery, for example. If there’s a glitch in the software it uses for analytics, it can literally mean that money – or, in this case, beer – has to go down the drain.

“The brewery doesn’t get a batch of beer back when something goes wrong,” said Bryan Owen, a cyber security manager with OSIsoft, which has been helping Deschutes build a system that can bring together data from multiple sources. “It’s just lost.”

OSIsoft used Project Springfield to proactively look for bugs and other vulnerabilities as part of an overhaul of Deschutes’ analytics systems, which included installing its PI System and PI Integrator for Microsoft Azure and deploying the Cortana Intelligence Suite.

Deschutes Brewery’s brewmaster, Brian Faivre, said the new analytics systems have helped them figure out ways to make better beer, without having to worry about the technical details.

“Our job is really focusing on quality and making beer,” Faivre said. “If, at the end of the day, this is helping us do a better job, that’s what we really value and we care about.”

Peter Lee (Photography by Scott Eklund/Red Box Pictures)

Beating the bad guys

Project Springfield also has been developed at a time in which Microsoft researchers are getting more aggressive about quickly translating their groundbreaking research into tools customers can use.

With Project Springfield, Peter Lee, the corporate vice president in charge of Microsoft Research’s New Experiences and Technologies organization, said the team was determined to make sure it was “literally rubbing elbows” with the clients who were participating in an early preview of the system, having regular, face-to-face meetings to make sure it would meet their security needs.

“I actually view it as a collaboration,” he said. “In my mind, we’re doing the research together.”

Lee said that type of collaboration between researchers and developers is especially important in the security field, because it’s so tough for the good guys in computer security to stay ahead of the bad guys. That’s because the bad guys have the tools, expertise and financial incentive to exploit vulnerabilities faster than the good guys can find them.

He sees cloud-based tools like Project Springfield as a key part of the good guys’ arsenal.

“This is one of the areas where, finally, the good guys have an advantage,” he said.

Related:

VIDEO: How OSIsoft and Deschutes Brewery used Project Springfield

Learn more about Project Springfield

Read about OSIsoft’s PI Integrator for Microsoft Azure

Find out more about Ignite

Follow Peter Lee on Twitter

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.

The post Microsoft previews Project Springfield, a cloud-based bug detector appeared first on Next at Microsoft.
