AI news from Microsoft’s Build developers conference

At Microsoft’s Build developers conference in Seattle this week, the company is unveiling a series of new and updated tools that will help developers incorporate artificial intelligence into their processes and applications, regardless of their background and training in the fast-emerging field of AI.

A suite of new and enhanced pre-trained models from Microsoft Cognitive Services, for example, allows developers to easily add AI capabilities for vision, speech, language, knowledge and search to their applications. Many of these pre-trained models are now customizable to meet the specific needs of companies and their customers.

Microsoft also is announcing a preview of Project Brainwave, a hardware architecture designed to accelerate real-time AI calculations. Project Brainwave is deployed on a type of chip from Intel called a field programmable gate array, or FPGA, and is integrated with Azure Machine Learning.

A limited preview will allow customers to bring Project Brainwave to the edge, meaning customers could take advantage of that computing speed in their own businesses and facilities, even if their systems lack a network or Internet connection.

The FPGA computer chips at the core of Project Brainwave can be quickly reprogrammed to respond to new advances in AI, making the chips more flexible than other types of chips used in AI applications. That’s important in a rapidly evolving field with an unyielding demand for computing power to run ever more sophisticated AI algorithms.

In addition to the integration with Project Brainwave, Microsoft is announcing preview availability of the Azure Machine Learning SDK for Python. Azure Machine Learning offers a cloud service for data scientists to create their own AI models, and the new SDK lets them execute key Azure Machine Learning workflows entirely in Python.
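Here is a minimal sketch of what such a workflow could look like with the SDK, assuming the azureml-sdk package is installed and a config.json file describing an existing workspace sits in the working directory; the experiment name and logged metric are placeholders, not part of the announcement.

```python
# Minimal sketch of running an experiment with the Azure Machine Learning SDK for Python.
# Assumes the azureml-sdk package is installed and a config.json describing an existing
# workspace is present in the working directory; names and metric values are placeholders.
from azureml.core import Workspace, Experiment

# Load an existing Azure Machine Learning workspace from config.json.
ws = Workspace.from_config()

# Create (or reuse) an experiment to group related runs.
experiment = Experiment(workspace=ws, name="demo-experiment")

# Start an interactive run, log a metric, and complete the run.
run = experiment.start_logging()
run.log("accuracy", 0.91)  # placeholder metric value
run.complete()
```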

Microsoft also is announcing new Azure Machine Learning Packages, sets of algorithms that enable data scientists to easily build, train, fine-tune and deploy highly accurate and efficient models for computer vision, text analytics and financial forecasting.

Among the new Cognitive Services being announced at Build is a unified Speech service that bundles improved models for speech recognition, speech translation and text-to-speech. The improvements include the ability to customize models for specific speaking styles and industry vocabularies, and to create a unique brand voice, for example for an interactive bot on a customer's e-commerce website.
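As an illustration, a single-utterance recognition call with the Speech SDK for Python might look like the sketch below; the azure-cognitiveservices-speech package is assumed to be installed, and the subscription key and region are placeholders for a real Speech resource.

```python
# Minimal sketch of one-shot speech recognition with the Cognitive Services Speech SDK.
# Assumes the azure-cognitiveservices-speech package is installed; the key and region
# values are placeholders for a real Speech service resource.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="westus")

# Recognize a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("Speech not recognized:", result.reason)
```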

Advances to the Custom Vision service being announced at Build provide new capabilities for identifying objects, extracting information from images and performing other visual tasks. For example, a new object detection feature in preview allows users to train models to identify objects within images, picking out the trained object and showing its location within the image.
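A sketch of calling a published Custom Vision object detection model over REST is shown below; the endpoint URL pattern, project ID, iteration name, prediction key and image file are placeholders and may vary by service region and API version.

```python
# Sketch of calling a published Custom Vision object detection model over REST.
# The endpoint URL pattern, project ID, iteration name and key are placeholders
# and may vary by service region and API version.
import requests

ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
PROJECT_ID = "YOUR_PROJECT_ID"
ITERATION = "YOUR_PUBLISHED_ITERATION"
PREDICTION_KEY = "YOUR_PREDICTION_KEY"

url = f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}/detect/iterations/{ITERATION}/image"
headers = {
    "Prediction-Key": PREDICTION_KEY,
    "Content-Type": "application/octet-stream",
}

with open("factory_floor.jpg", "rb") as image:
    response = requests.post(url, headers=headers, data=image)
response.raise_for_status()

# Each prediction includes a tag, a probability and a bounding box locating the object.
for prediction in response.json().get("predictions", []):
    box = prediction["boundingBox"]
    print(prediction["tagName"], round(prediction["probability"], 2), box)
```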

Microsoft employee Anne Taylor uses a braille device with her PC. At Build, Microsoft announced a new $25 million, five-year AI for Accessibility program. Photo courtesy of Microsoft.

The advances in computer vision technologies also are reflected in updates to Cognitive Services that harness technology from Microsoft’s search engine Bing. For example, Microsoft is announcing the general availability of Bing Visual Search, which allows users to identify entities and text within images. That means they can do things like derive insights and find similar images, products and objects for categories including fashion, landmarks, flowers, celebrities and more. Bing Visual Search can extract information from business cards and can be customized for specific domains.
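As a sketch of how an application might call the service, the example below posts a local image to the Bing Visual Search endpoint and lists the kinds of results returned; the endpoint, subscription key and image file are placeholders.

```python
# Sketch of posting an image to the Bing Visual Search API and listing returned actions
# (for example, visually similar images or similar products). The endpoint and key
# are placeholders for a real Bing Search resource.
import requests

ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch"
SUBSCRIPTION_KEY = "YOUR_BING_SEARCH_KEY"

headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

with open("handbag.jpg", "rb") as image:
    files = {"image": ("handbag.jpg", image)}
    response = requests.post(ENDPOINT, headers=headers, files=files)
response.raise_for_status()

# The response groups results into "tags", each containing a list of actions
# such as VisualSearch (similar images) or ProductVisualSearch (similar products).
for tag in response.json().get("tags", []):
    for action in tag.get("actions", []):
        print(action.get("actionType"))
```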

With Cognitive Services support for edge deployment, developers can build applications that leverage powerful AI algorithms to interpret, listen, speak and see on devices at the edge. This capability is initially available for the Custom Vision service, enabling devices such as drones and industrial equipment to take quick, critical action without reliable connectivity to the cloud or a network.
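For illustration only, the sketch below runs a locally exported vision model on a device with no cloud connection, assuming the model has been exported in ONNX format and the onnxruntime, NumPy and Pillow packages are available; the file names, input size and preprocessing are assumptions rather than the service's documented contract.

```python
# Sketch of running an exported Custom Vision model locally on an edge device,
# assuming the model has been exported in ONNX format and the onnxruntime,
# numpy and Pillow packages are installed; the file names, input size and
# preprocessing are illustrative assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("exported_model.onnx")
input_name = session.get_inputs()[0].name

# Load and resize a locally captured frame; many exported vision models expect
# a fixed-size NCHW float tensor, but the exact shape depends on the export.
image = Image.open("frame.jpg").convert("RGB").resize((224, 224))
tensor = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, :]

outputs = session.run(None, {input_name: tensor})
print(outputs[0])  # raw model output; interpretation depends on the exported model
```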

In addition, Microsoft is expanding the collection of Cognitive Services Labs, which provide developers an early look at emerging Cognitive Services technologies.

For example, Project Conversation Learner enables developers to build conversational interfaces that learn directly from example interactions. Project Personality Chat makes intelligent agents more complete and conversational by handling small talk in a consistent tone and reducing fallback responses such as “I don’t understand.” Project Personality Chat also allows developers to give their agents a personality, from professional to humorous, that aligns with a brand voice.

Project Conversation Learner and Project Personality Chat are part of the next generation of Conversational AI tools being announced at Build to help developers build, connect, deploy and manage intelligent bots that interact naturally with users.

Microsoft also is announcing the release of Bot Builder SDK v4, which provides building blocks for creating ever more sophisticated bots from a simple starting point. Additional Conversational AI announcements at Build include major updates to Microsoft Cognitive Services Language Understanding and to QnA Maker, which extracts questions and answers from documents.
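A minimal sketch of a bot built on the v4 SDK for Python appears below; it assumes the botbuilder-core package is installed, the class names reflect a recent release of the SDK, and the echo-and-greet behavior is purely illustrative. In practice the handler would be hosted behind a bot adapter and a web endpoint.

```python
# Minimal sketch of a bot activity handler with the Bot Framework SDK v4 for Python.
# Assumes the botbuilder-core package is installed; class names reflect a recent
# release of the SDK and the echo behavior is purely illustrative.
from botbuilder.core import ActivityHandler, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the user's message back, the simplest possible bot behavior.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet each new member joining the conversation, skipping the bot itself.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello and welcome!")
```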

In addition to all these new AI tools for developers, Microsoft also announced AI Lab, a new collection of AI projects designed to enable developers to get started with AI by exploring, experiencing, learning and coding the latest Microsoft AI technology innovations.

All the Build announcements illustrate how new technologies are rapidly changing the way people live, learn and work – progress that comes with an opportunity and responsibility to make sure technology is used well. That’s why Microsoft is launching AI for Accessibility, a new $25 million, five-year program aimed at harnessing the power of AI to amplify human capability for the more than 1 billion people around the world with disabilities.

“Disabilities can be permanent, temporary or situational. By innovating for people with disabilities, we are innovating for us all,” Microsoft president Brad Smith explained in a Microsoft on the Issues blog post about the program. “By ensuring that technology fulfills its promise to address the broadest societal needs, we can empower everyone – not just individuals with disabilities – to achieve more.”
