As part of the companywide AI at Scale initiative, Microsoft announced at its Ignite conference that it plans to begin working with select customers to further develop its Turing natural language representation (NLR) models.
AI at Scale, which was announced at the Microsoft Build conference earlier this year, leverages the cloud computing power of Azure to train AI models with billions of parameters. One advantage of such large-scale models is that they only need to be trained once with massive amounts of data using AI supercomputing, and can then be fine-tuned with smaller datasets for specific tasks.
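For developers, that pattern looks roughly like the sketch below: start from an already pretrained model and adapt it to a task with a comparatively small labeled dataset. Because the Turing NLR checkpoints are not publicly downloadable, the sketch substitutes a generic pretrained transformer; the model name, dataset and hyperparameters are illustrative assumptions rather than Microsoft's actual setup.

```python
# A minimal fine-tuning sketch of the "pretrain once, fine-tune per task" pattern.
# The Turing NLR checkpoints are not publicly downloadable, so this uses a generic
# pretrained transformer from the Hugging Face hub as a stand-in; the model name,
# dataset and hyperparameters are illustrative assumptions, not Microsoft's setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A small task-specific dataset is enough once the base model has been pretrained.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    ),
    train_dataset=dataset,
)
trainer.train()
```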
“In the future, AI at Scale will change the way AI is developed, enabling companies to customize state-of-the-art models for their own scenarios without the computing capability, data or skills required now,” said David Carmona, general manager of artificial intelligence at Microsoft.
Unlike AI models that rely on meticulously hand-labeled data, Microsoft’s Turing models are trained on billions of pages of publicly available text, from which they absorb the nuances of language. The models leverage the ONNX Runtime platform to optimize and accelerate both training and inference, and they can be used for multiple language tasks, such as paraphrasing a lengthy speech, finding relevant passages across thousands of legal files or suggesting responses to email messages.
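As a rough illustration of what inference through ONNX Runtime looks like, the sketch below loads an exported model and runs a single batch. Since the Turing checkpoints are not publicly available, the file name and tensor names here are placeholders for whatever transformer has been exported.

```python
# A minimal inference sketch with ONNX Runtime. "model.onnx" and the tensor names
# "input_ids" / "attention_mask" are placeholder assumptions for an exported
# transformer, not the actual Turing model artifacts.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Token ids would normally come from the model's tokenizer; these are dummies.
input_ids = np.array([[101, 7592, 2088, 102]], dtype=np.int64)
attention_mask = np.ones_like(input_ids)

outputs = session.run(
    None,  # None returns every output the exported graph defines
    {"input_ids": input_ids, "attention_mask": attention_mask},
)
print(outputs[0].shape)
```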
Since their development, the Turing NLR models have been trained and implemented across Microsoft to improve productivity offerings, including the latest features in Bing, Inside Look in OneDrive and SharePoint, and Suggested Replies in Outlook.
By collaborating with select customers, partners and research organizations, Microsoft will broaden its efforts to learn how the Turing NLR models are used and to explore new possibilities across various scenarios before making these and other models widely available. One of Microsoft’s specific areas of focus is the responsible development and use of large-scale language models. As part of this limited release, the company will also work to advance several open research questions in the field. Companies interested in learning more about Microsoft’s Turing NLR models can submit a request here.
AI at Scale is a broad initiative to build the next generation of AI and includes new approaches to computing and software systems. As part of AI at Scale, last month Microsoft launched the private preview of its AI supercomputer capabilities with the Azure NDv4 VM series, and this month it open sourced the latest version of DeepSpeed, a deep learning training optimization library that can now train models with more than 1 trillion parameters.
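DeepSpeed itself is open source, so its basic shape can be sketched. In the example below, the toy model and configuration values are illustrative assumptions, not the settings used at trillion-parameter scale.

```python
# A minimal DeepSpeed sketch showing how a PyTorch model is wrapped for optimized
# training. The toy model and configuration values are illustrative assumptions,
# not the settings used to reach trillion-parameter scale. Scripts like this are
# normally launched with the `deepspeed` command-line launcher.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a much larger transformer

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# The returned engine handles distributed data parallelism, mixed precision
# and memory partitioning behind an otherwise ordinary training loop.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```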