Today, Microsoft is sharing an update on its AI safety policies and practices ahead of the UK AI Safety Summit. The summit is part of an important and dynamic global conversation about how we can all help secure the beneficial uses of AI and anticipate and guard against its risks. From the G7 Hiroshima AI Process to the White House Voluntary Commitments and beyond, governments are working quickly to define governance approaches to foster AI safety, security, and trust. We welcome the opportunity to share our progress and contribute to a public-private dialogue on effective policies and practices to govern advanced AI technologies and their deployment.
Since we adopted the White House Voluntary Commitments and independently committed to several other policies and practices in July, we have been hard at work to operationalize our commitments. The steps we have taken have strengthened our own practice of responsible AI and contributed to the further development of the ecosystem for AI governance.
The UK AI Safety Summit builds on this work by asking frontier AI organizations to share their AI safety policies – a step that helps promote transparency and a shared understanding of good practice. In our detailed update, we have organized our policies by the nine areas of practice and investment that the UK government is focused on. Key aspects of our progress include:
- We strengthened our AI Red Team by adding new team members and developing further internal practice guidance. Our AI Red Team is an expert group, independent of our product-building teams, that red teams high-risk AI systems, advancing our White House Commitment on red teaming and evaluation. Recently, this team built on OpenAI’s red teaming of DALL-E 3, a new frontier model announced by OpenAI in September, and worked with cross-company subject matter experts to red team Bing Image Creator.
- We evolved our Security Development Lifecycle (SDL) to link to our Responsible AI Standard and integrate content from it, aligning SDL processes with, and reinforcing checks against, the governance steps our Responsible AI Standard requires. We also enhanced our internal practice guidance for our SDL threat modeling requirement, accounting for our ongoing learning about threats unique to AI and machine learning. These steps advance our White House Commitments on security.
- We implemented provenance technologies in Bing Image Creator so that the service now automatically discloses that its images are AI-generated. This approach leverages the C2PA specification that we co-developed with Adobe, Arm, BBC, Intel, and Truepic, advancing our White House Commitment to adopt provenance tools that help people identify audio or visual content that is AI-generated.
- We made new grants under our Accelerate Foundation Models Research program, which facilitates interdisciplinary research on AI safety and alignment, beneficial applications of AI, and AI-driven scientific discovery in the natural and life sciences. Our September grants supported 125 new projects from 75 institutions across 13 countries. We also contributed to the AI Safety Fund supported by all Frontier Model Forum members. These steps advance our White House Commitments to prioritize research on societal risks posed by AI systems.
- In partnership with Anthropic, Google, and OpenAI, we launched the Frontier Model Forum. We also contributed to various best practice efforts, including the Forum’s work on red teaming frontier models and the Partnership on AI’s in-development guidance on safe foundation model deployment. We look forward to contributing to the AI Safety working group launched by MLCommons in collaboration with the Stanford Center for Research on Foundation Models. These initiatives advance our White House Commitments on information sharing and developing evaluation standards for emerging safety and security issues.
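The provenance disclosure described above works because C2PA metadata travels inside the image file itself (for JPEGs, in JUMBF boxes carried in APP11 segments). As a minimal, illustrative sketch only, the snippet below checks whether a file appears to carry an embedded C2PA manifest store, assuming the manifest store uses the `c2pa` label defined by the specification; it detects presence only and does not parse the manifest or verify its signatures, which requires a full C2PA validator:

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic presence check for an embedded C2PA manifest store.

    C2PA manifests are embedded in JUMBF boxes whose manifest-store
    label is "c2pa", so a byte scan can suggest (not prove) that
    provenance data is present. This does NOT validate the manifest
    or its cryptographic signatures.
    """
    return b"c2pa" in data


# Illustrative usage with synthetic bytes standing in for image data:
signed_like = b"\xff\xd8\xff\xeb" + b"jumbc2pa" + b"\xff\xd9"
plain_like = b"\xff\xd8\xff\xd9"
print(has_c2pa_marker(signed_like))  # True
print(has_c2pa_marker(plain_like))   # False
```

A real consumer would hand the file to a C2PA-conformant verifier to read the manifest and confirm that the image was generated by an AI service.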
Each of these steps is critical in turning our commitments into practice. Ongoing public-private dialogue helps us develop a shared understanding of effective practices and evaluation techniques for AI systems, and we welcome the focus on this approach at the AI Safety Summit.
We look forward to the UK’s next steps in convening the summit, advancing its efforts on AI safety testing, and supporting greater international collaboration on AI governance.