Microsoft and OpenAI partner to propose digital transformation of export controls

The worlds of technology, trade and national security policy are converging as never before. Governments and non-government actors are vigorously debating how to ensure that powerful technologies are used by trustworthy actors and to good ends – and are increasingly looking to export controls as a way to achieve this. Targeted export controls on end uses and users of concern are needed to protect national security interests without provoking serious unintended consequences. Such controls, however, can be difficult to administer and enforce. The time is right for a digital transformation of export controls – a new approach that leverages novel digital solutions within sensitive and important technology itself to better protect it from uses that harm national security, while preserving its beneficial uses. Microsoft and OpenAI have joined together to work on these solutions. In a submission to the U.S. government yesterday, and here, we describe how a digitally transformed export controls system would work and the substantial benefits it would provide.

The challenge

Following a mandate in the Export Control Reform Act of 2018 (ECRA), the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) has undertaken efforts to identify and control the export of “emerging” or “foundational” technologies essential to U.S. national security. In comment periods ending in January 2019 and yesterday, BIS sought help from industry and others on how to identify and approach control of these emerging and foundational technologies.

Microsoft and OpenAI share Commerce’s goal that any controls enhance rather than undermine national security. We, along with many others, however, have highlighted the substantial downsides of restrictions promulgated via traditional export control approaches alone.1 Restrictions based only on the performance criteria of these technologies, for example, would ignore that technologies with the same performance criteria are used for both beneficial purposes (e.g. developing powerful new medications or more efficient fertilizers) and nefarious ones (e.g. developing WMD or carrying out human rights abuses). This overly broad approach would have the unintended consequence of foreclosing beneficial uses. Taking facial recognition as an example, the same digital biometrics technology, software and hardware capture and analyze information to identify people, whether the purpose is finding a terrorist or a missing child or finding an oppressed dissident or minority. The approach also fails to keep pace with the rapid development of these technologies and would quickly become outdated. It would cut off U.S. companies’ access to global markets and talent, even in allied countries that are not bound by similar restrictions and want geo-flexibility in their development and tech solutions. And it would leave a void for companies outside the U.S. to fill, including by those potentially hostile to U.S. interests.

For these reasons, restricting the problematic users and uses of these technologies is the more targeted and balanced of the traditional export control approaches, as it protects national security interests while preserving beneficial uses and tech leadership. Today, however, such rules depend on knowledge about uses or users that can be difficult to obtain, and on tools that are hard-pressed to systematically and scalably distinguish authorized from unauthorized technologies provided to a single end user. A digitally transformed approach can more effectively implement and enforce such end-user and use-based restrictions.

The solution: How a digital transformation of export controls will work

The government will set policies that determine, from an export controls and national security perspective, who can access sensitive technologies and for what purposes. These policies would then be implemented and enforced within the protected technology itself, while the infrastructure around it is hardened to prevent circumvention. These solutions can protect against problematic users and uses in a more targeted, effective and dynamic way – not just at initial access but continuously in a deployed environment. Key features include:

  • Software features designed into sensitive technologies can enable real-time controls against prohibited uses and users. These features would include, for example, identity verification systems and information flow controls to discern whether the relevant facts and criteria are consistent with authorized users and uses. “Tagging” can be used to ensure the same controls apply to derivatives of these sensitive technologies (a minimal illustration of such a gate follows this list).
  • “Hardware roots of trust” built into hardware that contains sensitive technologies can complement software-based solutions by requiring authorization before code or data can be sent through the equipment. More robust hardware identity verification through secure co-processors, akin to those used to secure payments on mobile phones or to prevent cheating on game consoles, can further protect hardware against unauthorized access and uses.
  • Tamper-resistant tools for sensitive technologies and for protective software and hardware solutions themselves can harden infrastructures against subversion.
  • At a minimum, the above techniques can enhance export control systems. Artificial intelligence (AI) techniques, however, can be used to more dexterously identify and restrict problematic end users or uses by continuously improving to incorporate government policy changes or observations from unauthorized user or use attempts. OpenAI’s GPT-3 – a large neural language model that is trained on a broad range of internet data to complete texts that the user enters – is an example of such a technique already in development.
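
To make the software-based controls described above more concrete, the sketch below shows, purely for illustration, how an identity- and use-based policy gate with tagging of derivatives might look in code. Every name in it (ExportPolicy, AccessRequest, evaluate_request, tag_derivative, the sample identities and use categories) is a hypothetical stand-in under assumed policy inputs, not an actual Microsoft or OpenAI implementation.

```python
"""Illustrative sketch only: a software-enforced export-control policy gate."""
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ExportPolicy:
    """Government-set policy: who may access the technology, and for what."""
    authorized_users: set[str]   # identities cleared after verification
    prohibited_uses: set[str]    # end uses of concern, e.g. "wmd_development"


@dataclass
class AccessRequest:
    verified_identity: str       # result of an identity-verification step
    declared_use: str            # stated end use for this session
    payload_tags: set[str] = field(default_factory=set)


def evaluate_request(policy: ExportPolicy, request: AccessRequest) -> bool:
    """Allow access only for authorized users pursuing non-prohibited uses."""
    if request.verified_identity not in policy.authorized_users:
        return False
    if request.declared_use in policy.prohibited_uses:
        return False
    return True


def tag_derivative(request: AccessRequest, derivative_tags: set[str]) -> set[str]:
    """Propagate control tags to derived outputs so the same policy follows them."""
    return derivative_tags | request.payload_tags | {"export-controlled"}


if __name__ == "__main__":
    policy = ExportPolicy(
        authorized_users={"researcher@example.org"},
        prohibited_uses={"wmd_development"},
    )
    request = AccessRequest(
        verified_identity="researcher@example.org",
        declared_use="drug_discovery",
        payload_tags={"model-v1"},
    )
    if evaluate_request(policy, request):
        print("access granted; derivative tags:", tag_derivative(request, set()))
    else:
        print("access denied; attempt logged for review")
```

Because the check runs inside the deployed technology and the tags travel with derived outputs, the same policy can be re-evaluated continuously rather than only at the point of initial export.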

Applications beyond export controls

There are several other applications that would benefit from this approach:

  • Scalability for securing supply chains. The solutions described can be combined to create comprehensive systems to secure supply chains and protect critical infrastructure assets. For example, components secured with hardware roots of trust and software-based solutions – when integrated into larger systems – can give those systems the same protection.
  • Industry Corporate Social Responsibility. Industry has its own imperatives to ensure that its technologies are not used in destructive and dangerous ways. For example, Microsoft has long publicly supported regulations on the use of facial recognition technology and has committed to self-enforce similar restrictions based on our Facial Recognition Principles. More recently, Microsoft imposed gating restrictions on its Custom Neural Voice service, a synthetic voice generating technology with incredible benefits, such as allowing people with degenerative diseases to preserve and project their own voices from a computing device when they can no longer speak. Because the technology can also be used to create deepfakes, however, Microsoft restricts access to the technology based on use and users.
  • Customer-driven incentives. OpenAI is working collaboratively with its customers on AI-driven systems to direct model outputs so that they conform to customer expectations and to OpenAI’s mission that AI benefit all of humanity, such as providing reliable safeguards against user-generated hate speech (a minimal illustration follows this list). Similarly, Microsoft is already deeply involved in the Open Compute Project, where providing companies with openly available tools to secure systems is core to the project, even when those systems involve hardware from multiple companies. Today, Microsoft incorporates chips into its Azure systems that use standards consistent with the Open Compute Project’s work.
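
As one purely illustrative example of the kind of customer-driven safeguard described in the last bullet, the sketch below wraps a text generator with a content check before returning output. The classifier, category names and generator are hypothetical placeholders, not OpenAI’s or Microsoft’s actual APIs or services.

```python
"""Illustrative sketch only: a customer-configurable output safeguard."""
from typing import Callable, Set

# Categories the customer has asked to block (hypothetical policy).
PROHIBITED_CATEGORIES: Set[str] = {"hate_speech"}


def classify(text: str) -> Set[str]:
    """Stand-in content classifier; a real system would use a trained model."""
    flagged_terms = {"<example slur>"}  # placeholder vocabulary for illustration
    if any(term in text.lower() for term in flagged_terms):
        return {"hate_speech"}
    return set()


def safeguarded_completion(generate: Callable[[str], str], prompt: str) -> str:
    """Generate text, then withhold it if the prompt or output violates policy."""
    if classify(prompt) & PROHIBITED_CATEGORIES:
        return "[request declined: content policy]"
    completion = generate(prompt)
    if classify(completion) & PROHIBITED_CATEGORIES:
        return "[output withheld: content policy]"
    return completion


if __name__ == "__main__":
    # Trivial stand-in generator; in practice this would call a language model.
    echo_model = lambda p: f"Echoing: {p}"
    print(safeguarded_completion(echo_model, "Write a friendly greeting."))
```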

Employed appropriately, these digital solutions will provide valuable commercial benefits to users, as well as a far more powerful, dynamic and targeted method for controlling exports of important technologies. We look forward to continuing to exchange ideas with a range of stakeholders interested in pursuing such solutions.

1 See, e.g., Microsoft Comment on Advance Notice of Proposed Rulemaking Regarding Review of Controls for Certain Emerging Technologies (“Microsoft Comment”), at 3, available at https://www.regulations.gov/document?D=BIS-2018-0024-0175; OpenAI Comment on Advance Notice of Proposed Rulemaking Regarding Review of Controls for Certain Emerging Technologies (“OpenAI Comment”), available at https://www.regulations.gov/document?D=BIS-2018-0024-0195; Google Comment on Advance Notice of Proposed Rulemaking Regarding Review of Controls for Certain Emerging Technologies, at 19, available at https://www.regulations.gov/document?D=BIS-2018-0024-0160; Semiconductor Industry Association Comment on Advance Notice of Proposed Rulemaking Regarding Review of Controls for Certain Emerging Technologies, available at https://www.regulations.gov/document?D=BIS-2018-0024-0130.
