“What can we do?”
Those words continued to echo as I returned home from the World Economic Forum in Davos in early 2019. The director of a major news organization asked me that question point blank after we reviewed several deepfake videos together at a meeting I had arranged with him to discuss efforts being made by Microsoft’s Defending Democracy Program. The videos demonstrated how AI and graphics could be harnessed to generate persuasive, realistic renderings of political leaders saying things they had not said.
What could be done to address the risk posed to journalism and democracy by synthetic and manipulated media? How might we address the unprecedented challenges generated by the coupling of new forms of disinformation with viral sharing of content on the internet?
There are no easy answers, but several promising ideas have come to the fore. One important direction in the fight against disinformation is to develop and field technologies for certifying the origin, authenticity and history of online media, which we refer to as the provenance of the content. I’m excited about the progress in this direction, nourished by strong cross-organization collaborations.
Today, we hit another milestone. Microsoft and the BBC have teamed up with Adobe, Arm, Intel and Truepic to create the Coalition for Content Provenance and Authenticity (C2PA). The C2PA is a standards-setting body that will develop an end-to-end open standard and technical specifications on content provenance and authentication. The standards will draw from two implementation efforts: Project Origin (Origin), which focuses on provenance for news publishing, and the Content Authenticity Initiative (CAI), which focuses on digital content attribution.
Together, we are a small but growing coalition with a shared mission to re-establish trust in digital content via methods that authenticate the sources and trace the evolution of the information that we consume. This effort will require participation by global organizations with a desire to combat disinformation, consumers who want to regain trust in what they see and hear, and policymakers and lawmakers with the best interests of all of society as a top priority.
From exploration to possibility
The formation of C2PA comes via creative problem-solving at multiple organizations, with innovative efforts occurring independently and together.
Shortly after my meetings in Davos, I sketched out a back-of-the-envelope solution for media authentication and provenance. We’d need watermarking to tag content, combined with strong security and a means of storing and tracking allowable changes to content over time. I reached out to tap the expertise of longtime Microsoft Research colleagues: Henrique (Rico) Malvar, an expert in signal processing with a long history of contributions to rights management and compression technologies; Paul England, a security and privacy specialist who developed the Trusted Platform Module (TPM) technologies used to encrypt devices; and Cédric Fournet and Manuel Costa, who led efforts on the Confidential Consortium Framework (CCF), an open-source framework for building a new category of secure, performant blockchain networks.
I challenged the team with a question: Could we build an end-to-end pipeline that authenticates the identity of the source of audiovisual content and assures, across the transmission and longer life history of that content, that the “photons hitting the light-sensitive surface of a camera” are faithfully represented by the pixels on the displays viewed by consumers? Could such a “glass-to-glass” system accurately assign a “pass” or “fail” to digital media depending on whether the content was modified beyond a set of acceptable changes associated with normal post-production and transmission?
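To make the “pass” or “fail” idea concrete, here is a minimal sketch of how a verifier might compare a received file against fingerprints a publisher has registered. It is an illustration only, not the pipeline the team designed: the function names, the SHA-256 fingerprints and the notion of registering each approved rendition are assumptions made for this example.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Cryptographic fingerprint (SHA-256) of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()


def verify(received: bytes, registered: set) -> str:
    """Return "pass" if the received bytes match a fingerprint the publisher
    registered (the original capture or an approved rendition produced by
    normal post-production); otherwise "fail"."""
    return "pass" if fingerprint(received) in registered else "fail"


# Hypothetical usage: the publisher registers the original capture and each
# approved rendition (e.g., a transcode) before distribution.
original = b"...raw capture bytes..."
approved_transcode = b"...transcoded bytes..."
registry = {fingerprint(original), fingerprint(approved_transcode)}

print(verify(approved_transcode, registry))       # pass
print(verify(b"...tampered bytes...", registry))  # fail
```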
Early whiteboard captured as part of notetaking on the AMP effort.
We spent hours together drawing on a whiteboard, brainstorming ideas and performing attacks on potential solutions before we came up with a pipeline of technology and techniques that we had confidence in. The early working sessions with the initial team members were just a start. Thanks to the efforts of other researchers and engineers, including Microsoft Research security expert Jay Stokes and Azure Media Security lead Andrew Jenks, we developed a solution we refer to as Authentication of Media via Provenance (AMP), a blueprint for authenticating the provenance of media content.
Getting real
Moving a proposed solution from whiteboards, papers and prototypes into the real world of news media requires strong partnerships, particularly with partners in the business of content creation and distribution. We found strong resonance with technical and programmatic leadership at the BBC. We had also developed relationships with like-minded leads at CBC Radio Canada and The New York Times via a working group, hosted by the Partnership on AI, exploring threats posed by AI to media integrity. We were delighted to discover that our colleagues were also thinking deeply about media provenance and were enthusiastic about the directions we had been pursuing. Working together, we refined the ideas and presented them to the wider broadcast community.
At the Partnership on AI meeting with (L-R) Eric Horvitz (Microsoft), Jatin Aythora (Chief Architect, BBC), Bruce MacCormack (CBC Radio Canada) and Marc Lavallee (Head of R&D, The New York Times).
Last year, the BBC, CBC Radio Canada, The New York Times and Microsoft stood up Project Origin. The goal of the Origin partnership is to promote collaboration and discussion about the creation and adoption of a new media provenance tracking process focused on news and information content.
We now have an Origin technical proof of concept that establishes a chain of trust between the publisher and the end user. You can learn more here or watch this fun video featuring Rico. The basic idea is that the publisher of a media file, in this case a video, cryptographically signs a digital fingerprint of the file at the time of publication. That signature and fingerprint form a manifest that is recorded in a ledger, and a receipt is returned to the publisher. When a consumer views the file, the browser or video player checks the ledger for the manifest and receipt, then displays a signal to the user indicating whether that content is certified.
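To make the flow concrete, here is a minimal sketch of the publish-and-verify steps described above. It is an illustration only and not the Origin implementation: the class names, the toy in-memory ledger and the choice of Ed25519 signatures via the Python `cryptography` package are assumptions made for this example.

```python
from dataclasses import dataclass
from hashlib import sha256

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


@dataclass
class Manifest:
    publisher: str
    fingerprint: str   # SHA-256 hash of the published media bytes
    signature: bytes   # publisher's signature over the fingerprint


class Ledger:
    """Toy in-memory, append-only ledger standing in for a provenance service."""

    def __init__(self):
        self._entries = {}

    def register(self, manifest: Manifest) -> str:
        receipt = sha256((manifest.publisher + manifest.fingerprint).encode()).hexdigest()
        self._entries[manifest.fingerprint] = (manifest, receipt)
        return receipt  # the receipt is returned to the publisher

    def lookup(self, fingerprint: str):
        return self._entries.get(fingerprint)


# Publisher side: sign the fingerprint of the video at publication time
# and register the resulting manifest with the ledger.
publisher_key = Ed25519PrivateKey.generate()
video = b"...published video bytes..."
fingerprint = sha256(video).hexdigest()
manifest = Manifest("example-news.org", fingerprint, publisher_key.sign(fingerprint.encode()))
ledger = Ledger()
receipt = ledger.register(manifest)


# Consumer side: the browser or player recomputes the fingerprint, looks up
# the manifest in the ledger and verifies the publisher's signature before
# showing a "certified" indicator.
def is_certified(media: bytes, ledger: Ledger, publisher_public_key: Ed25519PublicKey) -> bool:
    entry = ledger.lookup(sha256(media).hexdigest())
    if entry is None:
        return False
    found, _receipt = entry
    try:
        publisher_public_key.verify(found.signature, found.fingerprint.encode())
        return True
    except InvalidSignature:
        return False


print(is_certified(video, ledger, publisher_key.public_key()))                     # True
print(is_certified(b"...modified bytes...", ledger, publisher_key.public_key()))   # False
```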
Beyond Origin, our team has had the opportunity to collaborate on and contribute to a related effort, the Adobe-led Content Authenticity Initiative (CAI). The CAI is building a system to provide provenance and history for digital media, giving creators a tool to claim authorship and empowering consumers to evaluate whether what they are seeing is trustworthy.
Moving ahead
Origin and CAI leads have now come together to stand up the C2PA. We’re very excited about this step forward and honored to be part of this journey. While work on Project Origin and the CAI will continue, we’ve formed the C2PA to apply what we’ve learned, generating the technical requirements and standards that will support interoperability of solutions and the wider application of technologies for detecting and thwarting manipulated content.
What can we do? That question framed our initial efforts as we kicked things off in front of a blank whiteboard with markers in hand. But with the efforts of researchers, engineers, technologists and advisors spanning multiple types of expertise and organizations, we’ve been able to bring promising solutions to life, aimed at strengthening journalism and protecting the foundation of our democratic societies.
Related resources:
- Deep Dive: Technical explanation of Project AMP’s components including a demo based on the paper AMP: Authentication of Media via Provenance.
- Tech Minute: Paul England explains how Project AMP works. Developers can get started on GitHub.
- C2PA is accepting new members. To engage with the effort, email: [email protected].