Four months ago, when our team at Microsoft first made plans for a visit to New Zealand that began yesterday, we did not expect to arrive on the heels of a violent terrorist attack that would kill innocent people, horrify a nation and shock the world. Like so many other people around the globe, across Microsoft we mourn the victims and our hearts go out to their families and loved ones. Among those killed were two individuals who were part of the broader Microsoft partner community.
We appreciate the gravity of the moment. This is a time when the world needs to stand with New Zealand.
Words alone are not enough. Across the tech sector, we need to do more. This is especially true for those of us who operate social networks, digital communications tools or platforms that were used to amplify the violence: we need to learn from what happened in Christchurch and take new action based on those lessons.
Across Microsoft, we have reviewed how our various services were used by a relatively small number of individuals to try to spread the video from Christchurch. While our employees and technology tools worked quickly to stop this distribution, we have identified improvements we can make and are moving promptly to implement them. These include the accelerated and broadened implementation of existing technology tools to identify and classify extremist violent content, as well as changes to the process that enables our users to flag such content. We are exploring additional steps we can take as a company and will move quickly to add to these improvements.
We recognize, however, that this is just a beginning. More fundamental progress requires that we work together across the tech sector and in collaboration with governments and nongovernmental organizations so we can take bigger steps.
What should we do?
To start, we should acknowledge that no one yet has all the answers. This is an area in which companies across the tech sector need to learn, think, work and act together. Competition is obviously indispensable to a vibrant technology sector. But when it comes to saving human lives and protecting human rights, we should act in a united way and enable every company large and small to move faster.
Ultimately, we need to develop an industrywide approach that will be principled, comprehensive and effective. The best way to pursue this is to take new and concrete steps quickly in ways that build upon what already exists.
There are in fact important recent steps on which we can build. Just over two years ago, thanks in part to the leadership and urging of the British government and the European Commission, four companies – YouTube, Facebook, Twitter and Microsoft – came together to create the Global Internet Forum to Counter Terrorism (GIFCT). Among other things, the group’s members have created a shared hash database of terrorist content and developed photo and video matching and text-based machine learning techniques to identify and thwart the spread of violence on their platforms. These technologies were used more than a million times in 24 hours to stop the distribution of the video from Christchurch.
While these are vital steps, one of the lessons from New Zealand is that the industry rightly will be judged not only by what it prevented, but by what it failed to stop. And from this perspective, there is clearly much more that needs to be done. As Prime Minister Jacinda Ardern noted last week, gone are the days when tech companies can think of their platforms as akin to a postal service, without regard to the responsibilities embraced by other content publishers. Even if the law in some countries gives digital platforms an exemption from decency requirements, the public rightly expects tech companies to apply a higher standard.
As an industry, tech companies created new services to bring out the best – not the worst – in people. To break down boundaries, not sow division. But as with virtually every technology ever invented, people are using digital services for both good and ill. Unfortunately, individuals are using online platforms to bring out the darkest sides of humanity.
The problem has multiple dimensions, and we will need to address all of them. We’ve seen online platforms and digital tools used to help recruit people to violent ideologies. These same tools have been used to incite and organize violent attacks on innocent people. And as Christchurch made painfully clear, digital platforms have been used to amplify the impact of attacks through the widespread sharing of violent images and videos around the world.
Regardless of whether a particular technology played a big, small or no part in this event, across the industry we all can and need to be part of the solution. There is a role for everyone to play. That should be one of the most important lessons from Christchurch.
There are at least three areas where we should focus our efforts.
First, we need to focus on prevention. We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they already have. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve newer, AI-based technologies that can detect whether brand-new content may contain violence. Together, these technologies can give us more granular control and improve our ability to remove violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies of content already flagged as violent, we need to further advance the technology to better identify and catch edited versions of the same video.
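To make the hashing idea concrete, the following is a minimal sketch of how matching against a shared database of flagged hashes works in principle. It is illustrative only: the hash values, bit-distance threshold and function names are hypothetical assumptions, and this is not PhotoDNA’s actual algorithm.

```python
# Illustrative sketch only: a simplified perceptual-hash lookup.
# The threshold, hash values and names are hypothetical, not PhotoDNA itself.
from typing import Set

MATCH_THRESHOLD = 10  # hypothetical maximum bit distance to count as a match


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def is_known_violent(candidate_hash: int, shared_database: Set[int]) -> bool:
    """Return True if the candidate hash is close enough to any hash already
    flagged as violent in the shared database."""
    return any(
        hamming_distance(candidate_hash, known) <= MATCH_THRESHOLD
        for known in shared_database
    )


# Example: an upload whose hash differs from a flagged hash by a single bit
# would be caught before it is published.
flagged_hashes = {0b1011_0110_1100_0011}
print(is_known_violent(0b1011_0110_1100_0111, flagged_hashes))  # True
```

The tolerance for small differences is what lets this kind of matching catch lightly edited or re-encoded copies; heavily edited versions can fall outside it, which is exactly the gap further work needs to close.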
We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions – building on ideas like safe search – to block access to such content at the point when people attempt to view or download it.
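As a rough illustration of this browser-based idea, here is a minimal sketch that checks a requested address against a local list of flagged URL hashes before fetching it. The blocklist, hashing scheme and function names are assumptions made for illustration and do not describe any existing browser feature or API.

```python
# Illustrative sketch only: a hypothetical client-side check against a
# blocklist of hashed URLs, in the same spirit as safe-search filtering.
import hashlib

# SHA-256 digests of URLs flagged as hosting the violent content (hypothetical).
BLOCKED_URL_HASHES = {
    hashlib.sha256(b"https://example.invalid/flagged-video").hexdigest(),
}


def should_block(url: str) -> bool:
    """Return True if the requested URL is on the flagged list, so the client
    can refuse to load or download it."""
    return hashlib.sha256(url.encode("utf-8")).hexdigest() in BLOCKED_URL_HASHES


print(should_block("https://example.invalid/flagged-video"))  # True
print(should_block("https://example.invalid/ordinary-page"))  # False
```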
We should pursue all these steps with a community spirit that will share our learning and technology across the industry through open source and other collaborative mechanisms. This is the only way for the tech sector as a whole to do what will be required to be more effective.
We also should recognize that technology cannot solve this problem by itself. We need to consider and discuss additional controls or other measures that human beings working at tech companies should apply when it comes to the posting of this type of violent material. There are legal responsibilities that need to be discussed as well. It’s a complicated topic with important sensitivities in some parts of the tech sector. But it’s an issue whose importance can no longer be avoided.
Second, we need to respond more effectively to moments of crisis. Even with better prevention, we cannot afford to assume that there will never be another tragedy. The tech sector should consider creating a “major event” protocol, in which technology companies would work from a joint virtual command center during a major incident. This would enable all of us to share information more quickly and directly, helping each platform and service to move more proactively, while simultaneously ensuring that we avoid restricting communications that are in the public interest, such as reporting from news organizations.
We should also discuss whether to define a category of agreed “confirmed events,” upon which tech companies would jointly institute additional processes to detect and prevent the sharing of this type of extremist violent content. This would better enable efforts to identify and stop such content before it spreads too broadly.
Third, we should work to foster a healthier online environment more broadly. As many have noted, while much of the focus in recent days rightly has been on the use of digital tools to amplify this violence, the language of hate has existed for decades and even centuries. Nonetheless, digital discourse has at times become increasingly toxic. There are too many days when online commentary brings out the worst in people. While there’s obviously a big leap from hateful speech to an armed attack, it doesn’t help when online interaction normalizes standards of behavior in cyberspace that almost all of us would consider unacceptable in the real world.
Working on digital civility has been a passion for many employees at Microsoft, who have recognized that the online world inevitably reflects the best and worst of what people learn offline. In many ways, anonymity on the internet can free people to speak and behave in ways they never would in person. This is why we believe it’s important to continue to promote four tenets to live by when engaging online: treat others with respect and dignity, respect each other’s differences, pause before replying, and stand up for ourselves and for others. This too is an area on which we can build further.
We all need to come together and move faster. This is the type of serious challenge that requires broad discussion and collaboration with people in governments and across civil society around the world. It also requires us to expand and deepen industrywide groups focused on these issues, including key partners from outside the industry.
Finally, we hope this will become a moment that brings together leaders from across the tech sector.
It’s sometimes easy amidst controversy for those not on the hot seat to remain silent and stay on the sidelines. But we believe this would be a mistake. Across the tech sector, we can all contribute ideas, innovate together and help develop more effective approaches.
The question is not just what technology did to exacerbate this problem, but what technology and tech companies can do to help solve it. Put in these terms, there is room – and a need – for everyone to help.