As we head into 2024, there is an urgent call to action for us all to take steps to protect youth safety and privacy online and to ensure that technology – including emerging technologies such as AI – serves as a positive force for the next generation. As policymakers and regulators weigh potential measures to help advance safety outcomes in the coming weeks and months, it is essential that they consider both the benefits and the risks of technology for young people. Based on our experiences, we outline three themes for policymakers to consider as they work through initiatives to protect youth safety and privacy online, as well as Microsoft’s position on these issues.
Microsoft’s approach to online safety
At Microsoft, our goal is to empower young people to use technology safely, mindfully, and in pursuit of social, educational, and economic opportunities. In keeping with our mission and our commitments, this means:
- Unlocking the benefits and power of technology for all
- Maintaining our longstanding efforts to advance child online safety and privacy
- Advancing our understanding of how technology and child well-being intersect
We have long recognized that we have a responsibility to support safe online experiences for our users, including youth, and to contribute to a safer online ecosystem. This requires us to balance youth rights thoughtfully and holistically, including freedom of expression, access to information, privacy, and security. There is no one-size-fits-all approach; interventions to protect youth from harmful online content or conduct, or to protect their data, should be tailored to the service or feature, and to the harm we are seeking to address.
Key themes for policymakers to focus on in 2024 and beyond
- Technology is critical to enable young people to unlock social, educational, and economic opportunities.
Policy and regulatory measures should preserve the benefits of technology for young people and enable them to participate fully in their communities. Technology has been shown to:
- Create social opportunities: It enables young people to connect with others, access diverse resources and perspectives, and find support and information online
- Provide educational benefits: It can increase access to learning opportunities, help diverse learners thrive, and help young people develop skills for the digital future
- Create economic opportunities: Mastering computing and AI skills will become increasingly essential for many careers across the entire economy. Technology can also empower young people to create and innovate, with tools such as generative AI
However, we also acknowledge that technology can have negative impacts on young people’s well-being, and that these impacts are not evenly distributed: Some young people may be more vulnerable or marginalized than others. These risks may also change as technology evolves.
- Empowering young people to use technology safely and mindfully requires age-appropriate experiences, but age assurance methods have trade-offs.
We need to be able to identify the ages of our users to protect against potential harm. Knowing that a user is under a certain age can enable different content policies, privacy settings, and family safety features, or the provision of age-appropriate content.
However, there are challenges and trade-offs in implementing robust age assurance processes. There is currently no clear technical solution that achieves the accuracy needed to effectively identify or verify a user's age without introducing trade-offs for security, privacy, and human rights. In addition to considering how we can build an interoperable and frictionless user experience, conversations are underway across industry on ways to mitigate the following:
- Privacy and security risks, such as collecting and processing sensitive personal data
- Equity and access challenges, such as excluding or discriminating against certain groups of users
- Civil rights and government surveillance concerns, such as undermining online anonymity and freedom of expression
- User experience issues, such as creating barriers or delays to access online services
Microsoft is exploring different age assurance methods, enabling us to learn and improve our practices and solutions. We are engaging with technical and other organizations such as the International Organization for Standardization (ISO) to help develop frameworks to support interoperable and privacy-protective age assurance technologies.
But we also need the support and guidance of policymakers, regulators, and other experts to help navigate these trade-offs and find smart solutions that are effective and respect fundamental rights. We also want to work with experts to deepen our understanding of how age information should inform different interventions across different services and features, as well as where we can appropriately offer solutions without age verification measures.
- We must continue to grow the necessary evidence base to design technology and policies to advance youth well-being.
We need to constantly update and improve our understanding of how technology affects young people’s well-being, and how we can design technology and policies to best support and protect them. More research is emerging about the use of social media, but we see a need for additional, medically informed research to understand the impact of different online services on youth mental health, development, and learning.
We encourage stakeholders and experts to invest in further research on diverse topics, such as building understanding of how youth engage with different kinds of technology, the impact of limiting access to technology, whether some features or content might be more appropriately limited than others, and the potential to leverage technology to deliver mental health interventions. To support this, on Safer Internet Day we will publish the latest results from our annual consumer research.
And, throughout, there is a need to center youth voices and perspectives – we cannot meet our goals without understanding what young people want and need from technology.
Our support for regulatory measures
Regulatory measures such as the EU Digital Services Act are driving systemic changes in how industry approaches digital safety. We support the development of tailored, thoughtful measures that can support young people to engage safely online and commend the good practices starting to emerge globally. Microsoft supports:
- Risk-based approaches, tailored to specific online services and features: We support regulatory approaches that clearly differentiate between unique online services and leverage risk or impact assessments to deliver effective mitigations for risks to children and young people in the design or operation of those services.
- Clearly defined harms and duties: Where proposals incorporate a duty of care to address potential harms to children, we encourage policymakers to provide clarity on what’s required to discharge those duties and to clearly define the harms at issue, based on scientific research.
- Outcome-based approaches: To enable a service-specific approach that supports the rights and best interests of the child, we welcome regulatory approaches that focus on ensuring the systems and processes are in place to mitigate risk and build in improved privacy protections for children and teens.
- Regulatory harmonization and interoperability: To support seamless online experiences, we encourage ongoing regulatory cooperation and harmonization, including between privacy and safety regulators and among regulators internationally.
- Providing parental guardrails: Where appropriate to the service, we support requirements to provide parents and caregivers with robust tools that enable them to choose guardrails to manage their child’s online experiences.
We welcome the opportunity to further engage with experts, policymakers, and regulators to support the development of a safer online ecosystem for young people.