As we step into 2025, the digital safety landscape is undergoing a rapid transformation, influenced heavily by the rise of generative AI (GenAI) and the increasingly complex discussion around freedom of expression and content moderation.
From the proliferation of harmful content to regulatory challenges and the recent announcement by Meta to scale back content moderation efforts and fact-checking in the US, the digital safety field faces a pivotal moment that demands urgent attention and collaborative action.
Within this context, it is fair to ask: is digital safety achievable in 2025?
Generative AI is revolutionizing content creation, offering new opportunities for creativity and innovation. However, its misuse has rapidly escalated the creation and spread of harmful content. The rise of deepfakes, manipulated media and explicit synthetic content is blurring the lines of reality, amplifying disinformation and undermining trust in online ecosystems.
This surge in synthetic content presents significant challenges for content moderation systems, which must now navigate unprecedented levels of generated content and complexity. The erosion of trust extends beyond digital platforms and news outlets, affecting even personal communications and relationships.
As this erosion of trust grows, stakeholders and users increasingly demand transparency in how AI algorithms make decisions, particularly in content moderation. In 2025, aligning these systems with human rights principles and eliminating bias will be critical to rebuilding and maintaining public trust, in line with discussions around freedom of expression.
The World Economic Forum’s recently released Global Risks Report 2025 highlights the growing concern around adverse outcomes of AI technologies. In the report, mis- and disinformation ranked first for the second year in a row, while online harms ranked 14th.
Global risks ranked by severity over the short and long term. Image: World Economic Forum
“A complete view of the relationship between automated systems and online safety requires adequate understanding of how AI helps trust and safety teams counter online abuse. Increasing public understanding of AI as both a risk factor and a means of risk mitigation will be crucial to maintain trust in digital services in 2025,” says David Sullivan, Executive Director at the Digital Trust & Safety Partnership.
Perhaps most alarming is the rise in AI-generated synthetic child exploitation material. The Internet Watch Foundation reported more than 20,000 AI-generated images circulating on a dark web forum in just one month, underscoring the urgent need for comprehensive interventions to combat this growing threat. As these threats evolve, 2025 will require innovative solutions and global collaboration to safeguard online spaces and protect vulnerable populations.
At the start of 2025, the digital safety landscape faces a critical shift in content moderation practices, particularly with Meta’s recent announcement that it will scale back content moderation efforts across its platforms in the US and instead rely on users to add notes to posts.
This approach mirrors a broader trend seen in the industry, notably with Twitter’s implementation of its community notes system, which similarly relies on user contributions rather than centralized moderation to address potentially misleading or harmful content.
However, there are some concerns that Meta’s decision to scale back its fact-checking and content moderation initiatives could inadvertently expose vulnerable groups to a greater variety of harmful content, including disinformation, hate speech and cyberbullying.
Critics have argued that this approach risks amplifying harmful narratives and creating a less safe digital environment. In response, Meta has claimed that the solution to “bad” speech isn’t to block it but to encourage “good” speech to counter it, fostering a more open and self-regulating community dialogue. The company has argued that less content moderation would promote freer expression and allow more open dialogue.
Meta’s approach highlights a broader dilemma in balancing content moderation with freedom of expression. The key challenge in 2025 will be to find a middle ground where both freedom of expression and digital safety can coexist, ensuring that the rise of harmful content does not stifle trust in digital platforms.
Regulations will continue to shape the digital safety ecosystem, with evolving laws marking a shift in how online safety is approached. Even with a trend towards a more assertive regulatory environment, regional and national disparities in approach remain, with, for example, the US more reluctant to regulate online speech than many European countries.
The European Union’s Digital Services Act (DSA) and the UK’s Online Safety Act (OSA) have set important standards for content moderation, platform accountability and child protection. However, their implementation varies across jurisdictions, highlighting the complexities of aligning regulatory approaches.
UK online platforms will soon face new obligations under the OSA. Ofcom is set to release its Protection of Children Codes of Practice and updated risk assessment guidance by April 2025. This will further define platforms’ responsibilities in safeguarding users, particularly children.
Australia has also taken a bold step by introducing age restrictions on certain social media services for children under 16, signalling a global shift toward prioritizing child safety in the digital space. The move will see the Australian eSafety Commissioner work with government, industry, youth and the broader community to implement the age restriction.
The UK and Spain are among other countries exploring similar policies, reinforcing the international commitment to protecting youth online.
As these regulatory frameworks evolve, 2025 will be pivotal in the global push to enhance digital safety. In October 2024, the Global Online Safety Regulators Network launched the Online Safety Regulatory Index, a valuable tool for comparing and assessing safety standards across regions, which will help navigate the complex regulatory environment.
“eSafety’s plan to effectively implement and enforce the new social media age restrictions legislation in 2025 is one component of our work that will complement our existing holistic strategy to ensure platforms and services are more effectively deploying Safety by Design, whilst lifting safety practices and processes for all Australians,” says Julie Inman Grant, eSafety Commissioner, Australia.
“Our world-first industry standards, designed to require global tech giants to tackle the most harmful online content including child sexual abuse material and pro-terror content, are another important, interconnecting element of our multi-pronged approach to keeping children safe online. This includes continued digital literacy for children and empowerment of parents.
“Equally, our codes and transparency powers can all support social media age restrictions to provide an umbrella of protection for children and young people.”
The scale of digital safety challenges is alarming, yet businesses, civil society organizations and other stakeholders are implementing numerous interventions and solutions to address these harms.
Technological interventions, such as leveraging artificial intelligence and machine learning, are also critical to detecting and mitigating harmful content. However, effective strategies must extend beyond technology alone.
They require comprehensive policy improvements, the promotion of positive online behaviours, and user education to create a safer digital environment. This complexity stems from digital safety issues being not solely technological but deeply interconnected with cultural, societal and systemic dynamics.
For the challenges ahead, empowering users with knowledge about digital risks and safe practices is equally important for creating online platforms where users can have a voice. Awareness campaigns that enhance digital literacy are essential, enabling individuals to navigate online spaces responsibly and fostering a safer, more inclusive digital ecosystem.
To maximize the impact of these efforts, it is crucial to continuously evaluate intervention strategies and mitigation measures, identifying what works and what does not, in line with the debate around freedom of expression and safety. This ongoing analysis is essential for refining approaches and ensuring the sustained effectiveness of digital safety initiatives.
In digital safety, neglecting to anticipate emerging priorities often leads to a reactive approach, with responses initiated only after significant consequences and harms have occurred. Implementing proactive measures to address potential threats before they escalate is crucial to mitigating damage and ensuring resilience.
Research by WeProtect Alliance and Thorn underscores the value of regular horizon scanning. This approach enables rapid assessment of both current and emerging technologies that may disrupt the landscape of technology-facilitated child sexual exploitation. Such forward-looking initiatives must become a standard practice for child protection across all online harms as technologies and user behaviours evolve rapidly.
A paradigm shift is essential in 2025, and developers and policy-makers must collaborate closely, integrating harm prevention into technology development. This shift requires balancing safety and innovation, fostering solutions that protect users and their voices without hindering progress. By prioritizing proactive strategies, we can build a digital environment that effectively anticipates and mitigates future risks.
The interconnected nature of digital safety challenges demands a unified global response. Tackling these issues effectively requires strong partnerships among governments, private sector entities and civil society organizations. Such collaborative initiatives are essential for fostering innovation, sharing best practices and driving meaningful progress.
On the intergovernmental front, the United Nations Global Digital Compact is poised to set international standards for the digital world with a strong call for digital safety. As its implementation unfolds in 2025, it will be closely watched for its ability to address cross-border safety challenges and align global efforts toward shared objectives.
From a multistakeholder perspective, the Global Coalition for Digital Safety, convened by the World Economic Forum, plays a pivotal role. It provides a platform for stakeholders to collaborate on best practices, exchange insights and advance safety-by-design principles.
Later this year, the coalition will release a report mapping digital safety interventions and introduce a comprehensive framework for addressing disinformation across its entire lifecycle.
In the private sector, the Coalition for Content Provenance and Authenticity (C2PA) is taking significant steps to combat misinformation. By establishing technical standards for verifying the source and history of media content, the C2PA aims to enhance trust and accountability in digital spaces.
“A safer digital world demands not only regulatory and technological innovation but also a cultural shift that prioritizes ethical practices and anticipates future risks. With joint efforts, we can transform the digital safety landscape into one that empowers users and protects them in the midst of these complex and evolving challenges,” says Daniel Dobrygowski, Head of Governance and Trust at the World Economic Forum.
Achieving digital safety will require balancing freedom of speech with user protection in an increasingly complex online landscape. Through global collaboration, transparent regulation, technological innovation, and a commitment to human rights and ethical principles, significant progress can be made in 2025.