On 7 January 2025, Mark Zuckerberg announced that Meta would end third-party fact-checking on Facebook and Instagram in the US, citing concerns over political bias and a need to restore trust. In its place, Meta is rolling out a community notes system, echoing X’s model, in which users collectively assess content credibility. Whilst seemingly a minor move amid the new US cultural status quo, this policy change marks a significant shift in how digital platforms shape, legitimise and regulate the content we see online.
Meta’s algorithms, designed to maximise engagement and ad revenue, are continually questioned for their hidden influence on public opinion. Many are concerned that profit-oriented algorithms favouring emotionally charged content are distorting content visibility, manipulating geopolitical opinion, and fuelling harmful content creation – justifiably, according to Sarah Wynn-Williams’ whistleblowing memoir released last month. This, combined with the rising presence of bots, misinformation and deepfakes, intensifies the challenge for businesses to leverage social media platforms as a growth channel whilst maintaining brand safety.
So What?

For brands and advertisers, the implications of these changes centre on two key challenges:
Brand Reputation Concerns
Loosening content moderation policies increases the risk of ads appearing alongside harmful content. With impersonation on the rise, over 40% of advertisers expect brand safety to deteriorate (WARC). As Thinkbox CEO Lindsey Clay points out, Meta’s fragmented ad delivery makes controlling ad placement difficult, and no single advertiser has enough leverage to force Meta to improve safety measures. Marketers therefore face a dilemma over the content their ads appear alongside: stricter brand safety filters increase cost, while laxer suitability filters improve reach but put brand reputation at risk. This trade-off between brand protection and short-term performance metrics remains a strategic challenge. Brands may need to rethink how they mitigate risk while preserving marketing effectiveness, potentially taking a longer-term view on return on ad spend (ROAS).
Consumer Disengagement
The overload of misinformation is driving emotional fatigue and mistrust, reducing users’ responsiveness to traditional advertising. Many now gravitate toward peer-led content or nano/micro-influencer communities as they seek more authentic engagement channels. As Story Collective CEO Josh von Scheiner explains, “There’s so much noise that consumers tune out the vast majority.” To stay relevant, brands must focus on nurturing trust and explore alternative strategies to drive truly intentional and meaningful consumer engagement.
Our Perspective
Meta's changing moderation policies create an opportunity for brands to reconsider online trust and transparency. Those prioritising authenticity can stand out by moving toward curated communities that cut through ad noise. Lush exemplifies this approach by leaving mainstream platforms to focus on ethical PR, owned media, and in-store experiences. We expect more brands to explore new channels to rebuild customer trust and foster deeper engagement. While Meta remains a powerful advertising channel, concerns about effectiveness and brand safety are increasingly well-founded. As Lindsey Clay puts it, “Marketers should choose effectiveness, not ease,” acknowledging that volume and reach alone do not guarantee a positive impact.
In response, we expect winning brands to bolster their marketing efforts with tech-led safeguards and double down on alternative social channels to cultivate smaller but more engaged communities. Brands are investing in AI compliance tools that verify content against guidelines, and in detection systems such as media watermarking and deepfake assessment to authenticate media. Many are also building out first-party data and owned loyalty ecosystems to reduce dependence on Meta’s tracking capabilities. These strategies reflect a necessary shift toward technology-enhanced, transparency-led marketing, where robust infrastructure is as crucial as creative content.
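To make the first of these safeguards concrete, the sketch below shows a deliberately simplified, rule-based placement screen built around a blocklist and an allowlist. The topic lists, field names and three-way outcome are illustrative assumptions rather than any particular vendor’s tool; real compliance stacks would layer trained classifiers and platform controls on top of rules like these.

```python
# Minimal, rule-based sketch of an ad placement suitability screen.
# Topic lists and thresholds are illustrative only.
from dataclasses import dataclass

BLOCKLIST = {"deepfake", "scam", "graphic violence"}  # contexts to exclude outright
ALLOWLIST = {"recipes", "fitness", "travel"}          # pre-approved, brand-suitable contexts

@dataclass
class Placement:
    page_id: str
    topics: set

def assess_placement(p: Placement) -> str:
    """Classify a candidate ad placement as 'block', 'allow' or 'review'."""
    if p.topics & BLOCKLIST:
        return "block"    # hard brand-safety exclusion
    if p.topics <= ALLOWLIST:
        return "allow"    # entirely within pre-approved contexts
    return "review"       # unknown context: escalate to human or ML review

if __name__ == "__main__":
    print(assess_placement(Placement("page-1", {"travel", "recipes"})))  # allow
    print(assess_placement(Placement("page-2", {"news", "deepfake"})))   # block
    print(assess_placement(Placement("page-3", {"gaming"})))             # review
```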
What Next?
With Meta’s moderation changes likely to expand to the UK and Europe, brands should act pre-emptively: future-proof digital strategies, update playbooks, and invest in supporting technology to protect brand equity. This means acting on both immediate risks and longer-term shifts:
Short-Term Moves
Tighten content governance using blocklists, allowlists and AI tools to avoid unsafe placements.
Improve social and search listening to detect misinformation and distinguish real feedback from bots or bad actors (a simple heuristic sketch follows this list).
Stay compliant by aligning marketing ops with evolving global regulations.
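As a companion to the listening point above, here is a minimal heuristic sketch for flagging probable bot or bad-actor comments in a listening feed. The signals, thresholds and field names are assumptions for illustration only; production systems would rely on platform-provided signals and trained classifiers rather than fixed rules.

```python
# Illustrative heuristic for separating likely-bot comments from genuine feedback.
from collections import Counter

def likely_bot(comment: dict) -> bool:
    """Flag a comment as probable bot activity using simple, illustrative signals."""
    account_age_days = comment.get("account_age_days", 0)
    posts_per_day = comment.get("posts_per_day", 0)
    unique_words = len(set(comment.get("text", "").split()))
    # Very new, hyperactive accounts posting near-identical or trivial text are suspect.
    return account_age_days < 7 or posts_per_day > 50 or unique_words < 3

def summarise(comments: list) -> Counter:
    """Count genuine vs suspected-bot comments for a listening report."""
    return Counter("bot" if likely_bot(c) else "genuine" for c in comments)

if __name__ == "__main__":
    sample = [
        {"text": "Love the new range, delivery was quick", "account_age_days": 420, "posts_per_day": 2},
        {"text": "BUY BUY BUY", "account_age_days": 1, "posts_per_day": 300},
    ]
    print(summarise(sample))  # Counter({'genuine': 1, 'bot': 1})
```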
Long-Term Shifts
Build trust-driven communities through private channels, subscriber spaces and nano/micro-influencer partnerships.
Promote digital literacy and transparent practices so users can better judge content credibility.
Invest in verification tech to flag deepfakes and bot interactions.