Meta’s Community Standards Were Updated in January 2025
Meta-owned platforms Facebook and Instagram are the world’s most popular and third most popular social media networks, respectively, which allows advertisers to reach a wide audience in a cost-effective way. Meta manages user experience through the implementation of community standards: a set of rules that users must adhere to. These community standards do not directly impact advertising, but they can affect the audiences that advertisers are trying to reach, both in how those audiences perceive the platform and in how they experience using it.
When I reviewed Meta’s January 2025 policy updates, a specific section caught my eye. It states that Meta removes any of the following user-generated content:
- Claims that a violent tragedy did not occur.
- Claims that individuals are lying about being a victim of a violent tragedy or terrorist attack, including claims that they are:
  - Acting or pretending to be a victim of a specific event, or
  - Paid or employed to mislead people about their role in the event.
This was introduced by Meta in June 2023 but has survived updates to their community standards since then. It was added to the “Bullying and harassment” section of the policy due to the prevalence of aggressive conspiracy theorists who target victims of tragedies. The September 11 attacks and the 2012 Sandy Hook school shooting are examples that seem to frequently draw the focus (and wrath) of these “tragedy deniers”.
It is a verifiable fact that tragedies like Sandy Hook happened, and Meta does not consider their occurrence to be a subject for “debate”. By refusing to give ground to harmful misinformation, this policy creates a safe space for constructive conversation and support. It is a helpful policy, but it relies on firmly separating fact from fiction.
How Does Meta Manage Fact-Checking?
Until now, Meta has partnered with third-party experts to remove misinformation – particularly misinformation that could lead to physical harm or violence, as well as “harmful health” misinformation (e.g. false claims about the supposed dangers of vaccines).
But on the 7th of January 2025, Meta announced the end of its partnership with third-party fact-checkers, starting in the USA. They are introducing crowd-sourced “community notes”, with user feedback guiding the removal of misinformation. Will the unpaid, voluntary effort of the masses be as effective as professionals at determining fact from fiction?
After all, if enough Meta users believe that vaccines cause autism and submit community notes in support of that position, will anti-vaccine content be left unchallenged on the platform?
What is Meta’s New Policy Regarding the LGBTQIA+ Community?
In the same January update, Meta chose to single out the LGBTQIA+ community for exclusion in the “Hateful conduct” section of their community standards policy. The updated policy states that Meta will remove allegations of mental illness…unless they relate to sexual orientation or gender identity.

The World Health Organisation’s International Classification of Diseases (ICD) removed homosexuality in 1990 and “gender identity disorder” in 2019, while the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM) removed the former fully in 1987 and the latter in 2013. In 2010, the UK Equality Act made sexual orientation and gender reassignment protected characteristics, making it illegal to discriminate against someone because of their gender identity or orientation.
The perception of LGBTQIA+ identities as “mental disorders” is often used to justify harmful practices such as conversion “therapy” (which is allowed to be promoted on Meta, despite countries including the UK legislating to end the practice). This update therefore seems to be at odds with Meta’s current policy of removing misinformation which could lead to “physical harm”.
What is Meta’s Position on Misinformation?
Meta seems to be embracing “harmful health” misinformation of their own by specifically allowing users to target LGBTQIA+ individuals with accusations of mental illness, in contravention of modern medical expertise.
It is a confusing contrast with their helpful policy on tragedy denial. If it is against community standards to spread misinformation about incidents like Sandy Hook (which all reliable sources confirm was a real event), why is Meta allowing users to spread misinformation about the LGBTQIA+ community, when all reliable sources agree that being gay or trans is not a mental illness? Both are examples of misinformation that can lead to real harm for the communities involved.
Meta CEO Mark Zuckerberg claims the changes will support and promote free speech for users.
What Does Meta’s January Policy Change Mean for Advertisers?
Meta’s January announcements seem to align with Donald Trump’s January policies regarding trans, nonbinary and intersex people. In the same month, Meta CEO Mark Zuckerberg enjoyed a front-row seat at Trump’s presidential inauguration, after donating $1 million to the inauguration fund. This may give advertisers confidence that Meta will enjoy a cooperative relationship with the new United States government, in contrast to the previously unstable relationship between Meta’s CEO and Trump. Stability is generally good for business, so this hopefully signals a continuation of cost-effective advertising from Meta.
One of the benefits of advertising on Meta is the wide reach of the platform. Facebook alone has 3.065 billion monthly active users, with over 56% of the world’s active internet users accessing it every month. This allows advertisers to reach people of all demographics at volume.
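As a quick sanity check, those two figures hang together if we assume a global base of roughly 5.4–5.5 billion active internet users (my own estimate from common industry figures, not a number from Meta):

\[
\frac{3.065 \text{ billion Facebook users}}{5.45 \text{ billion internet users}} \approx 0.56 = 56\%
\]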
However, any perceived alliance between Meta’s CEO and the controversial Donald Trump could potentially lead to some Facebook and Instagram users reducing their presence on these sites. This could limit the reach of advertisers who want to connect with users across a broad political spectrum. Additionally, some LGBTQIA+ users and creators are considering moving to other platforms because of the policy update. If a substantial number of affected users leave Meta platforms, advertisers may find it harder to reach people of all demographics on Facebook and Instagram.
Misinformation is also a concern for advertisers who want their ads to be shown alongside high-quality content, so if the new community notes system does not perform as well as third-party fact-checking, it could lead to brand safety concerns.
We have already seen Elon Musk’s platform X (formerly known as Twitter) lose ad revenue, with 26% of advertisers planning to cut advertising spend on the platform in 2025 due to uncertainty about the type of content it hosts and the erratic behaviour of its owner. It is worth noting that X currently uses crowd-sourced “fact-checking” on its platform.
The new community standards may mean that advertisers see a difference in the kind of comments that are left on Facebook and Instagram ads, so advertisers might benefit from closer moderation of ad comments. This may be of particular relevance to any ads that promote services for LGBTQIA+ people or feature anyone from that community in the creative.
There is nothing in the new policies to suggest that Meta will stop delivering cost-effective advertising to a wide audience. Meta still offers advertisers a range of targeting options and advertising products, with a convenient self-management platform. These factors may outweigh any brand safety concerns for the moment.
Will their new LGBTQIA+ policy reduce audience diversity on the platform? Will brands feel safe advertising on Meta with “community notes” policing content, instead of third-party professionals?
It is too early to tell, but it is important to keep track of significant policy changes which could impact brand safety, audience volume and audience composition. We will certainly continue to monitor the situation closely, so that we can help our clients make informed choices about their campaigns.
If you would like to discuss this, or anything else, please reach out here.