
YouTube has loosened its content moderation policies

YouTube has relaxed its moderation policies and is now instructing reviewers not to remove content that might violate its rules if it's in the "public interest," according to a report from The New York Times. The platform reportedly adjusted its policies internally in December, offering examples that included medical misinformation and hate speech.

In training material viewed by the Times, YouTube says reviewers should now leave up videos in the public interest — which includes discussions of elections, ideologies, movements, race, gender, sexuality, abortion, immigration, and censorship — if no more than half of their content breaks its rules, up from one quarter. The platform said in the material that the move expands on a change made before the 2024 US election, which allows content from political candidates to stay up even if it violates YouTube's community guidelines.

Additionally, the platform told moderators that they shouldn't remove content if "freedom of expression value may outweigh harm risk," and should take borderline videos to a manager instead of removing them, the Times reports.

“Recognizing that the definition of ‘public interest’ is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,” YouTube spokesperson Nicole Bell said in a statement to the Times. “Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.” YouTube didn’t immediately respond to The Verge’s request for comment.

YouTube tightened its policies against misinformation during Donald Trump’s first term as president and the covid pandemic, when it began removing videos containing false information about covid vaccines and US elections. The platform stepped back from removing election fraud lies in 2023, but this recent change goes a step further and reflects a broader trend of online platforms taking a more lax approach to moderation following Trump’s reelection. Earlier this year, Meta similarly changed its policies surrounding hate speech and ended third-party fact-checking in favor of X-style community notes.

The changes follow years of attacks on tech companies from Trump, and Google in particular is in a vulnerable legal situation, facing two Department of Justice antitrust lawsuits that could see its Chrome browser and other services broken off. Trump has previously taken credit for Meta’s moderation changes. 

As noted by the Times, YouTube showed reviewers real examples of how it has implemented the new policy. One video contained coverage of Health and Human Services Secretary Robert F. Kennedy Jr.’s covid vaccine policy changes — under the title “RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS” — and was allowed to stay up despite violating YouTube’s medical misinformation policies because the public interest “outweighs the harm risk,” according to the Times. (The video has since been taken off the platform, but the Times says the reasoning behind its removal is “unclear.”) Another example was a 43-minute video about Trump’s cabinet appointees that violated YouTube’s harassment rules with a slur targeting a transgender person, but was left up because it had only a single violation, the Times reports.

YouTube also reportedly told reviewers to leave up a video from South Korea that mentioned putting former president Yoon Suk Yeol in a guillotine, saying that the “wish for execution by guillotine is not feasible.”