
Studies: Social media giants fuel antisemitism, extremism, hate speech

Studies say social media companies are not only hosting antisemitic and extremist content but are actually suggesting it to users.

A group of researchers has found that most of the world’s biggest social media platforms are pushing antisemitic and hateful content, and directing the worst of it at one group in particular.

But who exactly is responsible for the spread of antisemitism and extremism online?

Two new studies from the Anti-Defamation League and the Tech Transparency Project looked at Facebook, Instagram, YouTube, and X, formerly known as Twitter.

The reports say the social media companies are not only hosting antisemitic and extremist content but are actually suggesting it to users.

"These companies, in their own tools, are not just recommending and amplifying hate and extremism, but in some cases, in one of the reports, they're even auto-generating some of it," said Yael Eisenstat.

Eisenstat, vice president of the Anti-Defamation League’s Center for Technology and Society, says the findings are especially concerning given that all of these platforms have policies in place against hate speech.

"Social media companies will often tell you they're a mirror to society, right? That they are just showing you back what society itself is saying or just showing you what you are actually there looking for. But what these reports really ended up proving is that the companies themselves, in some instances, have a direct hand in the proliferation of online hate," said Eisenstat.


As part of the studies, both the ADL and Tech Transparency Project created male, female, and teen personas who searched for conspiracy-related content.

Their findings showed that three of the four social media platforms pushed hateful content to those users.

The most harmful content was recommended to the teen persona.

"The most disturbing was that the worst content, the most extreme content, was being pushed to the 14-year-old teenage persona that we had set up for the experiment," said Eisenstat.

The study found YouTube—which has its own troubled past with extremism on its platform—didn't take the bait from the fake accounts and didn't surface hateful content.

"And what that really proved to us is that this isn't just a problem of technical capacity or scale, because if YouTube was able to somehow refine their algorithms and recommendations to not take the bait and not purposely push more virulent content towards its users, then I don't really understand why any of the other companies couldn't do the same," said Eisenstat.

The studies come as the ADL, which has been tracking antisemitic incidents for over four decades, says antisemitism is at an all-time high.

"If you look at any number of recent mass shootings in the United States, whether it's Pittsburgh, Buffalo, the Club Q shooting, all of those mass shooters were exposed to antisemitic conspiracy theories coursing through their social media feeds. So I just want to emphasize that this is not just about speech online, but it actually has the potential to really cause harm in the offline world," said Eisenstat.

And although Eisenstat has told the U.S. government that regulation is needed in a largely unregulated industry, she also says there is no single magic fix for the problem we’re seeing online, in part because there is real demand for that type of content.