In The Loop


What's Trending, Plus Chicago's Unofficial TikTok Historian

Gov. Greg Abbott's trans kids directive; the Oscars move eight categories off the air; and rents rise. Plus, the story of TikTok's unofficial Chicago historian.

When the Capitol riot happened, mainstream social media sites — like Twitter and Facebook — took action.  

 

(VO) 

Then-President Donald Trump was banned from Twitter permanently... and suspended from Facebook for at least two years. The social media company, now known as Meta, said it would reassess the risk of violence near the end of Trump’s suspension in January 2023. 

 

While getting Trump off the sites was an immediate reaction from social platforms, lawmakers have spent the last year looking into the role these social media sites might have played in the PLANNING OF the Capitol insurrection. 

 

In 2021, we saw big tech CEOs take the hot seat on Capitol Hill as U.S. lawmakers grilled them about the spread of misinformation... and how their sites enabled extremists.  

 

(SOT) 

JACK DORSEY | Former CEO, Twitter 

March 25, 2021 

Lawmakers have to consider the "broader ecosystem" and "not just technology platforms we use." 

 

(SOT) 

MARK ZUCKERBERG | CEO, Meta  

March 25, 2021 

"I believe that the former president should be responsible for his words and that the people who broke the law should be responsible for their actions." 

 

Mark Zuckerberg caught a lot of heat for that comment, which critics saw as Facebook downplaying its role in the events of January 6th. And to put some numbers to all this, an investigation from ProPublica and our partners at The Washington Post, published earlier this week, found that between Election Day and the Capitol riot, there were at least 650,000 posts in Facebook groups attacking the legitimacy of Joe Biden's win, with many posts calling for political violence. 

 

(AOC) 

So, have these social media sites made major changes to avoid becoming tools for political violence? Yes... and no? 

 

First, it’s worth noting that the companies HAVE put in some new measures. 

 

HEADLINE GFX: Facebook 

Let's take Facebook for example: 

 

(VO) 

Meta made sure its oversight board was up and running to make decisions with real impact. The group is supposed to be independent of Meta and work as a “Supreme Court” of sorts. It issued its first five decisions in January of 2021. 

 

(AOC) 

The company also introduced new enforcement protocols for content posted by public figures during times of civil unrest and violence in June of last year. 

 

(VO) 

But Meta's critics say there’s still a lot of work to be done — and the company has felt a bit more heat since a whistleblower came forward last year saying the company did NOT do enough to combat misinformation and political violence planned on its platforms.  

 

(AOC) 

That misinformation includes content related to the 2020 presidential election and Jan. 6 insurrection. The whistleblower’s complaint claims this was done to "promote virality and growth on its platforms." 

 

(SWV) 

FRANCES HAUGEN | Facebook Whistleblower 

October 5, 2021 

"These problems are solvable. A safer, free speech-respecting, more enjoyable social media is possible. But there is one thing I hope you take away from these disclosures: Facebook can change, but is clearly not going to do so on its own." 

 

(AOC) 

This year, Meta says it will give updates when it leaves up content that violates its rules because of newsworthiness, and it will no longer presume that speech from politicians is inherently of public interest.  

 

HEADLINE GFX: Twitter 

Then there’s Twitter. While the site is nowhere near as large as Facebook, Trump did have a massive following and tweeted frequently during his presidency. Twitter 86’d that just a few days after the Capitol riot.  

 

Here's what else the platform changed since then: 

 

Immediately after the events of Jan. 6th last year, the company stopped allowing people to reply to, like or retweet posts that violated its updated civic integrity policy. Then, it permanently suspended thousands of accounts that mainly shared QAnon content. 

 

Twitter also launched a pilot program called Birdwatch, in which users can flag tweets they think are misleading and need more context. Finally, the site teamed up with The Associated Press and Reuters to elevate credible information on the platform. 

 

It’s worth noting that Twitter's founder and CEO, Jack Dorsey, stepped down back in November and was replaced by the company's chief technology officer, Parag Agrawal. 

 

To mark one year since the insurrection, the social networking site convened a team to monitor any content on its platform associated with January 6th that could lead to more political violence.  

 

The thing is... even with these safeguards, restrictions and community guidelines in place on the mainstream social networks, there are still plenty of other services that don’t mind becoming homes for misinformation or extremism.