
Illustration: Aïda Amer/Axios
Big Tech companies took swift action to limit President Trump's online reach following Wednesday's riot at the Capitol. Facebook announced his account would be shut down "indefinitely and for at least the next two weeks" and Twitter promised to ban him if he breaks its rules one more time.
Yes, but: The companies had been preparing for this moment for a while.
Why it matters: With the elections over and the president in his final days in office, tech companies feel they have more latitude to take tougher action, sources tell Axios.
The firms may also have an eye on Washington's looming power shift.
- Democrats who have long been concerned about the proliferation of misinformation and extremism on social media will soon be in charge of the White House and both houses of Congress.
Driving the news: A slew of platforms, including companies that have shown restraint over the past four years, finally pulled the plug on the president's accounts after Wednesday's events.
- Twitch and Snapchat disabled Trump's accounts.
- Shopify took down two online stores affiliated with the president.
- Facebook and Instagram banned him from posting for at least the next two weeks, and faced calls to boot him permanently, including from former First Lady Michelle Obama and high-ranking Hill Democrats.
- Twitter froze Trump out of his account Wednesday before reinstating him Thursday once he deleted problematic tweets.
- YouTube says Wednesday's events prompted it to accelerate enforcement of its policy against voter fraud claims, applying it to President Trump and others.
- Reddit says it's taking action on reported violations of its content policies, which prohibit the incitement of violence.
- TikTok is removing videos that violate its content rules and redirecting hashtags like #stormthecapitol and #patriotparty to its Community Guidelines.
The big picture: Since Trump's inauguration, social media platforms have grappled with how to moderate his and his supporters' posts, drawing criticism from all sides.
- They've taken incremental steps each year leading to this point, ranging from labels, to longer labels, to limiting the reach of posts, to removing posts, groups and accounts altogether.
From 2018 to 2020, pressure built on tech platforms to address Trump tweets that incited violence or contained lies.
- Many tech companies said that such posts were concerning, but ultimately felt it was best to let the public hear the president. Facebook CEO Mark Zuckerberg gave an address at Georgetown touting the importance of free speech on Facebook in fall 2019.
- In summer 2020, with coronavirus misinformation spreading and Black Lives Matter protests heightening tensions, platforms started tightening their policies, labeling and limiting misinformation.
- January 2021 has marked a new peak for the companies' restrictions.
Be smart: The damage is already done. Extremist communities have organized events on these platforms that turned violent, and Trump's many falsehoods have reached millions.