The Capitol raid could change North American social media for good

Twitter’s actions in Europe prove it’s capable of removing extremism from the platform

Social media is partially to blame for the storming of the US Capitol.

On Jan. 6, thousands of extremist Donald Trump supporters and white supremacists who believed the 2020 presidential election had been ‘stolen’ from their candidate stormed the United States Capitol. Two days later, the president was permanently suspended from Twitter.

The social media platform has spent the last four years defending its decision to allow right-wing extremism to continue on its website, often citing the difficulty of policing American users. In 2019, the company argued that removing extremism in the US would disproportionately impact Republicans. In short, Twitter was worried that if it implemented any kind of filter targeting white nationalism or fascism, the accounts of prominent Republican politicians would be removed.

Instead of being banned, neo-Nazis were all but embraced on Twitter’s platform. In January 2020, a BBC investigation found that Twitter was allowing brands to micro-target ads at neo-Nazis, transphobes, and homophobes, finding that brands could target users by which search terms they had used. With no hard limits on what terms brands could choose, it was entirely possible to target users interested in “transphobia,” “islamophobia,” or “anti-gay.” Twitter apologized and rectified the issue but, unfortunately, it’s just another offence in a long history of making excuses for the hate speech prevalent on the platform.

However, those looking to scroll through their feeds without seeing white nationalist rhetoric have found a unique loophole. For years, some users have chosen to switch their Twitter account’s country setting to Germany.

Extremism has already been erased from social media feeds in many parts of Europe, and as early as 2012, neo-Nazis were being removed from Twitter in Germany. Under a country-specific withholding policy, Twitter allows content to be blocked in individual countries when tweets violate local laws, and German police were able to use this policy to begin preventing the online spread of fascism in their nation. These filters that remove neo-Nazi content in Germany prove that, despite social media companies claiming removing extremism is too difficult, it’s entirely possible.

The insurrection that took place on Jan. 6 at the US Capitol has once again raised the stakes for the American social media landscape. Trump’s removal from Twitter, while long overdue, is an unprecedented move for the platform in the US.

Trump isn’t the only public figure who has been impacted by these bans. On Jan. 8, the same day Trump was permanently suspended from Twitter, a slew of his advisors and supporters were also removed. Former national security advisor Michael Flynn, lawyer Sidney Powell, and over 70,000 accounts with ties to the QAnon conspiracy were removed from the platform. With concrete proof that inflammatory social media posts and the spread of misinformation can lead to dangerous levels of violence, public safety has to become the number one priority.

Twitter CEO Jack Dorsey appears to agree. In an interview with the Washington Post, Dorsey defended the decision to ban the president, saying the company was forced to “focus all of [their] actions on public safety.” With the ban on the president, it seems Twitter is no longer interested in protecting Republicans. After years of new guidelines that never seem to work, Twitter appears to be ready to take action in removing hate speech from its website, and it isn’t the only social media giant to do so.

Facebook, Snapchat, YouTube, Twitch, and Reddit are all on the long list of the platforms that chose to ban Donald Trump after last Wednesday’s Capitol raid. Popular social media websites have finally made the decision that public safety trumps ‘free speech’—and all it took was the incitement of an insurrection.

Since at least 2012, Twitter has shown it has the ability to rein in hate speech but has chosen not to. In North America, where hate speech laws aren’t strict, there’s been little legal incentive for social media companies to police the content spread on their platforms. But accountability is coming for both social media users and the companies themselves, and it will likely mean our social media is about to look a little more like Germany’s.
