Social media platforms need to do more to censor hate speech

The Internet is an excellent forum for sharing ideas—until hateful dialogue across social media leads to violence in the real world.
It’s time to hold websites accountable for censoring hate speech.
Social media is a wonderful connective tool. Websites permit the instant exchange of videos, pictures, and messages, bringing people together all over the world. 
In a digital age, the Internet has also become a forum for debate. Sites like Facebook and Twitter are now primary platforms for passionate discussion. 
Politicians use social media accounts to disseminate their values and rally supporters. Intellectuals and academics wield their profiles to condense complex information into bite-sized chunks that everyone can engage with. 
And trolls, bigots, and the prejudiced are granted space to freely distribute harmful—and dangerous—rhetoric.
Hate speech on the Internet is expressed in countless different ways. Blog posts, Facebook statuses, tweets, and group chats have all become conventional means of spreading harmful ideas. People of various religions, ethnicities, gender identities, and sexual orientations are just a handful of those frequently under attack.
Thanks to social media, these hateful beliefs have devastating real-world implications. 
In Germany, studies found a direct link between anti-refugee Facebook posts and physical attacks on asylum seekers. The 2018 Pittsburgh synagogue shooter was found to be heavily influenced by Gab, a social network notorious for its relaxed regulations around hate speech. In Myanmar, a United Nations investigation found that Buddhist nationalists and military leaders used social media to dehumanize Rohingya Muslims before engaging in a genocidal campaign against them.
It’s true that some sites like Facebook, Twitter, and Instagram have strict hate-speech regulations to limit accounts engaging in toxic rhetoric. But very few provisions exist to stop those whose accounts are deleted from creating new ones. 
In an age of Internet anonymity, creating a new email address under a fake name is often enough to start a brand-new profile. Without real accountability, online hate speech ultimately goes unchecked.
Some argue digital hate speech is an unfortunate but unavoidable consequence of the free expression social media enables. However, this consequence is unacceptable: it puts the already-threatened lives of minority groups in further jeopardy.
Whether it’s requiring users to create accounts linked to government-issued ID or implementing more sophisticated user-identification software, social platforms need to work harder to eliminate hate speech on their sites. 
Online hate speech places vulnerable groups at risk. Social media channels must be held responsible for censoring that content and reinforcing respect for those groups. 
Andrew is The Journal’s Features Editor. He’s a third-year Philosophy student.

