Should Hate Speech (and Disinformation) Be Allowed?

Freedom of speech seems, in theory, like a lovely ideal. Speech and freedom both sound pretty good, so when times aren’t tough it’s easy for us all to agree that sure, we support it! But when the subject of hate, or more recently dis- (or is it mis-?) information, comes up, things get murky quickly.
It’s fairly easy to sound the alarm about hate speech. We can find examples of speech that many or most of us call abhorrent, that we don’t want to hear, and that we don’t want others to hear. And it’s easy to cry loudly about the dangers of book banning or the imprisonment of journalists covering political issues in restrictive regimes. When it comes to misinformation, amplifying one set of examples can bolster arguments for why euphemistic intervention or “harmonization” by governments or media monopolies is justified, while pointing to another set warns of what can happen when censorship goes too far.
The ongoing debate over free speech, recently given a new dimension by Elon Musk’s takeover of Twitter, should be put to bed by two arguments we often sidestep in our discussion of what we as a society allow to be said.
The first is that we must admit that choosing what to allow in public is inherently subjective. What is considered hate speech today would have been considered appropriate fifty years ago, and may still be in other nations with different cultural values. What is considered misinformation likewise depends entirely on one’s background, education, personal philosophy and worldview. Even people looking at the same set of facts can arrive at completely different conclusions.

While many argue that misinformation causes a tremendous amount of damage, there remains the problem of who gets to call what misinformation. What is misinformation, and what is scientific disagreement? We cannot celebrate the value of the latter if we ban the former. There are valid scientific arguments happening in everything from asteroids to zoology, and labelling information in one field as misinformation increases the danger that such labelling will be applied to another. As governments and social media giants step up their efforts to remove disinformation from the public sphere, they are failing to provide transparent, quantifiable and objective criteria for what constitutes misinformation. They rely instead on what is “published by major news outlets”, which award themselves the role of arbiters of what is real or fake, or on “government scientists”, who work for the state and necessarily serve its interests. In fact, there is no such thing as an objective criterion - with almost 8 billion people on the planet, you can be certain there are 8 billion different views on whether a given piece of information is true or false. Nor can you point to “science” as a valid criterion for deciding what information is allowed. Even when there is scientific consensus about a certain fact, there is never scientific unanimity, and a lack of unanimity means one scientist’s misinformation is another scientist’s truth.

Beyond the clear problems that censoring misinformation creates for the public, the benefits for those who wield the censorship axe are questionable, since such efforts can and do backfire. Censorship in the name of suppressing misinformation, whatever its dangers may be, is simply not a viable option for anyone who truly wishes to educate and inform - or who wishes to be thought of as a credible source for both.
Labelling hate speech is just as subjective as labelling misinformation. Is suggesting that a group of people should die hate speech? Is saying we should not tolerate certain people hate speech? I would argue yes, but both have been printed on the front pages of Canadian newspapers with nary a hint of opposition from the very government that is taking steps to outlaw it.
But my second argument, which I see as the biggest problem with suppressing either hate speech or misinformation, is this: banning (supposedly) harmful speech does not ban the (supposedly) harmful thoughts or impulses that led to it. If someone were to tweet “I hate Sarah Climenhaga (that’s me) and I wish she would die”, I would certainly be alarmed, to say the least. I would have no problem labelling such a tweet as hate speech. But I would far rather know that someone out there holds that sentiment than be ignorant of the potential danger I face. That way I can assess the threat the person poses and consider my options, whether that means communicating with the person who said it or taking measures to protect myself. I don’t believe that banning the speech, even if allowing it results in a host of others voicing the same view, addresses the underlying problem of people wishing ill on me. The underlying problem is what I want to address, not the fact that people are saying it out loud. And should the danger move beyond expression toward explicit threats or actual violence, there is already a criminal code in place to deal with that.
We often wish that bad things would just go away - our mean, shallow or judgemental thoughts, the cruel words that others say - because they cause us or others pain. But trying to deny their existence, or to control or legislate the words away, will just drive them underground, and the pain will show up somewhere else or in some other form.
People gravitate to those who offer something that resonates with what they already want - and if they gravitate to something we disagree with, we need to ask ourselves why. Asking why allows us to find common ground on which to heal the wounds that create hate. Let’s create the world we want by spreading love ourselves rather than banning speech we hate. And let’s stop suppressing what we think is fake, and instead shout our own truth from the rooftops.