
Why Policing the Internet for True Threatening Speech is Easier Said Than Done

February 23, 2015
Is that a threat? Or is it free speech?
 
The Internet has brought countless changes to our lives, one of them being the need to re-examine free speech. Police, the judicial system, and ordinary citizens have been questioning whether to regulate the growing number of threats being made online. With movements like GamerGate and the Men's Rights Movement gaining traction, an alarming number of women are being threatened online. Those threats also spill offline, in the form of pranks, bomb threats, and other crimes, and people are asking how to address threats before they become real-world behavior. This is an important legal question because the relevant laws were written before the Internet existed and may not effectively address threats made online.
 
The Supreme Court is currently considering a case in which a man wrote about his desire to kill his ex-wife on his Facebook page. She says his words were threats that made her fear for her life (the government agrees). He says he is an aspiring rapper and the words were lyrics. Rap lyrics are protected as free speech; threats are not. If he were rapping onstage, microphone in hand and baseball cap turned sideways, it would be clear that he was engaging in free speech. Online, it is much harder to tell. The justices face a difficult decision: Should his words be considered a threat because they caused his ex-wife real fear? Or should we take the speaker's word for it when he says he never would have acted on them?
 
How do we define free speech?
 
Free speech is arguably the most important aspect of our democracy. Each step toward becoming a more perfect union has started with someone speaking up against injustice and being able to do so without fear of retribution from the government. Because of this right, we are able to discuss new ideas, protest, write new laws, and move forward as a more just society. Of course, free speech doesn't mean speech without consequences. You can be fired from your job or have your business boycotted by people who are offended by your speech, but you won't be thrown in jail.
 
Not all speech helps move us forward. Freedom of speech protects vile, hateful, racist, sexist, and mean-spirited speech, too. And that is as it should be, because one person's hate speech is another person's well-reasoned argument. In a democracy, we don't decide which is which; we protect everyone's rights equally.
 
Over time, courts have carved out categories of speech that are not protected. Speech that directly harms others is punishable, whereas speech that merely discusses or advocates harm is not. For example, pornography depicting children is illegal because children were abused in the creation of that "speech." Pornography advocating the abuse of children, in which no actual children were involved, is protected. As abhorrent as abusing children is, you're allowed to talk about it, speak in its defense, and even write fantasies about it. You just can't do it.
 
Most of the other exceptions to free speech work the same way. Incitement is only illegal when imminent violence is being advocated. Fighting words are only prohibited when it is clear that the hearer is in immediate physical danger. Threats are only illegal when a reasonable hearer would understand the threat to be real and not mere hyperbole. Under current law, then, a threat made online would only be investigated if an objective person considered it credible and likely to lead to illegal action.
 
Should we treat threats made online differently?
 
Do we need different rules for online threats? Most threats made online are just trolls being trolls. Why not leave the laws the way they are and let police determine the few cases where things might escalate into offline behavior that could actually harm someone?
 
There are two problems with the way online threats are currently handled. First, the police really suck at the Internet. They don't understand how it works, and because of that they tend to ignore most threats made online. One journalist recounted going to the police about threats being made toward her online; the police told her the threats were not harming her because she could simply avoid the website on which they were being made.
 
On the surface this makes sense, but it doesn't account for how the Internet actually works. Yes, if someone is threatening you on some subreddit, you can easily avoid that site. But what about sites like Facebook and Twitter? Many women face hundreds of threatening tweets per day. Are they supposed to avoid Twitter entirely? Would we tell a woman to give up her cell phone or email account to get away from threats? Twenty years ago, we might have. Now we recognize cell phones and email as necessary tools for conducting business, and I don't think we're far from finding social media sites just as indispensable.
 
The CEO of Twitter apologized this month for how slowly the site has moved to address concerns about threats. While Twitter does offer the option to block a particular account, people can create a limitless number of accounts, and there are no stronger measures (such as blocking IP addresses) for permanently banning someone. If Twitter begins banning people who make frequent threats, it will likely face criticism from groups claiming their free speech is being threatened. Those groups will be wrong: Twitter is a private business with the right to refuse service to whomever it wants. As a business, we can expect Twitter to address threats in whatever manner serves its interests, and that it is doing so now suggests the issue concerns a large share of its customers. Eventually, each site will likely adopt its own policy on threats, and people can choose where to spend their time accordingly.
 
But this won't solve the problem. The second difficulty with creating an Internet-specific distinction between speech and harm is that we don't have a workable definition of harm for the Internet age. Should we count as harmful not only the real-world actions that result from online speech, but the online speech itself? Should we consider it harmful that the threats are viewable by anyone online, rather than just telling targets to avoid them? This type of speech obviously causes very real stress and fear for its targets. Isn't that harm? Threats online are disproportionately aimed at women and people of color, whose experiences are often marginalized in American society. Isn't that harm? Why isn't doxxing (posting personal information like addresses and phone numbers on public sites without consent) taken more seriously? How should we weigh the emotional harm to the target against the right of the threatener to speak freely?
 
So far, free speech has trumped emotional harm in the majority of cases. And maybe it should. The problem is that it is very difficult to know when emotional harm will turn into physical harm. The most publicized example is probably Elliot Rodger, the young man who killed six people in Isla Vista, near Santa Barbara, after posting a YouTube video and emailing a 137-page manifesto about his desire to kill women for rejecting him socially. Rodger's contributions to pickup artist (PUA) forums drew attention after his death because those sites are full of threats and hostile speech toward women. Police had been called to his home after his family became alarmed by videos he had posted online threatening women, but, as in most such cases, they determined that the threats were not illegal and did not arrest him.
 
In retrospect, this tragedy might have been avoided had police taken his threats seriously and arrested him. But hindsight is 20/20. PUA forums (and their more mainstream counterparts, men's rights forums) are filled with posts like his, most of which will never result in violent acts. Police may not be able to tell the difference between the poster who will actually commit violence and the one who merely enjoys typing about it. Research has been done on distinguishing these patterns when it comes to terrorism, but not gendered violence. As in the pornography example, speech that merely advocates violence should be protected, and speech that is likely to lead to violence should not. But we need to get better at determining when speech crosses the line into causing actual harm, and it will be difficult to do so without stifling anyone's free speech.