
Online hate crime and the use of social media in hate crime and abuse

By Anamaria Fonseca

**This piece contains references to abuse, discrimination and violence that some readers may find upsetting or triggering**

Amongst all the types of abuse that people can inflict upon one another, there is one that can no longer be ignored, especially nowadays – online hate crime. It can manifest in different forms, but all of them are capable of serious harm.

In a society where many relationships start or are maintained through the internet, it is of utmost importance to address the impact of virtual abuse and online hate crime. It can no longer be excused by treating ‘real life’ as separate from ‘virtual life’ when actions in one directly impact the other.

The perception that an online hate crime can unleash harmful consequences, just like any person-to-person expression of hate crime, has recently been adopted in new Crown Prosecution Service (CPS) guidelines released in October 2016: “Alison Saunders [the Director of Public Prosecutions] said the Crown Prosecution Service will seek stiffer penalties for abuse on Twitter, Facebook and other social media platforms. Saunders says the crackdown is needed because online abuse can lead to the sort of extremist hate seen in Charlottesville in the United States […], which left one person dead.” (Dodd, 2017).

According to the guidelines mentioned above, the definition of a hate crime is “any criminal offence which is perceived by the victim or any other person, to be motivated by a hostility or prejudice based on a person’s actual or perceived: race, religion, sexual orientation, disability, transgender identity or gender.” Within this perspective, online hate crime is understood as equally damaging: “whether shouted in their face on the street, daubed on their wall or tweeted into their living room, the impact of hateful abuse on a victim can be equally devastating.” Hence the need for public authorities to treat online hate crimes as seriously as face-to-face ones.

Racism, sexism, homophobia and other forms of hate crime are also practised online. The harm is compounded when they are combined against marginalised identities as a form of intersectional abuse, which unfortunately is quite common; for example, racism, sexism and homophobia may all be used in one insult to degrade and abuse. When the targets of this kind of crime are women, the impact and consequences are alarming. The scale of abuse against women online, in a space that should ideally serve as a safe platform for expression, is outrageous. “Abuse directed at visible and audible women demonstrates that cyberspace, once heralded as a new, democratic, public sphere, suffers similar gender inequalities as the offline world.” (Lewis, R. and others, 2016). For example, in a report for Amnesty International, many women spoke about the violence and abuse they are subjected to on Twitter and “emphasized how important the platform is to them – both professionally and personally. Women rely on social media platforms like Twitter to advocate, communicate, mobilize, access information and gain visibility.”

The Women’s Media Centre listed several means of online hate crime that are relevant in bringing awareness to the topic, to name a few:

  • Cross-platform harassment: harassment of the same target across multiple online platforms.
  • Cyber-exploitation, Nonconsensual Photography or ‘Revenge Porn’: when, without consent, sexually graphic images of the victim are distributed by the abuser, who took advantage of a prior relationship or hacked the person’s electronic devices to obtain the intimate material.
  • Deadnaming: when a person’s former name is revealed in order to cause harm. “This technique is most commonly used to out members of the LGBTQIA community who may have changed their birth names for any variety of reasons, including to avoid professional discrimination and physical danger.”
  • Defamation: false/negative information is propagated deliberately in social media with the purpose of shaming and harming the victim.
  • Doxing: the retrieval and publishing of the victim’s personal information, “including, but not limited to, full names, addresses, phone numbers, emails, spouse and children names, financial details.” “Dox” is a slang version of “documents” or .doc.
  • Electronically enabled financial abuse: control and/or denial and/or manipulation of the victim’s finances online, usually perpetrated by men towards women in intimate partner abuse.
  • False accusations of blasphemy: “women face online threats globally, but they run a unique risk in conservative religious countries, where […] blasphemy is against the law and where honour killings are a serious threat.” Accusing women of this, and the consequences of this, becomes a form of violence.
  • Flaming: “a flood of vitriolic and hostile messages including threats, insults, slurs and profanity.”
  • Gender-based slurs and harassment and slut-shaming: the words ‘bitch’, ‘slut’, ‘whore’ or ‘cunt’ are generally used to target women in online harassment, resorting also to the women’s appearance to provoke shame and embarrassment. Slut-shaming frequently targets teenage girls, and it consists of propagating rumours, images, and non-consensual photography of them in order to harm them.
  • Google Bombing: content is created and search-optimized so as to manufacture an online ‘truth’, so that when the person’s name is searched online, the rumour is one of the first things to appear, corroborating the defamatory lie.
  • Hate speech: “Harmful and negative gender stereotypes of women offline, as well as widespread discrimination against women rooted in patriarchal structures, manifest as violent and abusive tweets against some women on Twitter.” (Amnesty International, 2018) Hate speech is manifested when the speech, moved by hate, targets “individual or groups of people on the basis of their identity – gender, based on race, colour, religion, national origin, sexual orientation, disability, or other traits.”
  • Mob Attacks/CyberMobs: a group of hundreds or thousands of people attacks a specific target, often through a hashtag. “#Slanegirl, a hashtag that was used for the trending global public shaming of a teenage girl filmed performing fellatio, is one example.”
  • Rape videos: videos of violent and non-consensual sex are publicised, revictimising the person once again.
  • Retaliation Against Supporters of Victims: when supporters of a victim, such as family members, friends, amongst others, become targets of the hatred as well.
  • Sexual Objectification: when images of women and girls are objectified by harassers, many times even through manipulation of the images.
  • Threats: explicit threats, such as threats of rape, sexual violence, stalking, serious violence and death, are often easy to identify. Nonetheless, the fear and anxiety they generate are not necessarily as easy to address.

Social media is a powerful instrument for expressing thoughts and opinions freely. To the same extent that this is positive, it is also challenging: online hate is an example of the abuse and violence perpetrated there, intentionally or ‘unintentionally’ (Silva, L. and others, 2016).

In Toxic Twitter, a report from Amnesty International, several women spoke about how the online environment is hostile and harmful to them. One recurrent observation is that online hate reflects the real-life marginalisation of women, without the filters imposed by society. It is argued that the online manifestations of hate are the externalisation of inner prejudices, hate and violence that are not accepted in a face-to-face environment; the feeling of anonymity, impunity and freedom provided by the mask of people’s online selves makes this possible. Still, this is a damaging virtual reality. “Since the recognition of online hate and abuse, scholarship has sought to define, explain and understand this growing phenomenon. Problematically, extant research has tended to treat online abuse as separate from ‘real-world’ experiences.” (Lewis, R., Rowe, M., and Wiper, C., 2016). This virtual reality, however, reflects reality itself: it is not harmless or inoffensive, but a potent symptom of an unequal real world, exposing wider crimes and attitudes that must be addressed. “[…] Despite all the possibilities and the positive ways in which the platform is used by women on a daily basis, Twitter remains fertile ground for reinforcing existing gender inequalities and discrimination against women online.” (Amnesty International, 2018)

Women and marginalised groups face numerous struggles to affirm their position in society. The internet must not become an obstacle in the fight to enjoy equal rights, irrespective of race, gender, sexual orientation, career or any other characteristic. What makes us who we are should empower us, not become an instrument to undermine and abuse us, both on and offline.

“Online abuse is unacceptable for women in politics, just as it’s unacceptable for a woman anywhere to suffer that kind of abuse.”

Nicola Sturgeon, First Minister of Scotland (Amnesty International, 2018)

To address these forms of abuse and violence and make the internet safe for women specifically, it is absolutely necessary to raise awareness of the fact that things happening online can still cause real harm. It is important to set clear boundaries online between freedom of expression and hate speech, just as in real life. Above all, it is crucial to make clear that what happens on the internet must be as punishable as what happens face-to-face. Changes in laws and policies tackling internet issues, abuse and crime show a shift in the general understanding of this. We need to broaden what is encompassed by online crimes and to specifically address online hate crime.

Several non-profit organisations advocate for ending online harassment, embracing the causes of minorities and strongly advocating for women’s rights, for example Stand Against Racism and Inequality (SARI), HeartMob and GlitchUK. By spreading knowledge and educating people, they seek to change the mindset that enables the irresponsible use of the internet. Their work also goes beyond that, to transform the way people think and act in real life, since this is the real problem, while the online manifestation is ‘only’ a harmful symptom. Through SARI and HeartMob, victims of online hate crimes can also find support as they seek justice and emotional healing.

The Guidelines on Prosecuting Cases Involving Communications Sent via Social Media (GPSICSSM) include in Part B a specific section addressing Violence Against Women and Girls (VAWG). It is a strategy to “address crimes that have been identified as being committed primarily, but not exclusively, by men against women.” One of the most relevant features of all forms of gender-based violence is that the perpetrator generally “exerts power and / or a controlling influence over the victim’s life”, for instance in situations where the perpetrator uses ‘honour’-based abuse or intimidation to rule the victim’s virtual life through threats of exposure of the victim or of other people in the victim’s life. “The approach recognises VAWG as a fundamental issue of human rights, drawing on the United Nations conventions that the UK has signed and ratified. VAWG is recognised worldwide, and by the UK Government.”

Following the recommendations of Report Hate Now, it is important to screenshot any evidence of online hate; according to the GPSICSSM, it is also crucial that the crime is reported as soon as possible.

“In May 2016, the [European] Commission and four major platforms (Facebook, Microsoft, Twitter and YouTube) announced a Code of Conduct on countering illegal hate speech online. Since then, more companies have joined, and they are increasingly meeting the goals of the Code of Conduct, including removing illegal hate speech within 24 hours.”

European Parliamentary Research Service Blog (2018)

How to report online hate crime

To report online hate, several platforms already have their own tools for tackling these incidents. For example:

On Twitter, they suggest a series of steps to disengage from any disagreement or unwanted communication online. They also advise that if the behaviour continues and constitutes abuse, you should report it via a dedicated link they make available.

On YouTube, a flagging feature is available to report any kind of inappropriate content, and there are also links to report specific situations, such as an abusive user.

On Facebook, they set out steps for reporting any kind of abuse, including in secret conversations. Find the situation that matches the one you are going through and follow the proposed steps.

For cases that take place in the UK, online hate can be reported on the True Vision website, owned by the National Police Chiefs’ Council.

“The CPS views all hate crime seriously, as it can have a profound and lasting impact on individual victims, undermining their sense of safety and security in the community. By dealing robustly with hate crime, we aim to improve confidence in the criminal justice system and to increase reporting of hate crime.” (Crown Prosecution Service, 2018)

Online hate crimes must not go unpunished, as they can cause as much damage as crimes committed offline. Awareness must be raised, and accountability must be enforced. Everyone must be free to be themselves, to express themselves freely and not be subjected to any form of violence in the online world. We must keep fighting for that and spread the word!
