What is Hate Speech?
The International Covenant on Civil and Political Rights (ICCPR) states that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law”. Conversely, there is a school of thought that the individual should retain the right to freedom of speech, which is recognised as a human right under Article 19 of the Universal Declaration of Human Rights and in international human rights law through the same ICCPR. So how does the digital sphere deal with this dichotomy, given the prevalence of bots and the tension between personal responsibility and the protection of vulnerable individuals? Let’s explore how this is playing out in social media.
The European Commission
Social media platforms including Facebook, Twitter and YouTube recently received recognition from European lawmakers for making “steady progress” on removing illegal hate speech from their platforms.
But there is still work to be done, according to the European Commission, which warned it could still draw up legislation to ensure illegal content is removed from online platforms if tech firms do not step up their efforts.
Germany has already implemented a regime of fines of up to €50M for social media firms that fail to promptly remove illegal hate speech, though the EC is generally eyeing a wider mix of illegal content when it talks tough on this topic — including terrorist propaganda and even copyrighted material.
This is a complex issue, and writing from a personal perspective I tend to get a bit unnerved when the state talks about protections of copyright: I think copyright as a model of ownership is outdated and doesn’t deal with the increasingly digital modes of production that the internet has brought about. Removing terrorist content seems a lesser evil, and I see that as justifiable as a protective measure. As always, it must be a question of balancing security against hateful propaganda on one side, and the rights of the individual to autonomy and freedom on the other. Not an easy balance to maintain, so let’s explore how the EC is negotiating it.
Social Media and Codes of Conduct
In 2016 Facebook, Twitter, YouTube and Microsoft signed up to a regional Code of Conduct on illegal hate speech, committing to review the majority of reported hate speech within 24 hours and — for valid reports — remove posts within that timeframe too.
The Commission has been monitoring their progress on social media hate speech, to see whether these social media platforms are working to the standards agreed in the Code of Conduct.
Progress being made
On January 19th the Commission gave the findings from its third review, reporting that the companies are removing 70 per cent of notified illegal hate speech on average, up from 59 per cent in the second evaluation and 28 per cent when their performance was first assessed in 2016.
Last year, Facebook and YouTube announced big boosts to the number of staff dealing with safety and content moderation issues on their platforms, following a series of content scandals and a cranking up of political pressure (which, despite the Commission giving a good report now, has not let up in every EU Member State).
Twitter and free speech
Also under fire over hate speech on its platform last year, Twitter broadened its policies around hateful conduct and abusive behaviour, enforcing the more expansive policies from December last year.
Asked during a press conference whether the European Commission would now be less likely to propose hate speech legislation for social media platforms, Justice, Consumers and Gender Equality commissioner Věra Jourová replied in the affirmative.
“Yes,” she said. “Now I see this as more probable that we will propose — also to the ministers of justice and all the stakeholders and within the Commission — that we want to continue this [voluntary] approach.”
Though the commissioner also emphasized she was not talking about other types of censured online content, such as terrorist propaganda and fake news. (On the latter, for instance, France’s president said last month he will introduce an anti-fake news election law aimed at combating malicious disinformation campaigns.)
“With the wider aspects of platforms… we are looking at coming forward with more specific steps which could be taken to tighten up the response to all types of illegal content before the Commission reaches a decision on whether legislation will be required,” Jourová added.
She mentioned that some Member States’ justice ministers are open to a new EU-level law on social media and hate speech — in the event they judge the voluntary approach to have failed — but said other ministers take a ‘hands off’ view on the issue.
“Having these quite positive results of this third assessment I will be stronger in promoting my view that we should continue the way of doing this through the Code of Conduct,” she added.
Feedback needs work
While she said she was pleased with progress made by the tech firms, Jourová flagged up feedback as an area that still needs work.
“I want to congratulate the four companies for fulfilling their main commitments. On the other hand I urge them to keep improving their feedback to users on how they handle illegal content,” she said, calling again for “more transparency” on that.
“My main idea was to make these platforms more responsible,” she added of the Code. “The experience with the big Internet players was that they were very aware of their powers but did not necessarily grasp their responsibilities.
“The Code of Conduct is a tool to enforce the existing law in Europe against racism and xenophobia. In their everyday business, companies, citizens, everyone has to make sure they respect the law — they do not need a court order to do so.
“Let me make one thing very clear, the time of fast moving, disturbing companies such as Google, Facebook or Amazon growing without any supervision or control comes to an end.”
The Numbers on Online Hate Speech
In the EC’s monitoring exercise, 2,982 notifications of illegal hate speech were submitted to the tech firms in 27 EU Member States during a six-week period in November and December last year, split between reporting channels that are available to general users and specific channels available only to trusted flaggers/reporters.
In 81.7% of the cases the exercise found that the social media firms assessed notifications in less than 24 hours; in 10% in less than 48 hours; in 4.8% in less than a week; and in 3.5% it took more than a week.
- Facebook achieved the best results, assessing notifications in less than 24 hours in 89.3% of cases and in less than 48 hours in a further 9.7%
- Twitter: 80.2% and 10.4% respectively
- YouTube: 62.7% and 10.6% respectively
Twitter was found to have made the biggest improvement on notification review, having reviewed only 39% of cases within a day as of May 2017.
Facebook removed 79.8% of the content, YouTube 75% and Twitter 45.7%. Facebook also received the largest number of notifications (1,408), followed by Twitter (794) and YouTube (780). Microsoft did not receive any.
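To make the figures above easier to compare side by side, here is a small illustrative Python sketch. The numbers are taken directly from the EC’s third evaluation as quoted in this section; the data structure and variable names are my own.

```python
# Figures quoted from the EC's third Code of Conduct evaluation:
# share of notifications assessed within 24h / within a further 48h,
# and the share of notified content that was removed.
platforms = {
    "Facebook": {"notifications": 1408, "within_24h": 89.3, "within_48h": 9.7, "removal_rate": 79.8},
    "Twitter":  {"notifications": 794,  "within_24h": 80.2, "within_48h": 10.4, "removal_rate": 45.7},
    "YouTube":  {"notifications": 780,  "within_24h": 62.7, "within_48h": 10.6, "removal_rate": 75.0},
}

# The three platforms' notifications account for the full 2,982 submitted
# (Microsoft received none).
total_notifications = sum(p["notifications"] for p in platforms.values())
print(f"Total notifications: {total_notifications}")

for name, p in platforms.items():
    assessed_fast = p["within_24h"] + p["within_48h"]
    print(f"{name}: {assessed_fast:.1f}% assessed within 48 hours, "
          f"{p['removal_rate']}% of notified content removed")
```

Laying the data out this way makes the pattern visible: Facebook leads on both review speed and removals, while Twitter reviews quickly but removes a much smaller share of what is reported.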
According to the EC’s assessment, the most frequently reported grounds for hate speech are ethnic origin, anti-Muslim hatred and xenophobia.
Evaluation of Hate Speech
Acknowledging the challenges that are inherent in judging whether something constitutes illegal hate speech or not, Jourová said the Commission “does not have a target of 100% removals on illegal hate speech reports — given the difficult work that tech firms have to do in evaluating certain reports.”
Illegal hate speech in Europe is defined as hate speech that has the potential to incite violence.
“They have to take into consideration the nature of the message and its potential impact on the behaviour of the society,” she noted. “We do not have the goal of 100% because there are those edge cases. And… in case of doubt we should have the messages remain online because the basic position is that we protect the freedom of expression. That’s the baseline.”
From my personal perspective, I think states and international institutions have to balance the individual’s rights to autonomy and freedom of thought and opinion against the security and human rights of their citizens and the online community.
I think the European Commission seems to be getting the balance right by signing social media platforms up to hate speech rules on a voluntary basis, creating an impetus for companies to act ethically rather than forcing it upon them.
Supporting online platforms to self-regulate seems a reasonable approach, and promotes that fragile balance between respecting freedom of speech and drawing the line at incitement to violence online.
In my view the freedom to potentially offend someone must be maintained, even online; where it gets tricky is when people online call for violence, which clearly constitutes hate speech against a group or groups. Finally, I find this extract from the late Christopher Hitchens interesting in relation to where the lines should be drawn on free speech (not all of it is relevant, of course).
Freedom is always and exclusively freedom for the one who thinks differently. – Rosa Luxemburg