Tech giants Facebook, Twitter, YouTube and Microsoft agree to police online hate speech
Some of the biggest tech companies in the US have signed up to a European Union code of conduct in an effort to stem a rising tide of hate speech online.
Social networks Facebook and Twitter, as well as Google's video hosting platform YouTube and software giant Microsoft, have promised the European Commission they will try to remove hate speech within 24 hours of it being reported.
Under the pledge, the firms will act on the majority of valid requests to take down illegal hate speech, either deleting the content or disabling access to it.
The move is largely seen as a response to multiple terror attacks around Europe over the last six months and the ongoing refugee crisis, which has sparked racial tensions in some countries.
Read more: Postal votes to make or break far right's success in Austria election
Věra Jourová, EU commissioner for justice, consumers and gender equality, said in a joint statement from the European Commission and the participating companies:
The recent terror attacks have reminded us of the urgent need to address illegal online hate speech.
Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred.
Individual countries have previously reached agreements with the companies but this is the first time the EU has had a unified policy on online hate speech.
Google, Facebook and Twitter last year agreed to delete hate speech from their websites within 24 hours in Germany.
Read more: Refugee crisis could boost growth and wages in ailing European economies
The country went as far as launching an investigation into the European head of Facebook over the company's alleged failure to remove racist hate speech.
Not everyone welcomed the deal, however. Digital rights groups European Digital Rights (EDRi) and Access Now criticised the agreement in a joint statement:
In short, the 'code of conduct' downgrades the law to a second-class status, behind the 'leading role' of private companies that are being asked to arbitrarily implement their terms of service.
This process, established outside an accountable democratic framework, exploits unclear liability rules for companies.
It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.
The four companies have also said they will update their terms of service and community guidelines to make clear that hate speech is not tolerated, and will overhaul their reporting tools.