We often encounter rough language on social media, sometimes even language that violates the law. This has led to increasing obligations for social media operators, as demonstrated by the case of Glawischnig against Facebook, recently decided by the Court of Justice of the European Union (CJEU). While social platforms are not obliged to monitor content or actively search for infringements, the ruling in essence imposes a worldwide obligation to delete hate postings upon a reasoned complaint. This also applies to posts by other users with the same meaning and wording.
Social media has become indispensable for the exchange of information on the Internet. On platforms where user-generated content is in the foreground, users frequently encounter racist, sexist or violent content. Legal action can be taken not only against the poster: in the fight against hate on the internet, website operators themselves are increasingly being called upon to act.
Glawischnig-Piesczek against Facebook: Successful lawsuit before the CJEU
In a decision that has attracted controversial commentary and interpretation, the CJEU recently addressed the question of how far the obligations of Facebook, YouTube and the like can extend (Eva Glawischnig-Piesczek v Facebook Ireland Limited, CJEU 03.10.2019, C-18/18).
In the main proceedings, the Commercial Court of Vienna had issued a preliminary injunction prohibiting Facebook Ireland from publishing certain defamatory comments about Eva Glawischnig-Piesczek, a former Austrian politician. Following a request for a preliminary ruling from the Austrian Supreme Court, the CJEU had to address several questions:
- whether Facebook can be obliged to delete such comments worldwide,
- whether such comments need to be deleted even if they are posted by other users and
- whether the obligation to delete such comments extends to comments with the same meaning.
In its ruling, the CJEU first held that, as a platform operator, Facebook has no obligation to monitor postings or actively search for illegal content. However, since content spreads rapidly via social networks, a platform operator who becomes aware of illegal content can be ordered to remove not only the content of the user concerned, but also content stored by other users - worldwide. In addition, content with the same meaning must also be deleted. In this context, the CJEU underlined that the platform operator is not required to carry out an autonomous assessment of such content but can resort to automated search techniques and tools.
Social media called to take technical and organisational measures
Despite this limitation, such an obligation has far-reaching consequences for social platforms. Social media companies will have to invest in software and filtering systems to meet their obligations. The use of such systems can sometimes lead to the erroneous removal of content that is not illegal - a result that critics have labelled "censorship on the net".
Some commentators have argued that the practical consequences of the ruling are limited, because the obligation to delete postings only applies once a national court has declared the content to be unlawful. This may be what the wording of the decision suggests, but in practice social media operators will usually not be able to wait for a court decision to determine the unlawfulness of the content. If they do not want to risk a worldwide injunction, they will have to respond immediately to reasoned user requests for the deletion of defamatory postings.
In addition, legislative measures against hate and violence on Internet platforms are pending or under discussion. In Austria, for example, the previous government proposed a registration obligation for users of social platforms a few months ago. In Germany, the Network Enforcement Act (Act to Improve Law Enforcement in Social Networks), in force since 1 January 2018, introduced the obligation to remove or block "obviously illegal content" within 24 hours of receipt of a complaint and to provide for an effective complaints procedure - otherwise, fines in the millions could be imposed. However, providers only have to react to complaints and do not have to investigate on their own initiative.
As part of the reform of the Audiovisual Media Services Directive, which still has to be transposed into national law in many countries, providers of video platforms are obliged to take "appropriate measures" to protect the public from content that incites violence and hatred. The Directive does not contain an obligation to pre-filter content. Rather, mechanisms must be implemented that allow users to report and flag illegal content. Video platforms include not only services such as YouTube, but also social platforms such as Facebook, where the provision of audiovisual content is an essential function of the service.
More obligations, fewer hate postings?
Whether these developments will lead to fewer hate postings remains to be seen. However, the legal measures and plans as well as the recent ruling of the CJEU indicate a trend: Operators of social platforms will have to prepare for increasing obligations in the future. The creation of functioning reporting systems and a rapid response to complaints will be essential, especially in view of the threatened sanctions.