
MEDIA RESEARCH BLOG

Intricate Balances, Uncertain Futures


29.06.2022

Protecting fundamental freedoms online on the one hand and restricting illegal content on the other is a complicated balancing act that raises difficult moral questions.
by Stanislaw Edelweiss

For most users, access to online communication opens up border-free dialog with people in distant countries, speeds up business operations, and finally realizes the idea of the "global village". The advantages of online platforms are manifold; however, these platforms are also used by those who engage in illegal activity.
 
Many countries have therefore already introduced regulations that allow law enforcement to interfere with and moderate user-generated content. This means that some, if not all, uploaded content is subject to internal screening or further scrutiny by the service provider to limit this type of activity. Such proactive measures are often tied to transparency reporting obligations, i.e., making publicly available information about the amount and type of content that has been disabled or removed. Additionally, some countries expect service providers to report on criminal activity that arises through these channels.
 
It has also become standard practice among EU countries to secure collaboration with these platforms through a contact point that enables lawful interception, which is essentially the review of user-generated content by law enforcement. Since neither service providers nor state officials can monitor everything, some EU member states oblige platforms to build software that allows users to report illegal content, making users active collaborators in flagging material that should be moderated or reviewed. As necessary and valuable as limiting illegal activity is, it also poses a threat of infringing users' freedom and right to privacy.
 
In this context, the general term "privacy" means the right to secrecy of correspondence as well as to secure storage of data without it being made public, both of which have historically played a significant role in democratic states. The need to protect these rights is all the greater given pressure from authorities, who see their limitation as an opportunity to stop criminal activity. That goal is not in dispute, but the cost, in this case privacy, might be too great.
 
The more software and data analytics are employed, the more valuable the information available to intelligence services. So-called "Big Tech" has been asked to support this surveillance. For example, in 2015, FBI director James Comey attacked Silicon Valley for embracing end-to-end encryption (simply put, a closed exchange of data between users that restricts access for third parties) in an effort to "deprive police and intelligence agencies of potentially life-saving information."
 
Regulation (EU) 2021/784 on addressing the dissemination of terrorist content online provides the authorities with significant support in the quarterly reporting of disabled or removed content. It not only supplies law enforcement with statistics on the type of terrorist-promoting material removed but also streamlines the notice-and-takedown process, in which illegal content is removed by the service provider following a court or law enforcement order. The process itself includes the ability to respond to orders to remove terrorist content.
 
To sum up, the removal of illegal content is a prerequisite for a safe and healthy Internet environment. A separate issue is what counts as illegal content and how it should be regulated. There is no doubt that material that violates intellectual property rights, disseminates child abuse material, or supports terrorism must be effectively removed. Such regulations already exist in every EU country, and there is consensus on them. But how can the cultural prism be set aside while guaranteeing both freedom of speech and respect for minorities? Who should decide whether language is already hateful, or merely a biased representation of events?

Finally, what can guarantee that the progressive and inclusive views currently dominant in many European countries will prevail in an era of rising nationalist movements? What if the political landscape changes and those who call for greater moderation of content now become its victims later? Questions such as these should guide a proper debate about content moderation. Whatever decisions are made, the secrecy of correspondence and freedom of speech should not be curtailed, because only these values guarantee that individual freedom in the Internet age will be protected.

 
 
Cover Photo: Niklas Ohlrogge / unsplash
 
