Matthias C. Kettemann (Ed.)

How Platforms Respond to Human Rights Conflicts Online

Best Practices in Weighing Rights and Obligations in Hybrid Online Orders

Abstract

Platforms have power. But this power is not unchecked. Governments have an important role to play in protecting their citizens’ rights vis-à-vis third parties and ensuring a communication order in which rights are not violated. This collection unites studies by researchers from across Europe within the Global Digital Human Rights Network on the best human rights practices of platforms.

Spanning from studies on the tools and vectors of online power to hate speech and discrimination, the authors discuss the normative and algorithmic approaches of platforms to the governance of information flows online and their impact on human rights. As best practices, the authors identify the recognition of infrastructure control as a lever of power, the reduction of discriminatory practices, a stronger commitment to the rule of law on platforms, the introduction of deliberative elements and the use of normative sandboxes to help platforms and regulators innovate.

Keywords

Platform response, hybrid online orders, human rights, best practices, rights and obligations, hate speech, elections, platform rules

Contents

Preface and Foreword

Preface

Platforms have power. But this power is not unchecked. Governments have an important role to play in protecting their citizens’ rights vis-à-vis third parties and ensuring a communication order in which rights are not violated. (And in addition, of course, they need to respect human rights themselves and not arbitrarily shut down sites or use their power to make the Internet less free and open.) As leader of Working Group 2, it is my distinct privilege to present this collection, which unites studies by researchers within the Global Digital Human Rights Network on issues connected to the overarching question of how platforms deal with human rights and their human rights obligations. This study is a key deliverable of our working group in the second year of the Global Digital Human Rights Network’s activities. We will follow up with Guidelines for platforms and an Assessment Model for states and other stakeholders in 2024. We developed this study under pandemic conditions but were able to meet in the Tyrolean Alps in Obergurgl, Austria, in July 2022 to finalize it.

Matthias C. Kettemann
Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg, Germany
Humboldt Institute for Internet and Society, Berlin, Germany
Department of Theory and Future of Law | University of Innsbruck, Austria

Foreword

The Global Digital Human Rights Network is proud to present this important study into the practices of online platforms when faced with human rights challenges. The overarching concern of whether human rights can be safeguarded online as efficiently as offline is reflected in the topics of platform power, hate speech and discrimination, the limitations of private governance mechanisms, the governance of election-related information and the institutional responses to making private platform rules better. The broader philosophical assertion of the sameness of human rights online and offline is made manifest in specific issues related to the transposability of offline rules and principles to the online environment.

The Network wishes to recognize the efforts of Professor Matthias C. Kettemann and his team, as well as all contributors, for having undertaken this timely research and produced valuable insights and conclusions. This comparative study aims to demystify how platforms deal with rights – and to clarify whether they take rights seriously. It contributes to the mission of academia to engage with civil society and with political and corporate stakeholders in conceptualizing the challenges of human rights protection online. We expect that the study will not only provide a valuable contribution to human rights scholarship, but will also influence human rights discourse more widely, at various levels and in different regions.

Mart Susi
Professor of Human Rights Law, Tallinn University
Chair of Global Digital Human Rights Network

Part I

Tools and Vectors of Platform Power

The Power of App Stores and Their Normative Orders

VINCENT HOFMANN, CHRISTINA DINAR, MATTHIAS C. KETTEMANN, LENA HINRICHS, JULIUS BÖKE AND MAX GRADULEWSKI

App stores govern content one institutional level above social networks. They exercise power through largely vague and ambiguous terms and conditions, which leads to inconsistent app moderation practices. Regulation to date has not addressed this problem sufficiently. However, the rules in the upcoming Digital Services Act and Digital Markets Act will increase the obligations app stores have.

(Niche) Platforms as Experimental Spaces for Content Moderation - Mapping Small, Medium and Niche Platforms Online

CHRISTINA DINAR AND LENA HINRICHS

We map and define the term ‘niche platforms’ and look into different aspects of the definition of small and medium platforms in law as well as sociologically and ethnographically. We question the current regulation of platforms by size and show which other factors play a role in establishing a niche platform (such as the thematic orientation of a platform). We plead for a more nuanced regulation of platforms.

Facebook and Artificial Intelligence. Good Practices Review

BEATRIZ PEÑA-ACUÑA AND CELIA FERNÁNDEZ ALLER

Humans are usually good at distinguishing which content can hurt sensibilities, but machines still have a lot of trouble differentiating hate speech from other content touching on race, sex, politics and similar topics. This is one of the great challenges of artificial intelligence. The problem of detecting such content and comments has not been adequately solved by the AI systems Facebook has in use.

Part II

Hate Speech and Discrimination

Discrimination on Online Platforms: Legal Framework, Liability Regime and Best Practices

LAURENA KALAJA AND LANA BUBALO

Online discrimination may resemble traditional discrimination, but it can have more serious consequences, as the internet plays an essential role in our lives, shaping our view of the world, our opinions and our values. But who is responsible? Even though the safe harbor principle still applies in Europe and the USA, platforms are far from passive intermediaries. They have the ability to shape the information published on the platform, and they profit financially from the interaction that users have with information present on their platforms. In addition, the design of platforms can shape the form and substance of their users’ content. By analysing existing regulation and community standards, we show which measures are best suited for preventing and redressing online discrimination.

Online Hate Speech - User Perception and Experience Between Law and Ethics

GREGOR FISCHER-LESSIAK, SUSANNE SACKL-SHARIF AND CLARA MILLNER

‘Governance’ of online hate speech (OHS) has become a buzzword in social media research and practice. In stakeholder discussions, the opinions of users remain underexplored, and data on their experiences and perceptions is scarce. The present paper focuses on five case studies of model OHS postings in the context of the Austrian OHS governance system. For these case studies, 157 respondents assessed in an online survey whether a posting should be deleted according to their own ethical standards, whether they believed that this posting was currently punishable under Austrian criminal law, and whether it should be punishable. We found that OHS awareness among our respondent group was high and that there was a preference for state regulation, i.e., punishability under national criminal law, and for the deletion of OHS postings. At the same time, readiness for counter-speech and reporting of postings for deletion remains relatively low. Thus, OHS postings are hardly ever answered directly or forwarded to specialised organisations and/or the police. If OHS postings are reported, it is mostly done via the channels of the respective platform.

The Impact of Online Hate Speech on Muslim Women: Some Evidence from the UK

KYRIAKI TOPIDI

This intersectional analysis of the implications of the ‘racialized’ representation of Muslim women online reveals why and how they experience harm online as a consequence of their gender, religious affiliation and ethnic origin, which single them out as ‘targets’ for online hatred. This case study on the UK shows how online hate speech securitizes Muslim women and thereby lays the basis for serious limitations of their fundamental rights, extending beyond freedom of expression.

Part III

Protecting Rights on Platforms

Pandemics and Platforms: Private Governance of (Dis)Information in Crisis Situations

MATTHIAS C. KETTEMANN AND MARIE-THERESE SEKWENZ

What role do online platforms play in managing and governing information during the pandemic? Chinese platforms cooperated substantially with the government’s message (and message control) on COVID-19, but US-based platforms like Twitter and Facebook, which had employed a hands-off approach to certain types of disinformation in the past, also invested considerably in the tools necessary to govern online disinformation more actively. Facebook, for instance, deleted Facebook events for anti-lockdown demonstrations, while Twitter had to rely heavily on automated filtering (with human content governance employees back at home). Overall, we have seen a private order of public communication on the pandemic emerge.

Legal Mechanisms for Protecting Freedom of Expression on the Internet – The Case of Serbia

JELENA SIMIĆ

Serbia’s mechanisms for protecting online freedom of expression are still developing, partially due to the underdevelopment of the digital economy, but also due to a lack of interest of major platforms in developing and applying rules specifically for the Serbian market. We suggest adopting a new media law that recognizes and regulates platforms in light of their role for online discourses, although they are not media in the traditional sense of the concept. When it comes to hate speech on the internet, although there is no doubt that there is room for improvement of the legal framework, the existing constitutional and legal provisions provide sufficient guarantees for protection against hate speech. Rather, the application of the existing legal framework needs to be refined.

Digital Rights of Platform Workers in Italian Jurisprudence

FEDERICO COSTANTINI AND ALAN ONESTI

The social relevance of so-called “platform workers” has surged dramatically during the pandemic. The contribution explores how the issues concerning “digital rights” have been addressed in the Italian legal system and suggests possible remedies to reduce the vulnerability of members of this new workforce.

Part IV

Platforms and Elections

The Legal Framework of Online Parliamentary Election Campaigning - An Overview of the Legal Obligations of Parties and Platforms

MATTHIAS C. KETTEMANN, VINCENT HOFMANN, MARA BARTHELMES, NICOLAS KOERRENZ, LENA MARIE HINRICHS AND LINDA SCHLEIF

The German legal system provides for an interplay of rights and obligations for both political parties, especially if they are members of the government, and platforms. For political parties, these are, in particular, constitutional principles and their elaboration in ordinary legislation, while platforms have so far been regulated primarily by media law. The EU's regulatory projects, especially the DSA and DMA, supplement this catalogue with far-reaching obligations for platforms.

Part V

Improving Platform Rules

Platform-proofing Democracy - Social Media Councils as Tools to Increase the Public Accountability of Online Platforms

MARTIN FERTMANN AND MATTHIAS C. KETTEMANN

New institutional configurations represent a good opportunity to increase the legitimacy of the power platforms wield over internet users and to advance the protection of individual rights against platform overreach; such institutional configurations can be conceived as expert-based or participatory "Social Media Councils", but more research needs to be done on different practical avenues of their implementation.

Internet and Self-Regulation: Media Councils as Models for Social Media Councils?

RIKU NEUVONEN

Social Media Councils are at the moment mostly theoretical innovations awaiting pilot projects. Media Councils are an established part of media regulation and could therefore provide role models as well as best practices for Social Media Councils. It is especially important to build trust between the different stakeholders when new institutions are formed; otherwise, institutions start out in crisis mode.

Best Practices

The Global Digital Human Rights Network has set out to identify – through sectoral, platform-specific and state-specific studies – key best practices of platforms and platform governance. These cover five key regulatory areas.

Make better rules: Rules have power and so does infrastructure

  • Regulating infrastructure behind the platforms is a powerful normative vector.
  • The regulation of infrastructure providers can lead to substantial collateral damage.
  • App developers are particularly interested in clear communication with the app stores and concrete information about what content of the app led to the app store's actions.
  • The user/monthly active user (MAU) figure itself, if it is a fixed component of regulation, should be regularly reviewed and revised (if necessary, on the basis of a government-independent expert opinion), also to allow for exclusions, economic development and growth potential.
  • Cross-platforming, especially in the case of problematic content (e.g., illegal content or election-related content), should increasingly bring platforms into exchange with each other.
  • Small and medium platforms should come into low-threshold contact with larger platforms, not only in "crisis situations".
  • Content moderation needs to be understood more broadly, and the differences between artisanal, industrial and automated content moderation models need to be better understood.

Ensure rights: rights matter and discrimination needs to be eliminated

  • Online discrimination can have grave consequences for public safety and social inclusion and should be expressly addressed in international and national legal regulations, and these sources of law should be harmonized.
  • States, tech companies and NGOs should work together on raising awareness of the problem of discrimination online, so people can recognize discriminatory practices and know their rights.
  • More research about online discrimination is needed so this practice can be recognized and better addressed.
  • Tech companies ought to share best practices in detecting and avoiding discriminatory practices.
  • Tech companies ought to cooperate on developing automated systems of content control instead of developing parallel systems; this would be more cost-efficient and result in more harmonized systems.
  • Filtering algorithms require human review to prevent human rights violations and discrimination.
  • The existing mechanisms for reputational and copyright protection, such as notice-and-takedown procedures and the right to be forgotten, can be applied analogously in cases of online discrimination.

Respect the rule of law: Platforms have to stick to rule of law standards

  • There is a need for additional transparency measures for online platforms, including on the algorithms used. Platforms that feature user-generated content should offer users a clear explanation of their approach to evaluating and resolving reports of hateful and discriminatory content, highlighting their relevant terms of service.
  • Reporting cases of online discrimination should be made easier (user-friendly mechanisms and procedures).
  • Platforms should enforce the sanctions in their terms of service in a consistent, timely and fair manner.
  • Platforms should abide by a duty of care, going beyond notice-and-takedown-based legal models.
  • A legislative framework for handling requests to take down discriminatory content should be put in place.
  • Procedural protections should be built into platforms’ notice-and-takedown systems.
  • Rules should incentivize intermediaries and users to detect illegality, while minimizing the risks and the costs of errors and safeguarding a balance between the different human rights at stake.
  • Tech companies need to ensure algorithm transparency and neutrality.
  • A balance between citizens and tech companies must be struck in designing the liability rules.
  • Setting up a detailed and harmonized European notice-and-takedown procedure would provide more legal certainty.

Make platforms more democratic: To make platforms more accountable, deliberative elements can be introduced

  • Minimising associated risks: If not carefully designed, social media councils and other institutional proposals for renegotiating the relationship between societies, states and platforms may conceal actual power structures and fail to initiate real change, providing only a perceived legitimacy while stabilising a status quo with which many societies seem discontent.
  • Design requirements for new institutions: Against the background of these risks, new institutional solutions have to meet the highest standards of transparency, not just regarding their activities but also regarding the systemic improvements they initiate at scale, and they must be equipped with appropriate rights to information and data access to investigate and report on these aspects.
  • Both media councils and social media councils are dependent on various stakeholders (government(s), the public, companies, professionals etc.). Every stakeholder must be able to trust that the council works for their benefit and that disagreements can be resolved.
  • Sanctions: Effective self-regulatory bodies must have the competence to order sanctions. These sanctions do not need to be fines or measurable in money, but they must be credible and provide a reason to change bad practices.
  • To limit the unchecked exertion of power by large platforms over political discourses and individual expression, major platform companies should, in their private ordering systems and terms of service, refer at least partially to external standards which cannot be arbitrarily changed by these companies, and their processes should be equipped with institutional structures that negotiate the relationship between the two sets of rules.
  • Online platforms should have independent bodies consisting of legal experts evaluating the reported cases of discrimination in order to achieve better balancing of rights.
  • Platforms and domestic legal frameworks need to consider taking responsibility for the consequences of regulatory choices: for online hate speech, focusing on impact in socio-legal terms means opting for substantive equality between groups.
  • Institutional measures (e.g. third-party reporting centres, state-non-state partnerships) can be helpful to reverse mistrust towards law enforcement authorities felt by members of religious/racial minorities and improve accessibility to the criminal legal system.
  • It is important to encourage self-regulation of Internet portals that would make clear internal rules regarding the prohibition of hate speech in user-generated content.
  • It is important to systematically improve preventive measures against hate speech, primarily in terms of educating citizens about the harmfulness of hate speech and its consequences.

Make platform governance innovative: Normative sandboxes can help platforms (and regulators) innovate

  • To incorporate the protection of “digital rights” directly into the algorithms governing the platforms, in order to provide built-in operating mechanisms of trade negotiation and mediation (e.g., Art. 25 GDPR). In this sense, international guidelines and collections of best practices could help.
  • Collective bargaining agreements: To include a more binding and specific regulation using collective agreements between trade unions and employers’ associations, where possible according to the legal framework.
  • Local arrangements and codes of conduct: To support, at a municipal or regional level, the adoption of local provisions or of voluntary codes that could improve awareness of social and digital rights among riders.
  • Regulatory sandboxes and living labs: To establish provisional legal frameworks in order to experiment with new forms of regulation and models of interaction suitable to protect the “digital rights” of “platform workers”, according to the concept of the “regulatory sandbox” included in the EU proposal for an Artificial Intelligence Act (Articles 53 and 54).
  • Institutional experiments: To achieve a balance between platforms’ and states’ power over behaviour on the internet requires a multiplicity of bold institutional experiments. The Meta (Facebook) Oversight Board is, despite some shortcomings, a noteworthy and already at least partially successful experiment in this regard, but it should not be elevated to an archetype of a social media council or conceptually monopolise the space for – still needed – further institutional innovation.

Contributors

Name Affiliation
Barthelmes, Mara Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Barrat Esteve, Jordi Professor of Constitutional Law, Rovira i Virgili University (Catalonia); Training Coordinator, EODS (Election Observation and Democracy Support)
Bubalo, Lana Associate Professor of Law, University of Stavanger, Head of the Department of Accounting and Law
Böke, Julius Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Costantini, Federico Researcher and Lecturer of Legal informatics, Department of Law, University of Udine
Dinar, Christina Junior Researcher, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Fernández Aller, Celia Professor of Law and Ethics, Technical University of Madrid; Advisory Board, Fundación Alternativas; one of the experts in charge of the Digital Rights Charter in Spain
Fertmann, Martin Junior Researcher, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg; PhD Fellow, Centre for Law in Digital Transformation, University of Hamburg
Fischer-Lessiak, Gregor Researcher, lecturer and project manager at the European Training and Research Centre for Human Rights and Democracy, University of Graz, Austria
Gradulewski, Max Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Hinrichs, Lena Marie Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Hofmann, Vincent Junior Researcher, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg; Junior Researcher, Humboldt Institute for Internet and Society, Berlin
Kalaja, Laurena Legal Representative, Lecturer in Law, POLIS University, Tirana
Kettemann, Matthias C. Professor of Innovation, Theory and Philosophy of Law; Head of the Department of Theory and Future of Law, University of Innsbruck; Research Program Head, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Koerrenz, Nicolas Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Millner, Clara Jurist, Antidiscrimination Office Styria (ADS), Graz
Neuvonen, Riku Senior Lecturer in Public Law, University of Helsinki; Senior Researcher, Tampere University, Communications Rights in the Age of Digital Disruption (CORDI) research consortium funded by the Academy of Finland
Onesti, Alan Ph.D. Student, Department of Law, University of Udine
Peña-Acuña, Beatriz Vice-Dean of Quality, Practices, Students and Employment, Associate Professor, University of Huelva; Member, International Academy of Social Sciences
Sackl-Sharif, Susanne Postdoc Researcher, University of Music and Performing Arts Graz; Lecturer in empirical research methods, University of Graz
Schleif, Linda Student Assistant, Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg
Sekwenz, Marie-Therese Researcher, Sustainable Computing Lab and Vienna University of Economics and Business
Simić, Jelena Associate Professor, Union University School of Law, Belgrade
Topidi, Kyriaki Senior Researcher, Head of Cluster on Culture and Diversity, European Centre for Minority Issues, Flensburg

Editorial support

Felicitas Rachinger, Department of Legal Theory and Future of Law, University of Innsbruck (team lead)
Johanna Erler, Department of Legal Theory and Future of Law, University of Innsbruck
Anna Schwärzler, Department of Legal Theory and Future of Law, University of Innsbruck
Linus Wörle, Department of Legal Theory and Future of Law, University of Innsbruck

Design

Larissa Wunderlich

About the study

The publication is based upon work from COST Action GDHRNet – CA19143, supported by COST (European Cooperation in Science and Technology). Our Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career and innovation.

Global Digital Human Rights Network

The GDHRNet COST Action will systematically explore the theoretical and practical challenges posed by the online context to the protection of human rights. The network will address whether international human rights law is sufficiently detailed to enable governments and private online companies to understand their respective obligations vis-à-vis human rights protection online. It will evaluate how national governments have responded to the task of providing a regulatory framework for online companies and how these companies have transposed the obligation to protect human rights and combat hate speech online into their community standards. The matters of transparency and accountability will be explored, through the lens of corporate social responsibility.

The Action will propose a comprehensive system of human rights protection online, in the form of recommendations on the content assessment obligations of online companies, directed at the companies themselves, European and international policy organs, governments and the general public. The Action will also develop a model which minimizes the risk of arbitrary assessment of online content, solidifies the standards used during content assessment, and maximizes the transparency of the outcome.

The Action will achieve scientific breakthroughs (a) by means of a quantitative and qualitative assessment of whether private Internet companies provide sufficient protection of human rights online in comparison with judicial institutions, (b) in the form of a novel holistic theoretical approach to the potential role of artificial intelligence in protecting human rights online, and (c) by providing policy suggestions for private balancing of fundamental rights online.

COST Actions on COVID-19

GDHRNet is a member of the Network of COST Actions on COVID-19 and other pandemics.

Very early on in the COVID-19 pandemic, COST Actions started to collaborate with other Actions on COVID-19 and on the impact of pandemics in general. In order to coordinate these efforts, COST has gathered details of all of the Actions wishing to connect and collaborate. All the information can be found in the booklet “COST Actions against COVID-19 – An interdisciplinary network”.

As the full consequences of the current pandemic are as yet unknown and the threat of a future pandemic is always present, the Network of Actions offers considerable potential in mobilising experts and tackling challenges as they arise. The Network is open to other participants and completely bottom-up. Any Actions wishing to join it can do so by contacting the COST Science Officer coordinating this initiative.