Equal treatment by platforms? New research by InternetLab addresses differences in the treatment of social media users

The document analyzes “layered moderation,” a controversial type of system in which platforms maintain lists of users whose content is reviewed differently from the regular moderation process. Starting from human rights safeguards and working through the logistical complexities of moderating online expression, the study issues recommendations aimed at making such systems fairer and more transparent.


News | Freedom of Expression | 08.04.2023 | by Iná Jost, Alice de Perdigão Lana and Francisco Brito Cruz

“Leveling the Playing Field: Achieving Fairness and Transparency in Content Moderation on Digital Platforms” is the title of the latest report in the “Diagnostics and Recommendations” series, published by InternetLab in August 2023.

Started in mid-2022, the research set out to study layered moderation systems, which add extra levels of review for certain user accounts when deciding whether their content should remain available on a platform.

The subject of layered content moderation emerged in 2021, when The Wall Street Journal published an article revealing the existence of Meta’s Cross-Check system, which added an extra layer to the content moderation process for certain profiles. In other words, the mechanism gave specific users a different kind of review – elected officials, important business partners, and accounts with large followings, among others.

In practice, when profiles on the list posted potentially violating content, their posts were directed to a separate queue supervised by a specialized team, instead of going through the regular moderation process. To reflect on the fairness of such a process, a useful analogy is a boarding queue at an airport: most people agree that elderly passengers or those traveling with babies should board first. But what if the queue were exclusive to “premium customers”?
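Purely as an illustration of the routing described above – a minimal sketch under assumed names, not Meta’s actual implementation – the two-queue logic can be expressed as follows; `cross_check_list`, `escalated`, and `standard` are hypothetical names introduced for the example:

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class ModerationQueues:
    """Two illustrative review queues for flagged posts."""
    standard: deque = field(default_factory=deque)   # regular (largely automated) pipeline
    escalated: deque = field(default_factory=deque)  # extra, specialized human review layer


def route_flagged_post(post_id: str, author_id: str,
                       cross_check_list: set, queues: ModerationQueues) -> str:
    """Send a potentially violating post to the queue that will review it.

    Accounts on the (hypothetical) cross_check_list receive an additional
    layer of review instead of going through the regular process.
    """
    if author_id in cross_check_list:
        queues.escalated.append(post_id)
        return "escalated"
    queues.standard.append(post_id)
    return "standard"


# Example with made-up identifiers
queues = ModerationQueues()
listed_accounts = {"elected_official_01", "business_partner_02"}
print(route_flagged_post("post_a", "elected_official_01", listed_accounts, queues))  # escalated
print(route_flagged_post("post_b", "ordinary_user_99", listed_accounts, queues))     # standard
```

The fairness question raised by the boarding-queue analogy is precisely about who ends up in `cross_check_list` and why.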

Given that the scale of moderation operations can sometimes result in erroneous decisions or leave gaps for various reasons – for example, particularities of the cultural, social, and political context of a specific region – we asked:

  • Should content moderation policies on platforms include additional layers of analysis for different types of profiles or content?
  • If certain people or pieces of content are treated differently by platforms, what structure should be used to ensure the efficiency and legitimacy of these systems?
  • How should these systems be designed to protect users’ rights, especially concerning equity and transparency?

To seek answers to these and other questions that arose during the process, InternetLab conducted a series of focus group interviews throughout 2022, considering aspects of class, gender, and race. The importance of gathering qualified and contextualized information led us to invite people from Latin America working in different sectors, such as electoral integrity, misinformation, and journalism, as well as members of civil society and academia dedicated to the study of digital rights. The conversations also resulted in written contributions and were guided by a script comprising three stages: (i) sharing individual experiences; (ii) questions about layered moderation systems; and (iii) proposals for the future.

From the interviews, we systematized the material collected, which allowed us to divide our findings into two chapters: the glass half full and the glass half empty. We concluded that the existence of such systems is necessary in the pursuit of equity, as opposed to merely formal equality. Treating unequal individuals according to their differences offers an alternative to large-scale moderation, which can produce misinterpretations and errors in sensitive cases, especially those involving the promotion of human rights. Moreover, adding further stages of analysis opens space for local perspectives to be considered. In automated moderation systems, global rules are applied regardless of the cultural, linguistic, political, social, and economic characteristics of each region; in other words, the criteria used to keep or remove content rest on universal conceptions and ignore the realities of other contexts.

On the other hand, layered moderation should not alter which rules are applied, only the procedures through which they are applied. In practice, however, the “special” treatment can change the nature of content decisions, since it ends up yielding different results for privileged individuals, and may thus distort principled and consistent content moderation.

Based on these conclusions, we sought to contribute concretely to the discussion by providing recommendations to guide the better design and implementation of layered moderation. The suggestions are listed below and detailed further in the report:

  1. Clear and public criteria for inclusion in or exclusion from the lists of users covered by layered moderation programs.
  2. Disclosure of profile categories and the share of each group on the list – for example, how many business partners, politicians, journalists, and human rights advocates are included, as well as their regions, gender, and race.
  3. Transparency regarding the procedure and its rationale – especially whether there are veto processes and waiting lists for new participants, how the entry and withdrawal processes work, and whether it is possible to apply to join or to leave the lists.
  4. Implementation of processes and criteria that take into account the political, cultural, and social particularities of each region when adding users to the lists.
  5. Periodic disclosure of data on the operation of these systems, including the number of decisions reversed by layered moderation, false positives, false negatives, and so on (a minimal sketch of such a tally appears after this list).
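To illustrate recommendation 5, the sketch below shows one hypothetical way such operational figures could be tallied from review logs; the record fields (`initial_decision`, `final_decision`, `violating`) are assumptions made for the example, not a format proposed in the report.

```python
from typing import Iterable, TypedDict


class ReviewRecord(TypedDict):
    """Hypothetical log entry for one post that went through layered review."""
    initial_decision: str  # decision from the regular pipeline: "remove" or "keep"
    final_decision: str    # decision after the additional review layer
    violating: bool        # whether the post actually violated the rules (e.g., per later audit)


def transparency_metrics(records: Iterable[ReviewRecord]) -> dict:
    """Count reversals, false positives, and false negatives for periodic disclosure."""
    reversals = false_positives = false_negatives = 0
    for r in records:
        if r["initial_decision"] != r["final_decision"]:
            reversals += 1  # layered review overturned the original decision
        if r["final_decision"] == "remove" and not r["violating"]:
            false_positives += 1  # non-violating content taken down
        if r["final_decision"] == "keep" and r["violating"]:
            false_negatives += 1  # violating content left up
    return {
        "reversed_decisions": reversals,
        "false_positives": false_positives,
        "false_negatives": false_negatives,
    }
```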

The report is available in both Portuguese and English.
