InternetLab submits contribution to the Colombian Constitutional Court on content moderation in the case of Esperanza Gómez Silva

We highlighted the importance of artificial intelligence tools in content moderation on large platforms, without ignoring the need for constant review and improvement to keep these tools from reproducing biases in local contexts and against historically marginalized groups.

News | Inequalities and Identities | 12.18.2023 | by Francisco Brito Cruz, Fernanda K. Martins, Iná Jost, Clarice Tavares, and Anna Martha Araújo

In May 2021, Esperanza Gómez Silva, an actress and model in the adult entertainment industry, filed a lawsuit against Meta in the Colombian Constitutional Court after losing her professional Instagram account, which had more than five million followers. InternetLab was invited to contribute to the case.

The Esperanza Gómez Silva case

Esperanza created her account on the platform to expand her business and consolidate her personal brand. Between March and May 2021, Meta removed some of her posts, alleging that they offered “adult sexual services”, which the policies of its social networks do not allow. It also warned the model that she needed to comply with the platform’s rules to avoid having her account deleted.

In the lawsuit, the actress states that her account was deactivated even though she complied with Instagram’s terms of use and community policies. On that basis, she argues that Meta violated her fundamental rights, including freedom of expression, the free development of personality, equality, the right to work, the right to a dignified life and minimum income, and the right to non-discrimination. She asks for the restoration of her Instagram account and compensation for the damages suffered.

InternetLab’s contribution 

To analyze the case, the Constitutional Court needed additional technical information on the possible biases of artificial intelligence tools used for content moderation, especially biases related to gender or to the work of users who create adult content on platforms. For that reason, InternetLab was invited to present its contribution.

The queries posed by the Court sought to understand: 

i. How artificial intelligence works in content moderation on social platforms;

ii. The existence of biases in artificial intelligence tools used for content moderation, especially in cases involving women who sell adult content outside the platform;

iii. Strategies for addressing biases in artificial intelligence tools used for content moderation; and

iv. How social network platforms avoid becoming intermediaries for other platforms or websites with adult content, such as OnlyFans. 

In our contribution, we highlight that content moderation cannot do without artificial intelligence systems: they are needed to handle the enormous volume of content disseminated daily around the world, in different languages and in different social, political, cultural, and legal contexts. Without automation tools, content moderation on large digital platforms could become unfeasible.

However, it is also not possible to ignore that all artificial intelligence tools developed for this function can make mistakes. Research has already shown that such tools can replicate structural inequalities and amplify violence. Specifically on issues involving gender and sexuality, there is evidence of bias in content moderation systems that use artificial intelligence.

A study conducted by InternetLab found that this kind of technology may be unable to tell the difference between hate speech against LGBTQIA+ people and content posted by LGBTQIA+ people themselves, who frequently reclaim terms understood as offensive, giving them a positive meaning in communication between peers. Meta’s Oversight Board, in turn, recently opened a public consultation on recurring cases of imprecise and/or mistaken moderation of posts by trans and non-binary people about gender-affirming procedures, which were flagged for allegedly violating the nudity, sexual solicitation, and sexually explicit language policies.

Given this scenario, it is fundamental that these tools undergo constant improvement. For artificial intelligence to recognize local contexts and specificities, including those of historically marginalized groups, it is necessary to:

(i) establish transparency strategies on moderation mechanisms, combined with accountability methods and due process rules in content moderation; 

(ii) incorporate oversight steps carried out by qualified people who review and redirect the work of the machines; and 

(iii) invest constantly in and reassess these systems, in dialogue with local specialists and interdisciplinary teams capable of handling the challenges posed by specific languages and contexts.

Finally, it is also important to highlight that, because each platform is free to design its policies differently, users will have distinct expectations about the kinds of discourse found on each social network. On a platform that allows nudity, users expect a certain kind of language aimed at an adult audience, while on platforms where this type of content is forbidden, interactions and language are expected to be appropriate for all ages. This plurality of discourses must, however, take into account the extra layers of sensitivity that adult content carries, such as risks of sexual exploitation of children and adolescents and the non-consensual disclosure of intimate images.

InternetLab’s contribution is also available in Portuguese and Spanish.
