Drag Queens and Artificial Intelligence
InternetLab has started a research project that analyzes how the use of artificial intelligence tools to moderate content on platforms may affect the circulation of LGBTQ content.
Social media platforms have been under constant pressure to moderate more and more content. Due to the sheer volume of third-party content shared on their services, these companies have started to develop artificial intelligence technologies to automate decision-making about content removal. These technologies typically rely on machine learning techniques and are specific to each type of content, such as images, videos, audio, and written text. Some of them, developed to measure the “toxicity” of text-based content, use natural language processing (NLP) and sentiment assessment to detect “harmful” text.
While these technologies may represent a turning point in the debate around hate speech and harmful content on the internet, recent research has shown that these systems are still far from being able to grasp context or to detect the intent or motivation of the speaker. As a result, they fail to recognize certain content as socially valuable, including LGBTQ speech that reclaims words commonly used to harass members of the community. Additionally, a significant body of research in queer linguistics indicates that “mock impoliteness” helps LGBTQ people cope with hostility [1] [2] [3] [4]. Some of these studies have focused specifically on the communication styles of drag queens. By playing with gender and identity roles, drag queens have always been important and active voices in the LGBTQ community.
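The failure mode described above can be illustrated with a deliberately simplified sketch. The scorer below is hypothetical, not any platform's actual system: it counts matches against a fixed lexicon of "toxic" words, so it assigns similar scores to mock-impolite banter between friends and to genuine harassment, because it never sees speaker intent or context.

```python
# Hypothetical illustration of a context-blind, lexicon-based "toxicity"
# scorer. Real moderation systems use learned models, but this shows the
# structural problem: the same word scores the same regardless of intent.

TOXIC_LEXICON = {"stupid", "ugly", "trash"}  # placeholder terms

def toxicity_score(text: str) -> float:
    """Return the fraction of tokens that appear in the toxic lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in TOXIC_LEXICON)
    return hits / len(tokens)

# Reclaimed, affectionate use among friends vs. an actual insult:
friendly = "you look stupid gorgeous tonight"
hostile = "you are stupid and ugly"
print(toxicity_score(friendly))  # 0.2 — flagged despite friendly intent
print(toxicity_score(hostile))   # 0.4
```

Both messages get flagged because the scorer keys on surface forms alone; distinguishing them would require exactly the contextual and intent signals that current automated systems struggle to capture.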
With this context in mind, InternetLab started a research project that analyzes how the use of AI tools to moderate content online may affect the circulation of LGBTQ content. The research methodology, as well as its preliminary results, were published on our blog and on Wired. Our findings were shared and discussed at RightsCon 2019, BIAS 2019, the Connected Life Conference 2019, the seminar “Major internet platforms and content moderation,” and Web Intelligence 2019.