SPECIAL | María Paz: “The implementation of these technologies is being done without proper regulatory frameworks that really balance the interests of the different stakeholders”
This is the sixth and last in the series of interviews on artificial intelligence, algorithms and internet platforms that make up our Artificial Intelligence Special (in Portuguese). The interview, conducted in 2020, complements the series conducted by students from the Center for Law, Internet and Society (NDIS), a study group at the University of São Paulo's Law School offered in partnership with InternetLab since 2015.
María Paz Canales is the executive director of Derechos Digitales.
In this interview, she discussed the application of international human rights mechanisms to the use of artificial intelligence, the challenges of AI regulation, the role of privacy and data protection regulation in this debate, and the work of Latin American civil society in this field.
And when we talk about [human rights and artificial intelligence], we talk about all kinds of human rights: not only civil liberties and the first generation of civil and political rights but, increasingly, also the exercise of social, economic and cultural rights. (…) The implementation of these technologies is being done without proper regulatory frameworks that really balance the interests of the different stakeholders, or that provide a proper framework to capture the positive aspects of artificial intelligence and avoid its unintended harmful consequences.
Check out the full interview with María Paz Canales below:
What are, in your opinion, the main challenges and social risks presented by artificial intelligence algorithms? What is artificial intelligence for you?
Generally speaking, the main challenge for me, regarding the implementation and regulation of artificial intelligence, is how this type of technology will gradually impact the exercise of human rights. And when we talk about that, we talk about all kinds of human rights: not only civil liberties and the first generation of civil and political rights but, increasingly, also the exercise of social, economic and cultural rights. Artificial intelligence is having increasingly huge impacts in terms of the possibilities of discrimination against people or, if you look at the more positive side, the possibility of using the technology for inclusion and for trying to overcome some inequalities that are structural in many societies around the world, but particularly in Latin America. But it also comes with a great risk because, as you have probably heard many times, the implementation of these technologies is being done without proper regulatory frameworks that really balance the interests of the different stakeholders, or that provide a proper framework to capture the positive aspects of artificial intelligence and avoid its unintended harmful consequences. And these technologies operate in a setting that is still very weak in our region, from a technical perspective but also from an institutional and regulatory perspective, because there is still a lot of lack of capacity, even in the private sector, for knowing how to really use this type of technology and for understanding its impacts. So, in general, there is an asymmetry of information when this type of technology is offered by global companies that try to sell their products and services in the region, and then you also have an issue linked with the quality of the data that will feed these systems.
So, in general, in Latin America, the initiatives regarding the availability of high quality open data are not very well developed. The open data movement has been around for a long time and is quite strong, but it has not been very successful in terms of providing access to large data sets of high quality and accuracy. So if you combine those two problems, in the end you run the risk that the implementation of these technologies, rather than improving and addressing some of these structural inequalities, will deepen them and make them worse, because you will be implementing a technology in an institutional setting, and in a context of engagement with the technology, that is not appropriate. So that is my general diagnosis.
You have talked about the positive aspects of technology, about these open data initiatives… Can you give us some concrete examples that you have in mind, or that you have been involved with?
One of the landmark projects in the region, heavily criticized because of its unintended consequences, is the prediction system first deployed in Argentina by Microsoft to predict the likelihood of teenage pregnancy. The system was fed with data that produced results heavily discriminatory against the lower-income sectors of society. And it was very well documented how, maybe, the intention of the system was appropriate and good, in that it tried to address a very real and problematic issue that causes developmental harm to many young women, but in the end the way it was addressed was not sensitive at all in terms of how the data was fed into the system and how the system's results were communicated. One of the issues there is, again, the quality of the data and the way in which the system takes into consideration, or not, the context in which you want to insert it. But it is also problematic how the global companies that offer this kind of system do so without conducting any kind of impact assessment prior to implementation. Maybe these systems were useful or successful in other jurisdictions, but they can have a completely different result when you transplant them into a different setting. And there is no real demand for accountability from the companies to stop reproducing that model because, as far as I understand it, this very same system is now being deployed in some regions of Brazil, with many consequences similar to what was seen in Argentina.
Another example with similar characteristics but a different origin is a prediction system implemented in Chile by the ministry of social protection, which tried to predict the probability of child abuse. They were replicating a methodology developed in New Zealand for early intervention to prevent the abuse of children. Again, the algorithm was originally developed for New Zealand, which has completely different social and economic circumstances, and even though the system had been criticized in New Zealand, it was offered here in Chile. It had also been deployed in the past in the United States, with criticism and harmful consequences. So they keep doing it, and it is not clear what feedback loop they are considering, because many of these things need to be improved and perfected to try to avoid the biases and the harmful consequences they can have. But there is not enough transparency about how that learning cycle is going on. Maybe there are unintended consequences, but they seem unable to take them into account in order to perfect the system for the "next round", and you don't see that in the next round. You see, again, an implementation without an impact assessment of the system, without proper consideration of the context and so on. And this is linked with what I was referring to in the first example: when you have Latin American countries, most of which don't have regulatory frameworks or even institutional tools to evaluate this kind of implementation, it is very difficult to ask these hard questions and to be more demanding about the implementation of this type of system. There are also examples of harm in the implementation of artificial intelligence in justice systems, for example in Argentina or Colombia.
Some provide a way of processing paperwork and resolving issues more efficiently; that is the example of Prometea and PretorIA, applications of artificial intelligence that provide shortcuts for producing a judicial resolution but still remain heavily under the oversight of human beings. That is one type of implementation in the justice system, and then you have other types of implementation in justice systems that are much more problematic. For example, the one developed in Colombia that offers recommendations about prison time and sentencing, which has run into some of the difficulties already pointed out in the development of similar systems in the United States: because of the data fed to those systems, they were heavily biased against specific racial groups and totally discriminatory towards them. The data, however, was consistent with previous human history. It was not a defect of the machine learning system; rather, the machine learning system amplified the biases and discrimination that already existed in the human implementation of the system. So I think those are examples of what is happening in Latin America, and they are problematic, again, because of the lack of institutional frameworks and regulation. And not only regulation: I am referring to institutional frameworks and oversight bodies with the institutional capacity to evaluate which solutions are really necessary and for what, and what the purposes and goals of the implementation are. They should then be able to oversee the implementation and provide feedback to improve the technologies over time, or to recognize at some point that they are more harmful than beneficial, in which case we need to eliminate them.
So do you think that current laws, for example, data protection laws, are just not enough for this kind of challenge? Do we need completely new laws or just new interpretations? How do you see the current regulatory framework in relation to these things that you’ve mentioned?
I think it's a combination of all of that, depending on the country we are looking at. In Latin America we have very diverse realities. Some countries are more advanced in data protection: they have had data protection regulation and a regulatory framework for a while now, and they have had time to implement it and to strengthen the authorities they have appointed to oversee these regulations. But there are also countries that don't even have authorities to oversee the regulation they have, or whose authorities are not well resourced for the oversight work, or lack the institutional strength to, for example, control the public agencies and bodies, so they are stronger in overseeing the private sector than the public sector. In different countries the issues come from different places: some countries are more concerned about the actions of the private sector and others about the actions of the public sector. In the end, I think there is definitely a need for improvement in the regulatory frameworks in place in Latin America. Even in the countries where those exist, they need improvement, and many of them have been in place for almost 20 years: they need to be updated to the most recent international standards. We know that the California law, or the European General Data Protection Regulation (GDPR), are very good international standards that we should follow to see how frameworks can be improved in our region. But we also need clear political messages from governments in the region to empower the authorities that are in charge of overseeing this type of regulation, and to give them proper resources to really fulfill this mission. That's one thing. The other thing is linked to what I was pointing out at the beginning: this is not only a matter of privacy.
And I've had this discussion many times. I don't know if you've had the opportunity to see it, but at the end of last year I wrote something about strategies for implementing artificial intelligence in Latin America (I can share the link to the publication if you are interested in taking a look). I pointed out in that publication, and I also mentioned it at the beginning of this conversation, that this is not only a matter of impacting privacy or data protection. This is a discussion I consistently have with many international actors and governments, because they think that if you fix the data protection laws, the problem with algorithmic decisions or artificial intelligence will be solved. Recently we have seen a lot of collective impacts that go beyond data protection, which looks more at individual rights. So regarding the issues of discrimination, or how the implementation of these technologies heavily impacts vulnerable groups, and also when we are talking about social justice and inclusion, we should look into other types of regulation that provide fair institutional frameworks for the implementation of these technologies. In terms of the regulatory fields in which that can be covered, there can be a lot of strategies, and we can be imaginative, in the sense that some of these things could become part of, for example, consumer protection regulation in the future, or part of regulatory policies related to the implementation of the anti-discrimination principle. There are a lot of places to look for implementing regulatory frameworks that will not be exclusively about data protection. This is a conversation that we increasingly need to have.
Another place where you can address some of these considerations is in the National Plans on Business and Human Rights that many countries are implementing, following the United Nations Guiding Principles on Business and Human Rights, in which there is increasing consideration of companies' responsibility to avoid harm, and a path towards making some of these requests for the regulation of technologies mandatory, as I was saying before. I think that we have a lot to learn from the environmental movement in that sense, because we are a rights protection movement, so we should look into solutions that go in that direction, solutions that allow us to address the more collective impacts that technology, and artificial intelligence and algorithmic decisions in particular, is having.
What is the difference between ethics in artificial intelligence and fundamental or human rights in artificial intelligence? Why is this distinction important?
This is a fight I have to fight all the time in international forums, because my field of work is human rights and I am a big defender of the human rights framework. I think ethical considerations are valuable and can be addressed, but in no way can they be the minimum baseline to demand from these models, because if we look for something that is actionable, the human rights standards actually provide a much stronger setting, and they are not contrary to the ethical principles. Ethical principles are something you can always implement at the private sector or public sector level, and they will do no harm. But trying to take the conversation away from the setting of implementing and respecting human rights, and moving exclusively into that [ethical] consideration, is, I think, a fundamental mistake, because you would establish all these commitments as voluntary, without any kind of mechanism for oversight. And the reality is that, as regards human rights standards and policies, the bodies in charge of overseeing those standards have been doing, in recent years, pretty strong work in translating the traditional human rights framework into its application to artificial intelligence. So this is not uncharted territory: there has already been a lot of work and many considerations issued by the special procedures of the UN, by the Inter-American system, and by very specialized bodies, pointing out how the international human rights frameworks are totally applicable and still relevant for the implementation of technology.
So when people claim that there are not enough standards regarding technology, I think that shows a little bit of ignorance about what has been going on in recent years and how much strong work has been developed by the international bodies, by civil society and by human rights advocates in connecting these standards to the implementation of technology. And even if you do the opposite work, because in many settings and forums people have decided right away to talk to you about ethical principles, so you start from there and then work on connecting the human rights considerations, you realize that in the end there is not much difference. But the human rights frameworks, in terms of substance and meaning, are hugely and fundamentally different, especially in terms of the possibility of making them enforceable. So I think that, in the end, we should not succumb to the temptation of a conversation based exclusively on ethical principles that are not commonly agreed in international settings, because then we would create more fragmentation. International human rights standards, in these difficult questions regarding technology, still provide a framework that keeps a minimum level of protection around the globe and is at least actionable against the states that have committed to the international human rights standards. So this is much more protective of people than any kind of voluntary ethical principles that might be proposed.
And so, do you think these human rights standards have not been applied to technology because of ignorance from governments, or is it willful ignorance from private companies or developers, implying that these systems do not impact human rights in any way? And do you think these international human rights frameworks need to be updated, or are they already going in the right direction?
On the first question, I think there is a lot of ignorance. Some of it can be good faith ignorance, in the sense that there is a lack of skills in understanding technology and how technology connects with human rights, because it is a very specific field of implementation. So I think much work should be done in terms of providing those capabilities and skills, particularly inside governments in the region and inside the tech companies. Some of them are doing this work and trying to connect these standards with their internal processes; it is increasingly common for them to apply a fundamental rights impact assessment inside their processes, but it is not the majority, only a few, the biggest ones, that have more resources available. And generally they do it as a good faith effort; they don't feel they are obliged to do it. So I think there is definitely a need to reinforce the international implementation of many of these things, signaling that this is not something to be implemented voluntarily, but rather something that should be part of the mandatory obligations of the global companies. One initiative that I really like in that sense is the B-Tech Project, which is being developed by the Office of the High Commissioner for Human Rights. What they are doing is working specifically on the implementation of the UN Guiding Principles on Business and Human Rights for technology companies: the aim is to translate what these more general principles, applicable to companies in any field, mean for technology companies in particular. Another initiative in that sense, which I think is well intended, and I hope will be successful, is the process being carried out by UNESCO for the issuance of guidance on artificial intelligence ethical principles.
They talk about ethical principles; they decided to go with that perspective at the beginning, but then, after the consultation process and the work of the group of experts they put together, everyone increasingly raised the point that these ethical principles had to run parallel to international human rights standards. Latin American organizations participated intensively in the consultation process carried out in August 2020, providing written feedback on those principles, and we were very, very strong in signaling that; even inside UNESCO, people were very aware that this was something that needed to be well covered in the development. So again, this is an ethical implementation strategy in which you develop ethical principles for implementing artificial intelligence, but in the end you link those principles to the international human rights standards. In that way, the principles you have selected provide more specific guidance but are always in line with international standards. And I think that is the right way to go: ethical principles should complement and develop the explanation of how those international human rights standards should be applied, but they are not something separate from, and certainly not contradictory to, those standards. Another example I have been in contact with is the effort now being carried out at the World Health Organization (WHO), where we have also been working on the issuance of guidance about ethical principles for the implementation of artificial intelligence in health systems.
We were working on this before the pandemic, and now, with the pandemic, all the work is being accelerated. In that case, for example, this work started with a very diverse group of experts external to the World Health Organization, the majority of whom were not very familiar with the international human rights framework, because they were experts from the health field, from academia, from the technical field, from governments, from health departments… But the people coming from a more regulatory perspective, grounded in the international human rights framework, proved in the end that it was totally fundamental to link the principles selected for the guidance to the international human rights standards. And the principles are written in a way that lets you develop, starting from them, more explicit applications, in this case in the health sphere. At first sight, if you are not an expert, it can sound very difficult to implement. So what you do is go from this more abstract level to a more concrete one, and then to an even more concrete level, which is the human rights impact assessment, which can take the form of checklists or all kinds of tools that are easier to implement for the people who are in charge of making the decisions and are not necessarily experts in human rights standards.
What has been the role of Derechos Digitales, in your experience, and of other civil society organizations in this debate?
We have been writing input and feedback on all these issues to help advance the application of international human rights standards to the use of emerging technologies, not only artificial intelligence but also other branches of emerging technologies, with the UN bodies, the Human Rights Council, the Office of the High Commissioner for Human Rights, and the special rapporteurs for freedom of expression and privacy, and also in the Inter-American system, with the Special Rapporteur on Economic, Social, Cultural and Environmental Rights. We have also been working with the Special Rapporteur for Freedom of Expression on issues in Latin America related to privacy, and we have been engaged in the region with Latin American governments and civil society groups in order to increase capacity on the different topics we have been discussing so far. As I mentioned before, one must have the appropriate level of capacity and skills to really, deeply understand these topics and connect them, because there are people from different fields who are very familiar with the implementation and challenges of artificial intelligence, which in many ways are connected with the things I have been telling you about today. So I think that is part of the job we are doing: trying to connect the dots, to bring this information and the Latin American perspective to the international forums in which many of these things are being decided or discussed in some way, and at the same time trying to bring information back to local governments in the region so they can better understand the challenges and why it is important to pay attention to the way this type of technology is deployed.
And if you do it well, you have a fundamental opportunity to improve the lives of people in your country and region and to overcome, as I said before, inequalities and other structural problems of the region; but if you do it wrong, you risk really reinforcing discrimination and harming, in a more permanent way, the possibility of having a more just society. So we are trying, all the time, to do these two things: we don't want to sound very pessimistic, but we want to be very clear in asking for proper implementation, an implementation that is sensitive to the protection of people's rights. And as I mentioned, we have been engaged in Chile in the development of the national strategy on artificial intelligence, and we are now conducting research on the implementation of these systems in Chile, Colombia, Brazil, Uruguay etc. We are trying to assess how this implementation has been carried out so far in these countries, using case studies in order to provide valuable lessons for the future, for the different governments in the region that are considering implementing these kinds of systems. Case studies are also an opportunity to learn what works well and in which fields, and to provide the concrete recommendations I mentioned, in the form of a checklist or of documents that are easier to understand and to implement on a regular basis by the people in charge of this type of decision making.
***
Interviewers: Enrico Roberto and Jade Becari
Editing: Enrico Roberto and Jade Becari