In the first interview of the series, Agustina Del Campo talks about researchers' access to digital platform data

In an interview with InternetLab, the director of the Center for Studies on Freedom of Expression at the University of Palermo spoke about access to data for researchers.

News | Freedom of Expression | 08.23.2023 | by Francisco Brito Cruz, Clarice Tavares, Iná Jost, André Houang, Catharina Vilela, Anna Martha Araújo and João Vitor Araújo

In August 2023, InternetLab launched the project Devouring the DSA: Platform regulation between North and South, which looks at how the Digital Services Act, Europe’s new platform regulation, enters into dialogue with other contexts and realities. The project is based on interviews with non-European and non-US experts on the impact of the European legislation and its precepts around the globe, especially in countries like Brazil.

Throughout the project, InternetLab will seek to foster the debate along three axes, addressing elements of the DSA that have already inspired draft legislation in Brazil: (i) access to data for researchers, (ii) independent audits, and (iii) the analysis of systemic risks. The first interview published as part of “Devouring the DSA” is about access to data for researchers and was conducted with the director of the Center for Studies on Freedom of Expression at the University of Palermo (CELE), Agustina del Campo.

Access to data on digital platforms is indispensable for understanding how they work, identifying problematic dynamics and proposing efficient internal and regulatory solutions. The issue is included in Article 40 of the DSA and also appears in Bill 2630, the Brazilian platform regulation bill.

However, any law established along these lines raises several questions about how the provision should be interpreted and implemented: Which people should be considered researchers? Should these people be registered? What kind of data should be made available? How will it be made available? Will the platforms have to make the data available in a standardized way? If so, will the standardization be done by law, by a regulatory agency, or by the platforms themselves, by mutual agreement? Will data deleted by users be accessible to researchers? What role will the government play in guaranteeing access to platform data? Should a regulatory agency be created?

During the interview, Agustina Del Campo delves deeper into the debate, addressing these and other issues. Read the full interview below:

InternetLab: So first, let us start with a broader question to frame the topic at hand. What do you understand when we talk about access to platform data, and how important is this topic in the broader discussion about the Internet and human rights? So, it’s more of an opening question to get the conversation started.

Agustina Del Campo: What do I understand by access to data? And why is that important? For one, I think access to data is, as a topic, a much broader topic than the DSA conception of access to data for researchers. So if you ask me what is access to data for researchers, it’s access to both platform and company governance information, but also functional information, plus access to information about users’ conduct or the kind of use that the platform receives. So it has two sides in my mind, and it can serve mainly two purposes. One is to understand how these huge Internet companies, which are the main targets for access to data for researchers, how they handle, organize, manage the data that they have and how they organize, manage and deal with their own structures. What rules are in place? How do they go about their own processes, what processes have they set? That’s important data for researchers, which is not yet fully transparent, and that’s one side of it. That goes to the claims for transparency, for accountability, for understanding a little bit better how companies operate, what they do, what they don’t do, what they are responsible for, what they are not responsible for. And notice that I use the word responsible and not liable. Because I think there’s a gap in understanding there that I think is important.
And the other side of the coin is users and conduct and what happens on these platforms. And that’s another kind of access to data. That’s the kind of access to data that we have been scraping so far. We’ve been trying to find out how people use these services, what kinds of actions they perform in these services, how they game the different systems that are out there, what potential for good or evil these companies may have. And this second part of the coin is interesting from a number of perspectives: from a sociological perspective, to better understand how societies are changing due to these platforms or these innovations and technology; to understand better what kinds of issues affect our societies when we are discussing an increase in hate speech, for example. What are we talking about? Where? What kinds of hate speech? What are these platforms telling us about those phenomena? And that can be really useful to understand policy, to understand needs, to understand how our historical policies have worked so far. A lot of the societal issues that we are trying to get a better understanding of online are societal issues that precede the Internet. And there are already policies in place that need to be reassessed, re-evaluated, and that probably need to change, but we’re not sure how. And I think this part is pretty important.
So I think in defining what I understand about access to data, I already touched upon why I think it’s important. I think it’s important for evidence-based research, for evidence-based policy, for a better understanding of the different phenomena that are out there, that are affecting our societies, our democracies in many different ways, to understand what kinds of remedies we may think about in light of this, on one side. And on the other, I think data access for researchers is really important to find liability schemes that work to provide the right incentives to protect fundamental rights and be able to find necessary and proportionate measures to address whatever harms or whatever threats or challenges may be out there, but also to find accountability in a system that currently is lacking a lot of that.

ILab: That frames the rest of the interview, giving us a framework for assessing all of the questions. And then the next question digs into the DSA. The question starts with: how do you explain to non-Europeans what the DSA rules about data access for researchers? Given that the DSA addresses many other issues beyond that, how do you see the place of this provision alongside the others? How do you fit access to data into the DSA? And of course, if you may, we would love to hear what you think are the important regulatory choices that you see as relevant in the European debate; so, how do you explain how they faced the problem that they faced and how this fits in the DSA? It’s a summary. You don’t need to give all the detail. But what’s your understanding about it?

ADC: Ok. Let me organize my thoughts a little bit, so that I don’t go back and forth. So how does this fit within the DSA? So the DSA is an interesting policy experiment so far. It’s an interesting regulatory experience. That’s what we know so far. And the regulatory experiment is targeting a couple of different issues, but a main objective of the DSA is trying to generate accountability, mainly for large online platforms, but also for smaller ones, for online risks and potential harms, and the kinds of risks and potential harms that the DSA is concerned about are risks and harms mainly to democracy and to the European legal framework. So there are two main concerns, two main categories driving the DSA as an effort to provide more accountability for, again, mainly very large online platforms. That’s what the DSA calls them: the VLOPs or VLOSEs (very large online platforms and very large online search engines). And the whole framework is associated with a certain definition or perception of risks. So one point of entry to the DSA, or one main clarification for non-Europeans and also for Europeans, is that the DSA is not a process-oriented regulation; it has a lot of regulation of processes, but it’s not content neutral, it’s oriented to certain kinds of risks, to certain kinds of contents that the European Union is concerned about. Having said that, how does data access for researchers fit within this very broad system that they have created with the DSA? It’s one article, it’s Article 40, and it provides for data access for vetted researchers for relevant research that they may want to perform on platforms. So there are two key provisions there. One is vetted: you need to be a vetted researcher to be able to access this benefit that is granted by Article 40. And the second thing: the research that you want to do needs to be relevant research for the purposes of the DSA. And this takes me back to the risk part. So the way Article 40 is built into the DSA goes very much in line with the explanations that we’ve heard from European authorities, in the sense that they are envisioning the DSA to be a regulatory dialogue and not necessarily a piece of set norms or regulations. So what they are thinking basically is that compliance with the DSA will be checked through different means. One means will be through direct enforcement and through reporting to the European Commission, and this is at first. So they’re going to submit transparency reports and reports on concrete information that the European Union is asking of these platforms. And the requisites vary depending on whether you’re a very large online platform or you’re not a very large online platform. So very large online platforms will be monitored directly by European Union authorities, and smaller platforms will probably have their implementation and enforcement distributed through the different national authorities. And a lot of this is to be taken with a grain of salt, because they’re defining the implementation of this law as we speak. So there are a lot of things that are not yet set in stone. But the whole idea of a regulatory dialogue, the way they’re envisioning it, is that nothing, or very little, of these regulatory mandates is set in stone. The idea that they seem to have, and this is my understanding, is that by granting data access to vetted researchers, the research that is produced through that use of Article 40 is going to come in and feed into the regulatory and the monitoring and the implementation side of the DSA.
So while it may seem like a research, like an academic kind of access, the bottom line, the objective that is built into Article 40, is that research serves to inform policy and serves to inform the way that the DSA is interpreted in its main terms. How do we interpret risk? How do we interpret proportionality? How do we interpret necessity? How do we interpret due diligence? How do we interpret impacts in the sense that the DSA is envisioning these terms to be? So Article 40 fits into the structure as an implementing technique, as a monitoring technique, and also as a more academic or preparatory tool, a way for the European Union to be able to introduce into this whole process new evidence-based policies that may not be in place or built into this framework yet.

ILab: Yeah, totally. Totally. And I think that the discussion around finality would be very interesting to develop during this project. So it’s a key insight. Now we’re going to leave Europe a bit and enter another part of the interview. So in recent years, the global public discourse on platform regulation has often taken the DSA as a major reference. In Brazil, as you may know, the so-called “Fake News Bill” proposes to regulate social media platforms, and its most recent version creates mechanisms expressly inspired by the European legislation. Among others, a key question that is always present in regulatory debates transposed to other realities is how to understand the role and power of the State. As we know, authoritarian experiences in Latin America, for instance, raise concerns about how regimes could use the legislation to concentrate power and control the flow of information. Before we get specifically into data access mechanisms for researchers beyond the European context, what Latin American characteristics, in contrast to the European context, would you emphasize when we think about platform regulation being discussed in the region?

ADC: Okay. So a very broad overview of the Latin American characteristics that are somewhat singular in thinking of a base towards platform regulation or platform accountability. I think there are a couple of things to point out there. One main difference of Latin America regarding other regions, particularly Europe or the United States, is a pretty pragmatic kind of issue. Geopolitically we don’t have the same standing as Europe or the United States as a region, and that has many explanations. It includes the size of the market, it includes the kinds of platforms that reach our region or that become popular in the region. It includes the kind of culture, the communications culture that we have. It includes historic reasons of how our traditional media has been historically organized. There are a number of differences that characterize our own region. So in thinking about platform regulation in the way that the DSA is being thought of, that’s an interesting thing to take into account. The other big thing is that Latin America is not a homogeneous kind of region. It’s a very heterogeneous kind of region. And that’s lucky, I think. We are very diverse in a number of different ways, and a very rich region. But when thinking about platform regulation of the sort that a region like the European region can think of, where you have twenty-something countries associated under the same structure and within the same framework, well, when we’re thinking about a Latin American regulation of platforms, that needs to be taken into account. Because it wouldn’t be one Latin American regulation, it would be a bunch of Latin American regulations. And a bunch of regulations that are coming from very different countries: smaller ones, bigger ones, more democratically stable, less democratically stable, with a longer democratic history and with a shorter democratic history. And that has a huge impact on how we think about monitoring, supervision, compliance, viability, inter-jurisdictionality. How are our requirements going to work with our neighbors’ requirements? How are they going to dialogue and how are they going to interact? So that I think is really important. The other thing is we have, as you said, a history of authoritarianism in the region. We have a history of very strong presidential regimes in the region, and that usually tends to impact the kinds of supervision or the kinds of regulators that we can think of. So think of the DSA and how they are thinking about independent regulators. That is something that in many countries in Europe has worked, but in many others has not. Now take the idea of an independent regulator in the way it’s conceived in Europe and translate it to the Latin American region, where we have heavy presidentialisms and where we are not necessarily famous for our rule of law or a balanced distribution of power among our different branches of government. And then what I find is that speaking of an independent regulator in a lot of countries in Latin America is like speaking of a blue dog with green stripes: we have never seen one. And that is something that should be taken into account. Same thing when you are thinking about a parliamentary commission to rule over platforms. Have you seen any parliamentary commissions that have worked in our region? Because I haven’t. I’ve seen many attempts at it, but unfortunately they don’t necessarily work. What has worked has been constitutional courts, the constitutionalization of our different fields of law.
So we have seen a heavy constitutionalization and a heavy internationalization of our domestic legal regimes and a high permeability of international human rights standards within our local legislations. And that is something that is interesting and kind of unique when you look at other regions in the world. The impact of our Inter-American human rights system has been quite unique, even when you compare it with the European human rights system. And that has provided our region with interesting and strengthened guarantees for rights such as freedom of expression or privacy. It’s not coincidental that intermediary liability protections in our region are heavily associated with human rights, with freedom of expression, and not necessarily with business law arguments. And they have developed through our constitutional rights, through our fundamental rights litigation angle, and not from a commercial angle, or, let me not say not at all, but to a lesser extent in a commercial setting. So I think that provides a unique context within which any sort of proposal for platform regulation must be anchored when you think about Latin America.

ILab: So let’s talk about how we should anchor this. So among the myriad of differences on how to understand access to data by researchers around the world, one of the most complex and controversial is what a “researcher” should be. In the Brazilian debate, for example, defining what “researchers” are has proven to be a complex task, disputed by several sectors and stakeholders and provoking discussions about what the role of the state should be. In your opinion, what are the best and worst ways to conduct this conversation? Should this category also include journalists with data-mining/visualization skills, or should it be restricted to people with an institutional affiliation to universities? Should there be some kind of accreditation by a regulatory body or by an association of peers? So what is a researcher, for you, in this framework?

ADC: That is a very large question. A little bit of context that I think is necessary in addressing that question is that our university structures and our academic and research history in Latin America are quite different from other regions as well. For one, we have a lot less funding than other regions for research in some ways, and a lot more funding in others. And what do I mean by this? A lot of the bigger research facilities across the region are state-funded research facilities in our particular region. And this kind of happens in Europe in some countries, but I’m not sure if it is to the extent that it happens in Latin America. When you think about universities in Mexico, universities in Argentina, universities in Colombia, universities in different countries in this region, there’s a heavy state influence in a number of those countries, particularly on the research units and on the research funding. So the state is a big funder for research and a really bad funder for research in a number of ways; research doesn’t really pay in a lot of countries in our region. So that creates an interesting baseline that you need to work with. Other than that, there are few research institutions in Latin America, particularly if you think of research institutions in the traditional ways that you would think about a research institution in the United States or in Europe: an institution that has the funding, the ethical standards and the structure to guarantee those ethical standards around research, institutions that have top-notch technological capabilities for doing large-scale research like the kind that presumably is going to be demanded by data access for researchers, according to Article 40 of the DSA. But on the other hand, a lot of the institutions that would have those conditions and that technology and the funding don’t necessarily have the research agenda. It’s not the same people that have been studying technological issues and society and technology so far in our region. So there are two sets of institutions: the few institutions that would have the infrastructure, the technology and the funding, and the institutions that have been studying these issues for the last 25 years. And those are not necessarily the same currently. So that’s a problem, and that’s a problem that relates to how we define a researcher, and this question is not exclusive to us. The Europeans have been thinking about who they grant researcher status to. And a big question is: are civil society organizations researchers? Are think tanks researchers? Would InternetLab be considered a researcher? The other question is, what is civil society? And the other one is journalists: are journalists researchers? And if so, which journalists would be considered researchers? Would any journalist be considered a researcher? And then you get into a matryoshka kind of question. What is a journalist and how do you define a journalist? So you can go deeper and deeper and deeper and deeper into these questions. And for a lot of very valid reasons, a journalist is a person who regularly exercises the right to free speech as their trade or profession; that’s the way that the Inter-American system has defined a journalist. So it doesn’t necessarily require a degree in journalism. It doesn’t really require an institution behind it. It doesn’t really require an accreditation of any sort. In fact, there is a requirement that states don’t impose restrictions on practicing journalism.
So I don’t have an answer here; I do have a way to give the question a little bit more complexity. If you ask me, I don’t think the relevant part of the question is who you consider a researcher. I think the relevant part of the question is how you vet them. How do you vet access for these people? So it can be anyone, but what will it take for any of them to be considered vetted? In the same sense that the DSA was drafted, and in the same sense that I’ve seen the provision migrate across borders, there is some requirement, and you need to accredit something before you’re granted access to this data. And there is a little bit of complexity, but no answer. One complexity is: what kinds of technical requirements are you going to ask for as a government? So as a government authority, whether or not you create an independent body to assess who can be vetted and who cannot be vetted, there are certain conditions that you need to set from the get-go. What kinds of infrastructure will be needed to guarantee, for example, data protection? What kinds of technologies are you going to be requiring from the people that are trying to access the data? What kinds of protections to that infrastructure are you going to be requiring in order to grant this kind of data access? And on the other side, a side that the DSA gives a lot of importance to, is what you are going to consider relevant research for vetted researchers. So in order for you to be vetted, you need to meet certain conditions that are technical, that are related to infrastructure, that are related to security. And then, you need to have your proposal approved for relevance. And how are you going to consider what is relevant and what is not relevant? And there is just one piece of advice, or one very broad stroke at a guideline: this cannot be in the hands of one state regulator. Going back to my question, we have yet to see an independent regulator in this region. So this cannot depend on a so-called independent regulator that depends on the executive, the way we have seen those deployed in our region. You asked whether it could be a commission or an independent body or something of that sort, right?

ILab: Yeah, about, like, peers doing the vetting, for example, other academics.

ADC: Okay. That’s an interesting idea. It would really depend a lot on what peers you are going to consider. Are you going to consider civil society organizations among peers? Are you going to consider academic institutions among peers? Are you going to consider journalist associations among peers? And then how are you going to consider state-funded research or journalism or even civil society? Let’s not forget that political parties are civil society organizations. So let us bear that in mind. Let that piece of information sink in for a second and think about how we are going to deal with these different actors that, don’t get me wrong, are legitimate potential applicants for this kind of data. I want the State to have evidence-based policies. So, I think there’s necessarily the question: is this article thought of as one that grants the state a tool for evidence-based research? Should that evidence-based research be conducted by the state, or should it be independent? And now we get into another matryoshka. So I’ll stop here.

ILab: Great! And you already answered the next one, so we have just three more, because the next one was about the resources that we, as Latin Americans, have to deal with that, and the asymmetries. So you’re ready to talk about that. So the next one is the following: the research center you lead, CELE, has recently made an argument that the data access channels opened by the DSA should also be accessible to non-European researchers in some situations. In addition to explaining a little about this argument about “cross-border access to platform data”, how do you think the data access established in the DSA could impact or influence countries outside the European Union? What are your expectations about how the European Commission will interpret Article 40 of the DSA?

ADC: Ok. Yes, we have made the argument that Article 40 should benefit institutions and researchers outside of the European Union for a number of reasons. For one, European authorities are publicly claiming that they want this to be a model for comparative regulation. So they want to export this model. And I think if you want to export this model, and this is the experiment that you’re basing your exports on, it might be a good idea to give researchers access to data, to see, among other things, how the experiment is going and what kinds of evidence can be found. Second, a lot of the language that is behind the DSA, and that is also behind data access for researchers, is language that has been building over the last decade, and it’s related to transparency. And transparency for sociological reasons, transparency for accountability reasons. And one of the natural characteristics of platforms, of the Internet as a technology, is that it’s a cross-border, interjurisdictional technology. So a lot of what we see happening in some regions, with technology and with policy, is impacting other regions of the world, where those policies are not necessarily being directly adopted. So that’s a second argument: the impacts of policy are cross-border, just like the technology. The other thing that sort of ties the two previous ones together is that there’s a certain claim to evaluate, and try to diagnose, how the information ecosystem has been affected by technology. And I think that’s a very valid question that is at the root of accountability, at the root of transparency, at the root of the policy efforts that we have seen over the past five years, at least. I think there’s a very poor understanding of the impacts of technology on the information ecosystem. And when we think about the impacts of technology on the information ecosystem, unlike what we did 30 years ago, we’re not thinking of our local information ecosystem. We’re thinking of a global information ecosystem, because the technology is interjurisdictional and because the policy that is being adopted in different countries has extraterritorial impacts, whether directly or indirectly. So what we have seen lately is a heavy asymmetry: evidence-based research on the information ecosystem is concentrated in the Global North, in Europe and the United States, to put a name to the two regions where most of the evidence-based research is being produced, and there is a huge gap with the rest of the world. And this has an impact on the way we think about technology. It has an impact on the way they think about technology. It has an impact on the way we think about policy. It has an impact on the way we can anticipate the impacts of technology on this information ecosystem. For better or for worse, a lot of the priority setting around this international information ecosystem is global standard setting, is global diagnostics. That, a global diagnostic, is not happening in majority world countries. It’s happening in Europe and in the United States. And it’s happening there in part because of this asymmetry in evidence-based research. So I think it’s in the interest of the European Union countries, in order to better understand the information ecosystem, the migration of information across borders, the impact of technology on that information ecosystem, it is in their best interest to incorporate a trans-European vision of this particular provision.
The fourth and final point and argument that I would make is that the asymmetry already exists, and Article 40 of the DSA has a potential to bridge that asymmetry, to help or contribute to bridging that asymmetry, and conversely a potential to amplify that asymmetry exponentially. And I think it really depends on where we land on this question from the get-go, whether we go in one direction or the other. So I think it has a huge potential for good and a huge potential for big, big, big exponential damage in creating this huge and magnified asymmetry of evidence-based research around this information ecosystem that I think is more global every day.

ILab: Thanks. We really wanted to picture and portray your argument on that. It was a perfect explanation. So we have two more questions. I think they’re a little bit quicker because they are about the specifics of access for researchers. You already mentioned one thing or another, so feel free to respond more quickly and refer to other answers. So the first one is: the creation of data access mechanisms for researchers raises a number of concerns, from data protection and user privacy to research ethics. In your opinion, what should be taken into account here, in terms of mechanisms to mitigate those concerns? To what extent, for example, should data protection authorities and ethics boards be involved in this process? Is there a “homework” element that countries (including in Europe) must take care of to ensure their researchers’ access to data and for this to work for the purposes the legislation intends?

ADC: Short answer: yes. I think there’s homework to be done. I think the European authorities are doing their homework as we speak. They are conducting meetings with well-known, well-established academic centers that have a history and a tradition of doing complex research. They are getting the information that they need, mostly to set what the requirements are going to be. So if you are asking for anonymized data, what kinds of technology should they ask for or aspire to? What kinds of guarantees should they set as a minimum standard or as a minimum parameter? What is realistic? What is not realistic? What kinds of systems should be in place, from an ethical point of view, within the institutions that receive this data? An alternative that they are working around is creating intermediaries that can receive the data, so that researchers can apply to those intermediaries. And there are a couple of options being considered there. But it’s certainly a number of complex conversations that need to happen. And this is something that I really like about the Europeans: I don’t like it when they export their regulation and don’t necessarily ask before exporting, but I love that they prepare their regulation and they really take their time and think through what they are proposing and doing. So the DSA was a two-and-a-half or three-year effort of dialoguing and conversing and whatnot. Could it have been more transparent? Yes, it could have been more transparent. But overall, it was a serious kind of process that led to it. And the implementation is also being conducted in that way. And here’s an invitation on my part: whoever has the capacity to participate in this dialogue, is interested in participating in these conversations, and thinks they have something to contribute to these ongoing conversations, European Union authorities are holding a number of these conversations, and I invite you to join them. And I’d be happy to connect people or to serve as a clearinghouse for information on what’s happening there. We’re trying to track how those conversations are going and we’re trying to actively participate there. I think any country that is thinking of adopting legislation of this kind should be thinking about their own kind of homework: making sure they know what the national environment looks like, what the universities locally can and can’t do, what civil society looks like within their own countries, who could be interested in this kind of access to data, and how they should be thinking about these kinds of articles within their own borders, within their own region.

ILab: We are going to extend this invitation to our audience. We actually expect to have some dialogue with Brazilian visitors, for example, and to expand this discussion with them. So the last question is about: which data? Some researchers often report having difficulty conducting research because each platform makes different types of data available in different ways. Other researchers imagine new datasets, maybe more qualitative ones, in new fields and about new products or systems for which data were never produced for an external audience. On their side, platforms say that data “interoperability” or even an “extensive menu of data” would be cost-prohibitive. What should be taken into consideration when deciding what type of data should be made available and how platforms should make it available? Is it possible or desirable to develop mechanisms to standardize the type of data made available across platforms, or not?

ADC: I think that’s an interesting question. I’ve been pushing a number of colleagues of mine who have been talking forever about transparency to give me a clearer idea of what they mean by transparency. Transparency over what? And for what purpose? Because the kind of data that you would get when you ask for one kind of transparency, or when you ask for another kind of transparency, would be different. If you’re asking for transparency for accountability purposes, access to data for accountability purposes, you’re probably going to be focusing on data that has to do with company handling and company processes and company standards. If you’re talking about the impacts of technology on society, you’re probably going to be talking about personal data of users, and that’s a different kind of data. And in dealing with the different kinds of data you need to think of different processes and different standards. We were talking before about the kind of homework that you need to do before you adopt this kind of legislation. Well, depending on the kind of data that you’re thinking of or envisioning this rule to give you, the homework will probably be different, and it would look very different. I think the DSA, in linking the entire regulation to risks and harms (and I started with this: I mentioned the DSA is not content neutral, they are concerned about specific risks and specific harms, even though they speak a lot about process, because they’re looking at process vis-a-vis those kinds of risks and those kinds of harms), has somewhat already answered some of these questions. Still, it’s an ongoing conversation. Let me just add this: the discussions around Article 40 of the DSA have opened a number of very interesting conversations and varying models for how this thing could be applied. And in looking at the richness of the conversations that have arisen, I think that this is a question in and of itself. There are a number of different initiatives out there, and they’re thinking of an entire new ecosystem being built under Article 40 to deal with different kinds of access to data and different kinds of data. So I think that’s another open question, and a question that I think should be answered collectively. I mean, I can have my opinion, but I think there’s a conversation there that we should have.

ILab: Great. I think you’re right, in my opinion. I really want to thank you for that. I know that many of those questions are super difficult and deep and dense. But as I told you, the idea is to kind of kickstart a more meaningful conversation about the DSA here in our regulatory discussion in Brazil. People are supporting the DSA without following all of these discussions that are occurring at a more detailed level, as you mentioned. So it was super interesting to hear you, and I think this is super helpful for the project. This is the first interview, so it was a good one, setting the stage for this project.

ADC: Thank you so much for the invite. I really appreciate talking about this. You know that we’re engaging with a number of initiatives that are thinking about data for researchers and how to implement Article 40. We’re probably going to host one of the sessions at CELE’s workshop, and it is probably going to be focused on this particular thing, to try to share a little bit of what we’ve learned and a little bit of what we’re thinking about what we’ve learned. I’m still really interested. I think there are two conversations here, and I think you guys started a conversation that I would love to learn more about. So if I may, I may ask for a favor in return and interview you guys for our own purposes at some point, to see what you found about the data scraping realities around the region. I would love to see how that fits within this Article 40 thing. I think those are two conversations that are currently being held on two different tracks. But I’m not sure.

The interview was recorded and is available on our YouTube channel. Watch it here.
