


Duration: 55:12



Transcript:

Kal Raustiala 0:02

Good evening, everyone. I'm Kal Raustiala, director of the UCLA Burkle Center for International Relations, and it's my pleasure to invite you back to one of our occasional Zoom webinars. For today's event, I'm really happy to have as our guest my colleague Tendayi Achiume, who is a professor at the Law School here at UCLA. But for today's event, what's most important is that she is the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance, which is a long title, but an important one. She has worked for the last couple of years on issues of racism around the world, reporting to the UN Human Rights Council. So it's really a pleasure to have her on. She's going to talk about issues related to digital technologies and discrimination within digital technologies. But I think we'll also drift into many of the other interesting issues that she has been working on, both in this role and in her career before assuming the position of Special Rapporteur. So it's my pleasure to invite Tendayi on. She'll talk for a few minutes, or maybe more than a few minutes. And then as usual, I will ask a few questions and then we'll take questions from all of you, so please be sure to post questions using the q&a feature. You can do that starting now (it looks like some of you already have) and continue through our conversation, and I'll be happy to pose those to Tendayi. So without further ado, let me welcome onto the screen Professor Tendayi Achiume.

Tendayi Achiume 1:44

Kal, thank you very much for that introduction. And thanks very much for having me. Thanks also to the Burkle Center for organizing this event. It's always a pleasure to connect with the Burkle community. So what I thought I would do is start off by describing what it means to be Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance... it's a very long title, and there's no reason to assume that people know what that means. I'll give you a general sense of what the position entails. And then I'll also give you an overview of the report that I recently produced that focuses specifically on emerging digital technologies and racial discrimination tied to that. So as Special Rapporteur, essentially, I'm appointed by the United Nations as an independent expert, and the nature of the appointment is that I don't speak for the UN, but rather I speak to the UN and I provide them guidance on a number of things. One, it's my job to report to them on pressing issues of racial and xenophobic discrimination and intolerance on a global level. And so one of the ways that I do that is through thematic reports. I produce a number of thematic reports every year, which I present to the UN General Assembly, which is the body in which every country that's a member of the UN has a seat. And then I also present to the UN Human Rights Council, which is the big human rights organ of the UN. And so I present these thematic reports to them, and the report that I'll be talking about today is an example of such a thematic report.
I also engage in country reporting, and so I do country visits, and the country visits entail traveling to a country and reviewing, essentially, its compliance with anti-racism norms at the international level and producing a report that I present also to the General Assembly and the Human Rights Council and to that country, with an assessment of how it's faring with respect to racist and xenophobic discrimination. So since I was appointed in 2017, the countries that I visited are the United Kingdom, the Kingdom of the Netherlands, Qatar, and Morocco. And all four of those were really interesting, challenging, and also informative visits as well. And as you can see, those countries represent different regions in the world. I was supposed to be traveling to Brazil in November, but because of the COVID pandemic, I'm unable to do that. And during q&a, I'm more than happy to talk about some of the country reporting, which is something that won't be the focus of much of my presentation, but it's also work that I do. And then in addition to that, I also receive communications, or rather complaints, from individuals and groups around the world alleging issues to do with racial and xenophobic discrimination, and then I can bring those issues to the attention of governments. An example of the kinds of communications I deal with very recently, and it actually just went public... I received a communication from Asian American civil society organizations here in the US that were raising serious concerns with a rise in xenophobic and racist attacks against Asian Americans in the context of the COVID pandemic, and so they were very concerned about levels of violence in this country and the lack of accountability by the government. And so I issued a communication, which went public and received a fair amount of news attention in the last few days. That's another aspect of my job that I'm more than happy to talk about.
So with that background of the job in mind, I'm going to talk a little bit about this report that I produced on emerging digital technologies. When I say emerging digital technologies, really what I'm talking about are networked and predictive technologies that involve big data and artificial intelligence, with some emphasis on algorithmic and algorithmically-assisted decision making. And I want to talk about what motivated the report before giving you some idea of what the report itself finds. So within the UN system, there is a fair amount of discussion, normative discussion, around emerging digital technologies. And this conversation and the policymaking around it within the UN is really important, not because the UN will solve all the problems of the world, but because the UN is a really important global and transnational forum that allows for law and policymaking around issues of a transnational or a global nature. And emerging digital technologies today operate very much transnationally and globally, so the kinds of regimes of governance that you have beyond just the national context are really, really important. And so the conversation at the UN level around digital technologies is a very important one. And there has been a conversation around these technologies for a while within the UN, but a lot of the debate, a lot of the conversation around the policymaking, has really focused on privacy and data protection, and then also freedom of expression and norms like that. More recently, we've seen some attention to socioeconomic rights. So there is a Special Rapporteur on extreme poverty and human rights, who's a professor at NYU, Philip Alston, who recently produced a report on the digital welfare state in which he talked about how digital technologies are essentially reproducing economic inequalities. And his report is one that has expanded the kind of international human rights conversation around emerging digital technologies.
But what I noticed was that conversations around equality and non discrimination relating to emerging digital technologies were not very salient in the UN system and within the Human Rights Framework. And where there was conversation around these issues, they tend to really focus on things like content moderation on social media platforms, you know, so what counts as hate speech on Facebook or Twitter or any of those. And even in that context, really, the normative conversations were really focusing on free speech as the norm that was driving the conversations around policymaking within the UN, with equality and non discrimination kind of coming in as an add-on factor. And so the goal of my report was to really promote principles and ways of thinking about the ways that emerging digital technologies are interfacing with racial discrimination, xenophobic discrimination and equality and non discrimination norms more broadly so we could have a fuller and richer conversation at the global level and within the Human Rights Framework about how to make sense of equality and non discrimination norms in the context of emerging digital technologies. And I started working on this report about a year and a half ago. So the report really preceded some of the recent and really important consciousness around systemic racism that occurred, say, over the summer, following the murder of George Floyd and the kind of uprising against systemic racism that we saw in the US and other parts of the world, which actually percolated all the way up to the UN. And that's something else that I'm happy to talk about in the q&a, the conversations and the debates that have been taking place in the UN context about systemic racism and law enforcement debates that very much were prompted by dynamics in this country and other parts of the world. 
But the report really did precede those sets of events, even though I will say that the report received, I think, a lot more attention because of the increased awareness of systemic racism that came as a result of some of the uprisings that we have seen.

Tendayi Achiume 9:49

So with that background, I want to talk a little bit about what the report actually does, what it says, and give you a sense of some of my findings before we transition to q&a. So one of the first things that the report does is to highlight some of the political, social, and economic drivers of racially discriminatory use and design of emerging digital technologies. And this background understanding of the political and social drivers of discrimination is, I think, really important for regulators and for governments that are going to be regulating the problem. This is an approach that I take with my thematic reports: to think somewhat about the political economy of the context before just jumping into the way that the laws should apply. And so some of the background that I thought was really important to highlight in terms of what's driving discrimination is the following. And many of these insights are not new to the people who work on technology, but they're important nonetheless to highlight in the human rights context. So first: the report really addresses the presumption of neutrality that often accompanies technological innovation. And so in many parts of the world, in most parts of the world, whether we're talking about public discourse or we're talking about the level of policymakers, there's a presumption that technology is inherently neutral, inherently objective. And because of that presumption of tech neutrality, these technologies are shielded from vigilance around their discriminatory impacts. And yet researchers, especially critical race scholars of digital technologies (including some who are based at UCLA; some of you may be familiar with Safiya Noble and Sarah Roberts, who recently founded the Center for Critical Internet Inquiry at UCLA), have really shown that technology isn't neutral.
So for example, the report highlights the fact that a 2019 review of 189 facial recognition algorithms from leading developers around the world basically found that many of these algorithms were 10 to 100 times more likely to inaccurately identify photographs of Black or East Asian people relative to White people. And in searching a database to find any given face, most of them picked incorrect images among Black women at significantly higher rates than they did among other demographics, right? So we think of things like facial recognition as being racially neutral, ethnically neutral, gender neutral, but in actual fact, that's not the case. And so one of the things that the report highlights is that presumptions of tech neutrality do a lot of work to actually shield these technologies from the kind of scrutiny that's essential for understanding the racial discrimination in society that results from the use and design of these technologies. Another thing that the report does is highlight who holds power in the design and use of emerging digital technologies. So if you look globally, the two countries that have the biggest influence on emerging digital technologies right now are the US and China, and they have a disproportionate share of the global market for emerging digital technologies, which means that the values and the dynamics that underlie both societies very much shape the way that these technologies operate, even in contexts beyond their borders. In addition to highlighting the dominance of the US and of China, the report also highlights the roles of corporations in shaping the landscape of emerging digital technologies. And from an international law perspective and a human rights perspective, it's really important to grapple with the challenge that is raised by corporate influence over emerging digital technologies.
Because international law, international human rights law, typically regulates the conduct of states, and yet you see companies like Google, like Facebook, that are essentially supra-sovereign in some ways, you know, supra-sovereign in the sense that they exert even more power than individual countries in the way that their technologies shape the different societies in which these technologies are operating. And so highlighting the role of corporate actors and the need for corporate governance is another thing that the report does. And then finally, in this section that discusses some of the political economy factors, I highlight the fact that the problem isn't just, you know, extremists, neo-Nazis, and groups that are really explicitly committed to sowing racial discrimination and xenophobic discrimination through emerging digital technologies. Rather, it's that, you know, business models that are deployed by social media platforms, for example, actually exacerbate racial and ethnic forms of exclusion in ways that require attention, and that may not necessarily on their face seem to be about explicit racism and xenophobia. And then in the context of government adoption of emerging digital technologies, and I'll give some examples of government adoption, the justifications that are driving the adoption of technology have to do with logics of fairness and neutrality. But these logics of fairness and neutrality that are kind of pushing the adoption of these emerging digital technologies aren't being brought under the microscope to ask whether the technologies then actually bring about fairness and neutrality or whether they're producing discriminatory effects. And the report tries to highlight how, notwithstanding goals of fairness and neutrality, many of these technologies are actually reproducing the kinds of inequalities that we see in society.

Tendayi Achiume 15:51

So to shift now to some of the examples of different forms of racial discrimination in the use and design of emerging digital technologies. One of the challenges of producing these reports is that they have to have a global focus. It's only a 10,000 word report, but the goal is to give examples from different parts of the world, and I'll give you some examples that I highlight in the report. So one area of focus of the report is digital employment systems. So in different parts of the world (in the report, I highlight Central, South, and North America and also Europe), you have companies and governments that essentially are using digital systems to make decisions about employment. Many of those digital systems are actually reproducing hierarchies, racial and ethnic hierarchies, that already exist in labor markets, and doing so using proxies for race and ethnicity that are seen as race neutral but in their effect are actually excluding on a kind of racial or ethnic basis. In the healthcare context, I also highlight how you have similar dynamics, where you have software interventions that are capable of producing or reproducing racial hierarchies in ways that are really troubling. I give the example of an algorithmic system in the US that essentially was found to assign lower risk scores to Black patients who were just as sick as their White counterparts, leading to reduced medical intervention on a racial basis based on an algorithm, again, that was reproducing these kinds of racial inequities in ways that were flying under the radar.

Tendayi Achiume 17:30

The report also deals with the law enforcement context and highlights the use of emerging digital technologies in the predictive policing context and other contexts as well, and gives a number of different examples. For example, you have this database in the United Kingdom called the Gangs Matrix, which essentially is a database of youth who are suspected of being capable of or likely to engage in violent crime. You find that the database is 78% Black youth, when the statistics around who's committing youth violence actually show that the proportion of offenders who are Black youth is so much lower, closer to 27%. So the surveillance of Black youth is much higher on the basis of this database, even though their engagement with the practices that are supposed to be the ones that are targeted is so much lower.

Tendayi Achiume 18:24

And then the report also highlights racially discriminatory structures and contexts where there's actually a project of racial or ethnic exclusion that is operationalized through digital technology systems. And so it gives the example of China and the way that China has used emerging digital technologies to really target, exclude, and marginalize Uighurs in an intentional and just blatantly discriminatory way. And it also gives the example of places like Kenya and India that have adopted biometric identification systems that... you know, it's not necessarily the intention of these systems to discriminate, but on a de facto basis, they exclude ethnic minorities from public services, including due to predictable registration failures that fail to account for ethnic differences in ways that result in discrimination. And so, this is a very quick overview of some of the examples that I go through in the report. But the goal is to highlight both explicit forms of racist and xenophobic conduct that is, you know, made possible or made easier using emerging digital technologies, and structural and indirect forms of racial, ethnic, and other forms of exclusion and discrimination that are made possible by the design and use of emerging digital technologies.

Tendayi Achiume 19:48

The final part of the report focuses on recommendations, and it really makes the call to governments to rely on the international human rights framework for actually addressing racial and xenophobic discrimination in the design and use of emerging digital technologies. And so what I do in that section is really highlight the international human rights obligations that states have to prohibit certain forms of racial discrimination, including in the design and use of emerging digital technologies. And in the report, I lay out that these obligations require states to take concrete measures. So for example, in the ways that states engage with emerging digital technologies, they have to be accounting for the possibly disparate effect of the technology on racial, ethnic, and other groups. And what you find is that in most countries, even when public systems are adopted that are based in emerging digital technologies, it's very rare for governments to conduct impact assessments at all (and one of the recommendations that's been made by human rights advocates is that technology shouldn't be adopted without impact assessments). And to the extent that there are impact assessments, very rarely is there attention paid to the possible racial and ethnic discrimination that might result. So in the report, I say that governments need to be collecting data and ensuring that their impact assessments really account for the discriminatory or potentially discriminatory effect of the emerging digital technologies. I also highlight how, in addition to tracking the impact, governments should be including racial, ethnic, and other minority groups in decision making about the way that emerging digital technologies are used and designed as well. And then I also highlight in the report ways in which governments can hold corporations more accountable.
So rather than just relying on corporations to regulate themselves through corporate social responsibility, in the report I highlight the need for governments, in order to fulfill their human rights obligations, to impose legal obligations on corporations to comply with equality and non discrimination frameworks as well.

Tendayi Achiume 22:14

So I want to talk about one more thing before highlighting, again, why I think it's important to have this kind of a conversation at the international level. And the one thing I want to add is that the report also deals with intersectional forms of discrimination. It talks about how things like gender, disability status, and sexual orientation can all compound and actually transform the experience of racial discrimination, so that when states are thinking about racial and xenophobic discrimination, they also have to be accounting for the gendered and other ways in which discrimination operates in governance as well. So this is an important thrust of the report.

Tendayi Achiume 22:56

So I want to conclude... well, maybe this is not where I want to conclude; this is the second to last thing I'll say before I conclude... I want to just reiterate why I think reports like this one are important. And it has to do, again, with the transnational nature of emerging digital technologies and their impact. You know, a company like Facebook, which is an American company, has the capacity to shape societal dynamics globally. And actually, it was a UN report that was published that talked about the role of Facebook in amplifying human rights violations against the Rohingya in Myanmar, right? So when you have issues, problems, dynamics that are transnational, I think it's very urgent to have governance mechanisms at the international level that can weigh in and provide guidance on standards that should apply across borders. That's really the kind of impetus behind producing the kinds of reports that I produce. And I guess what I want to conclude with is talking a little bit about what my next report will be. So the report I've just been describing is a report that I presented to the Human Rights Council, which is the human rights arm of the UN, as I mentioned; I presented it in July. And my next report is actually to the General Assembly, and it's focusing specifically on emerging digital technologies and racial discrimination, but in the border and immigration enforcement context. So the report I've been describing deals with the issues across different aspects of governance and society, but my next report will be focusing specifically on the border enforcement context because of unique dynamics and challenges that that context presents. And now the final thing I'll say in conclusion is that in the work that I do as Special Rapporteur, I work a lot with students, including UCLA Law students, who provide really valuable research and other support.
So when I was producing the thematic report that I've just been describing, I relied a lot on support from the Promise Institute for Human Rights, of which Kal holds a chair, and the students in my international human rights clinic, and I actually see that maybe one or two of them may also be on this call. And it's been really exciting to be able to do some of the work that I'm doing at the UN in ways that involve and draw upon the expertise of people here at UCLA, not just our students but also experts like, as I mentioned, Safiya Noble and Sarah Roberts and the Center for Critical Internet Inquiry. So I'll stop here, because I really would love to engage with the audience and to take some of your questions. So Kal, I'll turn it over to you for the q&a portion.

Kal Raustiala 25:46

Great, thank you so much, Tendayi, for that really interesting overview of both your general work and the specifics of the report. There are a lot of good questions in the queue, but I thought maybe we could begin a little bit with your general approach and then turn to the report, which raises a whole set of, I think, really interesting and very topical issues. So you know, one thing is, you've mentioned a few... you talked about how your role as Special Rapporteur is one that is related to the Human Rights Council, but then you mentioned the General Assembly, and so forth. So maybe just take a step back for a moment and explain how these reports feed into the work of the Human Rights Council generally. What's the goal of a report like this? What are you hoping to achieve? And what is the Human Rights Council likely to do with a report like this?

Tendayi Achiume 26:38

Yeah, I think that's a really good question. And actually, it's a question that I get even from my interlocutors, you know; when I'm producing these reports, I rely a lot on civil society inputs and civil society consultations, and I get that question a lot. You know, "What will these reports actually achieve? Where do they go?" So I'll start off with where they go and the role that they play in the system, and then I'll tell you about what I hope for them to achieve. The reports become part of the UN system's human rights canon, right? So they become part of the record that documents the nature of violations that have taken place and the law as it applies to those situations. So to speak very concretely, when I produce a report, I have an interactive dialogue with the General Assembly or the Human Rights Council, which essentially means I present the report and then UN member states put questions to me about the report that are supposed to be tied to their implementation of those reports. And you know, when I took the role on I thought, "Oh, these are just going to be formulaic," right? You're going to just have states who come in and, you know, engage in a formulaic conversation, and some of the dialogues do feel formulaic. But every once in a while (actually, in every dialogue I've had) I get really useful questions from states about how to implement some of the recommendations, how to think about some of the problems. So the goal of these interactive dialogues and of these reports is supposed to be shaping the way that UN member states are approaching the problems. But even more than that, the reports become anchors for civil society advocates in the countries that are mentioned in these reports or that are working on these themes.
I often find that the reports are referenced most and used by advocates in different national contexts that are pushing for regulation or, you know, maybe engaged in litigation, and they'll rely on the findings of the report to push those issues forward. So the idea is that they are supposed to form part of how international human rights norms are understood at the global level, and then to also have influence at the national level through civil society organizations, and then also through other regional bodies. I often engage with, say, EU equality bodies where they are trying to elaborate anti-racism norms and want to ensure that EU norms comply with international human rights standards. So some of my consultations regarding my thematic reports are actually with bodies outside of the UN that nonetheless are trying to make sense of what the UN system is doing.

Tendayi Achiume 29:18

In terms of what I hope to achieve with the reports... I think I have two aspirations. It's somewhat difficult to assess the effectiveness of my work in relation to both of them, but I also think that they're kind of modest in a way. I've tried to set myself modest aspirations so that I could actually achieve them. One is just to ensure that the quality of the human rights record around anti-racism norms is as sophisticated as other aspects of international human rights law. What do I mean by this? You know, international human rights law is always developing, and there are areas of law that receive greater attention than others. For example, I think the international human rights expansion of norms around gender discrimination has really been powerful in the last, say, 20 or 30 years, and you see that sexual orientation and gender identity have received attention in ways that mean that framework is moving within the UN system. When it comes to anti-racism norms, I actually think that there's been a decline in attention to issues to do with anti-racism. So one of my ambitions, one of my aspirations, is ensuring that my reports are providing a sophisticated engagement with these norms, and one that really highlights contemporary issues in this area. So that's what I hope for in my thematic reports. And then the other part is that I really hope (and I try to make sure) that the thematic reports are resonant or consonant with work that is actually being done on the ground by different civil society actors that are working in local and national contexts to advance these issues. So I'll give you another concrete example (we've been talking about my thematic reports, but I'll talk about my country reports).
When I visited the United Kingdom, this was right after Brexit, and one of the big issues was the way that data around immigration status was being shared across different areas of government. The NHS, which is their public health service, was required to report to the government if anyone who was receiving public health services was out of status or if there were questions about their status. And in producing that report, I was very conscious of the kinds of movements that existed on the ground to ensure that the rights of migrants were protected. The country report ended up being useful in that struggle, which actually resulted in a moratorium on collaboration between the NHS and the government. So I guess the other goal I have is trying to, where possible, make sure that knowledge production at the global level is actually useful and resonant with local and national struggles as well.

Kal Raustiala 32:13

Great, terrific. So how do you choose the topics that you cover or the countries that you address? In this particular case, digital technologies, obviously, are very topical. But you're also, as you mentioned, sometimes visiting countries... Choosing between the two, give a little insight into your thought process: why you went this particular direction, and how you do it generally.

Tendayi Achiume 32:39

Yeah, I mean, in this you're really touching on something that's really hard about this job, right? It's a part time job, but the way it's framed, it's supposed to be three months of the year (which is false; it actually ends up being much more of the year), and the resources that are provided are also very limited. So in terms of what you focus on, there are so many issues to do with racism and xenophobia that one of the hardest aspects of the job is figuring out where to focus. When it comes to thematic focus, I've tried to do two things. One is to try and focus on areas of my own expertise as an academic and as a professional. A number of my reports have actually touched on, you know, global governance of migration. I mentioned I'm doing a race, tech, and borders report because I'm very familiar with that area of international law, and there happens to be a very big gap in the anti-racism normative framework focusing on overlaps with migration. So I think about my strategic advantage in the ways that I engage. But then I also solicit input from civil society organizations. I regularly have consultations with groups in different parts of the world (where they highlight what they think the pressing issues are), and I go for themes that are recurring and that haven't already been addressed by my mandate. And technology was definitely one of those. There's just been a lot of normative activity around emerging digital technologies that hasn't really focused on race, and there hasn't been a report by a predecessor who's done that. And then with the country reports, I can only go to countries that have invited me. That's already one constraint. And I saw in the chat box (or I happened to look) that someone asked, "Have I done the US?" No, I have not. There is an open country visit request to the US, but the US has not said yes yet. And there are countries I will make requests to that won't say yes.
So I go to countries that have said yes, which is a narrow subset, with an eye towards geographic diversity: all my visits can't be in Europe, all my visits can't be in Africa. And then within that, I try and focus on countries where there hasn't been attention from my mandate, because there are other people who've held this role. But it's really, really hard, and I can't pretend to have it down to a fine science. One of the constraints is figuring out how you prioritize where you go and how much attention you pay.

Kal Raustiala 34:58

Okay, terrific. So you mentioned the chat, and there are a lot of good questions, so maybe we'll turn to those and to the report generally. You know, there are a number of questions about the core issue that you raised about algorithms and technology, and there are so many dimensions to it. Several people asked about this, and I'll just use one of the specific questions because it's succinct: "Does the racial prejudice of the algorithms' coders lead to the racial inequalities in the algorithms? Or is there more to it?" So another way to pose the question is: what's causing the phenomena that you have described? I certainly, and I'm sure many people on this call, have heard others mention in some way or another the problems with algorithms. Is it a question of the data that's being used? Is it prejudice on the part of the designers? What do you identify as the chief causes?

Tendayi Achiume 35:54

So there are multiple causes, and you've highlighted a number of them. Sometimes it is the case that there is a discriminatory purpose in mind for the technology, and somebody is looking for technology that will produce discriminatory outcomes. They either commission that technology, or they use technology that wouldn't necessarily be discriminatory, but they use it in ways that are. So it can be intent on the part of the coder or of the user. Other times, it's the data that's input into the system. Predictive policing, for example, is one area where this has come up. If you're relying on databases that have been populated using racially discriminatory practices, practices that include racial profiling (say you've created a database around where certain crimes occur, and you've used racial profiling and other discriminatory practices to build that database), or, you know, you're using a database about who's previously been employed in a particular system but the data is actually reflecting gender hierarchies in labor and employment that have long existed, you're going to get discriminatory outputs, even if the coder isn't necessarily invested in discriminatory outcomes. So sometimes it's the data. Sometimes it's the coding. Sometimes it's that the outcomes are not foreseeable, right? The report talks about the black box effects of technology, which operate at two levels. One is that this technology is often proprietary. So we, the users, don't always have access to exactly where the problem is in the tech, because we don't have access to that information. We don't know if it's the data, if it's the coder, if it's something else. But then you'll speak to data scientists who will talk about how they can't always know what the technology that they're creating will do once it's out in the world, right?
So there are black box effects there, which refer to the fact that sometimes it's difficult to predict what technology will do when it's out in the world. So there are multiple levels at which discriminatory effects can be anchored, and it's important to pay attention to all of them. In producing the report, I spent a fair amount of time with experts in this field, because these aren't purely legal dynamics. I'm a lawyer by training, so it took speaking to people who study information science, or to computer scientists, to get this kind of information. I will add that one of the challenges that many groups raised is that oftentimes policymakers don't understand the technology, right? The regulators, the governments, the people setting the rules don't understand how the technology is working. And that's one of the challenges to ensuring that issues like discrimination are addressed in the way that the technology is being deployed.

Kal Raustiala 38:51

Great. So just to follow up on that, because there were several questions about it... You talked about the fact that data often reflects gender or, let's say, in this case, racial bias in the world. But then there's the specific example you gave in your talk about the fact that, in particular, one of the algorithms had trouble identifying Black women. So the questioner asked, "Why is this emerging software, like facial recognition, excluding people of color? Is it because of the images that are being selected for deep learning? In other words, are they not using a diverse population?" That seems to be driving some of it. In that particular case, people's facial appearances are just what they are, but maybe the diversity is not being reflected in the data that's being fed into the machine learning process.

Tendayi Achiume 39:44

So according to my understanding, and again, I don't want to claim to be a facial recognition software expert... But in that context, definitely, one of the issues was the data being used to train the algorithms, and also a failure on the part of the coders to think about representative data sets in designing baseline technology. This goes back to the issue of how representative and diverse the populations are that are producing this technology. This is something that's highlighted in the report, where I discuss a report produced by researchers in the US focusing on Silicon Valley in particular, which criticizes Silicon Valley, at the decision-making level and the design level, as being, you know, disproportionately white and male in ways that affect the nature of the technology being produced. So yes, in many cases it's the data being fed into these machine learning systems. But it's also the kinds of decisions being made about what data should go in and what algorithms should be used, where there isn't enough attention to how diverse societies are; you cannot create models that treat one particular group as the ideal type and then roll them out to a much more diverse population. So that's definitely part of it, and I'm sure a data scientist could tell us more, but it's a part of the story for sure.
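One way to see the problem described above is a disaggregated accuracy audit: an aggregate accuracy number can look fine while error rates for under-represented groups are much worse. The sketch below uses invented labels and numbers purely for illustration (it is not real benchmark data), with a test set that, like many real ones, over-represents one group.

```python
# Hypothetical audit sketch: (group, correctly_recognized) pairs from
# an imagined face recognition test set skewed toward one group.
results = (
    [("light_male", True)] * 95 + [("light_male", False)] * 5
    + [("dark_female", True)] * 6 + [("dark_female", False)] * 4
)

def accuracy(rows):
    """Fraction of rows the system got right."""
    return sum(ok for _, ok in rows) / len(rows)

# Overall accuracy is dominated by the over-represented group...
overall = accuracy(results)

# ...while per-group accuracy reveals the gap the aggregate hides.
by_group = {
    g: accuracy([r for r in results if r[0] == g])
    for g in {g for g, _ in results}
}
print(overall, by_group)
```

Here the aggregate figure sits above 90% while the smaller group's accuracy is far lower, which is why audits that break results down by demographic group, rather than reporting a single headline number, are what surfaced these failures.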

Kal Raustiala 41:18

Yeah, that seems really interesting, because some of the answers lead to very difficult potential solutions, others to much easier ones. Okay, so a number of questions came in about social media, but there's one I want to make sure I get to because it's timely; it relates to the election. "Do you have any guidance for voters on California's Prop 25, which would replace cash bail with what some are calling automated racial profiling? These both seem like terrible options. It seems like we're headed in the direction of relying too much on algorithms." So do you have any advice on that? I realize that's a highly California-specific question.

Tendayi Achiume 41:58

Yeah, I know, and this is always the issue. I'm spending so much time on global and international questions that I'm sad to say I haven't done enough of my own election research yet to know how to give you advice on this. But I can tell you that there are many resources at UCLA that you could use to make sense of that particular proposition, which is one that I'm not familiar with. So I already mentioned the Center for Critical Internet Inquiry, which is based here at UCLA and which has really powerful racial equality and racial justice analysis relating to different technological interventions. And then we also have a number of criminal justice projects, including through the Luskin Center, where I think you can get more pointed advice; I don't want to misguide you. But I think what I'm sensing in that question is a general skepticism that algorithmic interventions are going to be the cure, or that they are going to, without further scrutiny, lead us to more equal or more just societal arrangements. I think that skepticism is very, very much warranted, especially because a lot of the analysis by leading civil rights organizations within the US (I don't know about California-wide, but definitely Los Angeles-focused) has really raised concerns about the way that automated decision making is working in the criminal justice context and highlighted its racially discriminatory impact. I would also guide you to consult the resources of the coalition called Stop LAPD Spying (they have a very easy name, Stop LAPD Spying); they do a lot of racial justice-related analysis on things like legal developments in California, and they probably have something on that. So apologies for not being able to give you tailored advice, but those are some thoughts.

Kal Raustiala 43:53

That's okay. So just to follow on that in a more general vein... it's interesting, when you talked about the problems of automated decision making, the implication (I think, but I want to press you on this) was that we would rather have more discretion than more automation in decision making. Is that your general stance, that these technological interventions that tend to take human discretion out of the equation are problematic and we'd be better off with more discretion? Or do you worry that discretion would actually just reproduce the same problems, maybe worse?

Tendayi Achiume 44:33

Yeah, I think that's a good question. And I'm glad that you're pressing me on it, because it gives me an opportunity to clarify something. I'm not somebody who, at least at a policymaking level, is fundamentally anti-technology, and even the report is focusing on the design and use of technology. The technology is part of the issue, but it's the human agency and the way the technology is deployed that is more the locus of my concern. So to come to your question, the question for me becomes: which approach allows for more accountability in the event that we get discrimination? If you have individual decision makers, if you have people making these decisions... our legal frameworks have developed in ways that allow us to hold individuals accountable for producing racial and other forms of discrimination far more readily than they can deal with automated or algorithm-driven discrimination, right? So in the context of algorithms that are being used in this country to determine sentencing for different defendants... there's been litigation around concerns about the fact that defendants are being sentenced using algorithms that they have no information about, and that they can't get access to how those algorithms are operating, because those algorithms are subject to privacy and proprietary protections that mean you can't even figure out how the decision in your specific case was made. So I think if we want to have automation, if we want to have digital technologies take on more of a role in allocating resources and human rights in society, we should only do that when the mechanisms for accountability are strong enough that discrimination would be prohibited just as it would be if perpetrated by individuals, and right now that's harder to get a handle on.
I'll give another example from the report, which is the digital redlining concern, where you had Facebook targeted advertisements that were basically making it possible to target housing advertisements to particular racial and ethnic populations, right? And we know that this violates the Fair Housing Act. If you were to say in a newspaper, you know, this housing is only for Black people or only for white people, for example, that would be a clear-cut violation of the Fair Housing Act. Once it's taking place in the digital arena, not only is it harder to detect (because determining that digital redlining is taking place is harder than with redlining in other contexts), but being able to prove it in a court of law is also really difficult. So part of it has to do with where accountability sits. Where is it easier to get away with discrimination? And right now, there are not enough checks on emerging digital technologies. That's part of the problem.

Kal Raustiala 47:41

And just to clarify... I can't believe for a moment I'm going to sort of defend Facebook... but I think in that example (I'm not even a Facebook user), the housing was not necessarily discriminatory. It was the advertising that was. In other words, it was the equivalent of saying, "We're going to send this ad for this housing only to people of race X or race Y," which is more akin to the defense that I think Facebook mounted. So did they walk away from that decision? Did they change that, or do they still permit that sort of advertising?

Tendayi Achiume 48:15

So in the housing context, my understanding is that Facebook is actually trying to get rid of that. I mean, advocates have come back and said that they're not doing enough. But to my understanding, they didn't try to defend it further. And you're right to clarify that it wasn't the housing, but who they were targeting, which ultimately has the effect of kind of...

Kal Raustiala 48:36

The ad would only go to people of a certain race-

Tendayi Achiume 48:38

Right, right, right.

Kal Raustiala 48:40

Yeah, so something that our fair housing laws didn't really contemplate. But obviously, it could have a very discriminatory effect.

Tendayi Achiume 48:47

And de facto discrimination matters too. Under the Fair Housing Act, you're not allowed to engage in de facto discrimination either, so it's also about the mechanisms of de facto discrimination.

Kal Raustiala 48:57

So on Facebook: several questions about social media, which all go to the question of the detrimental effects that social media is having... Facebook in particular, large companies... So there are a number of questions about what governments can do to hold large social media companies accountable. You mentioned the issue of the Rohingya, which is one of the many examples where Facebook has been a vector for really horrific speech that has led to horrific acts, and that's true in many parts of the world. I think Americans don't always appreciate quite how dangerous Facebook actually is in many other places. So what can you say about what's realistic for governments to do, or what they should do, with regard to companies like Facebook?

Tendayi Achiume 49:53

Yeah. So you know, in the report, and just in general in my role as special rapporteur, I focus less on specific recommendations for policy reform and more on policy approaches, right? So to this question of what governments can do: one big shift is actually imposing legal obligations on corporations around human rights concerns, rather than leaving it to the frame of corporate social responsibility, which is voluntary. That's a lot of where you see human rights language entering how corporations are regulated. And this is tougher in the US, where the international human rights framework is not the go-to framework for lawmaking, right? There's actually an antagonism in many ways between the international system and what the US does. And this is different in other parts of the world. It's different in Europe, it's different in parts of the global south, where the international human rights legal framework is actually taken seriously as a source of legally binding norms. So one of the concrete recommendations the report makes is: let's think about what it would mean to have, say, mandatory impact assessments prior to the adoption of emerging digital technologies, assessments that require you to think through in advance how ethnic, racial, and other minorities are going to be impacted by the technology you adopt. These mandatory impact assessments are definitely not the status quo at this time, and that's a very concrete recommendation about things that could be done. It's not always going to be effective, but it would make a difference. In the context of social media platforms in general, one of the goals of the report is to make a recommendation that will seem not very concrete but is, I think, really urgent, which is: what do we think Facebook is, right?
Like, when we think about Facebook, when we think about Twitter, when we think about all of the social media platforms in particular, there's a way of understanding those platforms as places where you go to hang out with your friends, send messages, and express yourself, right? And the way that we as societies have approached governance of those platforms is very much, I think, informed by those baselines. The goal of the report is to say that we should understand these platforms as fundamentally impacting equality and non-discrimination, in ways that mean the demands we're making of governments should reflect the platforms' expanded role, because they are doing a lot; they're shaping elections. So part of the work of the report is also trying to shift thinking at that level, which I think can affect the kinds of accountability demands that are made in different contexts as well.

Kal Raustiala 52:44

Great. So I think we have time for a final question, which is, "Has the pandemic helped further highlight the details in your report about algorithmic discrimination?" So in other words, is there some way in which the pandemic context has changed or shifted the emphasis in the work you've done or in your specific report that you've been telling us about?

Tendayi Achiume 53:08

So just thinking specifically about this report, the report that I've been talking about? Definitely. This example is one that I don't have the statistics for, but in speaking to infectious disease doctors, one thing that was happening during the pandemic is that hospitals were using algorithms to determine, in some cases, who gets access to ventilators, because they were a very limited resource. And in some parts of the country, there were concerns about some of the algorithms doing that work: can we rely on these algorithms? Do they have inequalities baked into them that are going to manifest in this context? So questions of how technology is deployed, including in the pandemic context, are absolutely something I discussed briefly in the report, and they've become even bigger in the race, tech, and borders report that I'm working on now. Technology and the pandemic? Definitely. When we think about contact tracing apps, when we think about things like immunity passports, thinking about the equality and non-discrimination dimensions of the ways that tech is interfacing with the pandemic has definitely been really salient. So yes, in some ways the pandemic has definitely amplified many of the concerns and considerations that are addressed in the reports.

Kal Raustiala 54:30

Well, I imagine we'll continue to use these forms of technology. I mean, right now we're holding this conversation on Zoom. At some point we'll go back to normal, hopefully, but I imagine we'll continue to use Zoom in a way that we didn't in the past. And that's just a simple example, but I think there are many others where technology will end up being even more baked into social life, even when we don't need it, after the pandemic. So, in any event, a fantastic discussion! Thank you so much for coming on. We really appreciate it, and thanks to all the questioners for their many, many interesting questions. I know we weren't able to get to all of them.

Tendayi Achiume 55:09

Thanks for having me.

Kal Raustiala 55:11

My pleasure. Take care everyone.