
Duration: 58:29



Kal Raustiala 0:03

Good morning, everyone. I'm Kal Raustiala, director of the UCLA Burkle Center for International Relations, and it's my pleasure to welcome you to another one of our Zoom sessions. Today we have a very special guest, Peter Singer of New America. I'll introduce Peter properly in a moment. For those of you who have attended or tuned into these before, you know the format, but for the others: what we'll do is first hand the screen over to Peter in a few minutes, and he will present for about 20 to 25 minutes. Then he and I will have a brief conversation between the two of us, and then I will pose questions sent in by you, the audience. So please use the opportunity during the talk to send in some questions, and I'll choose among them, and we'll wrap in about one hour. That's our format. As usual, these sessions are recorded, and they'll appear on YouTube. So let me introduce Peter. First of all, this is an unusual event for us in that this is not an academic book, or a talk about current events, but in fact a novel about the future. And what's interesting about this novel is that - and I've read it and recommend it strongly; and before I go any further, let me emphasize that at the end we're going to have a slide with details on, I believe, a 35% discount, or at least a large discount, from the publisher for any of you who want to purchase the book, and I urge you to do so - it's a novel about the future written by someone who is really one of the nation's preeminent futurists. Peter is a fellow and strategist at New America, a longtime author, thinker, and analyst of issues around security, warfare, and technology. He's a friend and partner of the Burkle Center, someone I've known a long time, and someone whom I think many people turn to to think about what the future of conflict - and in this case, even societal structure - will look like.
And so this novel provides an opportunity to explore those issues in a kind of fun and engaging format. So it's really a pleasure to have Peter here. And with that, I'm going to turn the screen over to him. Thank you, Peter, for coming on.

Peter Singer 2:18

Well, first, thank you, Kal, for the incredibly kind introduction, and also to everybody working behind the scenes at the Burkle Center. You all have been amazing partners to New America, where I work, but also to me personally, and it's really appreciated. I also want to thank everybody who's joining us remotely today. And yes, the book that Kal talked about that's just out is called Burn-in, and as I'm doing this, I'm going to try my own burn-in, which is to ensure that our technology allows us to work here, because I know at a certain point you all will get tired of looking at my face; so hopefully you should be seeing the screenshot of the overall presentation. Burn-in is a new kind of book, but the starting point for it was actually a prior book that I did, a nonfiction book that I spoke to the Burkle Center about many, many years back, called Wired for War, which came out in 2009, so a little over a decade back. And it looked at how the once science fiction of robotics was now being used in our wars. Think about it this way: when our forces went into Afghanistan in 2001, the US military had only a handful of drones, unmanned aerial systems, and none of them were armed back then. And we had zero unmanned ground systems. Now the US military has over 22,000 of these systems in it. But we're not the only one. Nations that range from allies like Australia and Britain to adversaries like China, Russia, and Iran are all not just starting to use this technology but wrestling with all of the ripple effects that come out of it - everything from "How do you train for it?" and "What's the best doctrine for it?" to the legal and ethical questions that surround it, to even "How does it affect when you go to war?", if we think about the so-called drone wars, the operations that we carried out in places like Pakistan or East Africa. But that's where we were. Over the last 10 years, we've seen all sorts of changes since. One has importantly been in the form of the technology itself.
You can think of these earlier versions as almost like the horseless carriage: defined by what it was not - a horseless carriage, unmanned - rather than what it is - automobile, robot. And as you move away from that kind of vision, it weaves into the design. You go from just, you know, pulling out the seat for the pilot in the cockpit to getting all sorts of different forms in real-world robotics. That also means you can think about fundamentally different sizes: they might be teeny tiny, they might be huge; this system is the size of several houses. But the biggest change, though, is not in that sort of physical form. It's in the software, what runs or operates it, where the technology is becoming ever more intelligent and autonomous. This is the Navy's new robotic jet, the MQ-25. It can fly faster, it can fly further - these kind of, you know, physical changes - but the real difference is that it's smarter. It can do more on its own. It can do, for example, what any Navy pilot will happily tell you is the toughest pilot task of all: taking off and landing from an aircraft carrier on its own, and flying mission sets on its own, like air-to-air refueling and collection of intelligence, to eventually being armed. And you know, I love that the reality of this came out before the mythology of Top Gun 2. But the point is, it's not just affecting the military side. My sense is that there is a larger story going on, not just in terms of the technology and the military, but in the story of humanity itself. Think about it this way: over the last generation, everything that we saw play out in war moved even faster in the civilian world when it comes to robotics and AI. Industrial robotics? A 300% growth rate. Software? You know, 2001 - that was Arthur C. Clarke's vision of AI. Well, since then, we've seen AI disrupt everything from finance to medicine. But most importantly, we're only at the start of this.
There's no other area of technology where we are seeing as much promise, that is being funded as deeply, or that involves as wide an array of players as this one, where the cutting edge of hardware and software come crashing together. For example, $153 billion in spending by one study, with an annual creative, disruptive impact of $33 trillion. The participants in it range from the leading militaries - it's in the US military's National Defense Strategy - to some of the most powerful nations. China, for example, has said that it wants to be the world leader in AI by the year 2030. But it's also on the company side. All of the leading companies within the Fortune 500 - Google, Baidu, Facebook, you name it - but also non-traditional tech companies: both McDonald's and John Deere each bought up a leading AI startup, because they see AI as the key to, respectively, the future of fast food and the future of tractors. That part about buying up a startup points to what's playing out in the startup community, you know, all the different companies that want to be the next Microsoft, the next McDonald's. They're at play in this. In fact, the founder of Wired magazine said, "I think the formula for the next 10,000 startups is to take something that already exists and add AI to it." Now, this was all already in play before the coronavirus pandemic, but it's only been accelerated by it. We are seeing a move forward - in some ways a jump ahead to where we thought we would be at some point in the future, but getting there faster. For example, I'm very familiar with the realm of telemedicine. Telemedicine jumped in a matter of weeks to where the industry thought it would be 10 years from now. Or there are other areas where we got to where we thought, you know, we would never be - for example, distance learning and distance work. No one thought we would be at this scale.
We've seen the rollout of all sorts of robotics being used to do everything from delivering groceries to policing curfews to cleaning subways and airports and hospitals, you name it. And AI surveillance of society writ large, all the way down to individuals, is reaching a scale that was not just not envisioned in science fiction, but that not even the Chinese government had planned for. And the point is, after we make our way through the coronavirus pandemic, we're not going back. We will not go back 100% to where we were before. Now, if this is playing out - if we're headed into, you know, what's been called everything from a "new industrial revolution" to "the second Machine Age" - we've got some big questions to figure out. And I would argue there are three big questions. One: what is the impact of automation on the economy and everything that flows out from the economy? Now, while some people, you know, believe that new technologies lead to new jobs and opportunities, which is true, they also lead to job displacement and replacement. And when you literally rewire the economy, it happens on a scale that you don't see often. And there's all sorts of different research on what this scale will be. I actually assembled a database of over 1,300 of these different reports and projections. At the high end, you had Oxford University, which looked at 702 different jobs and found that 47% of US employment was at risk of reduction or replacement by robotics and AI over the next 20 years. Now, McKinsey looked at similar trends and said, "No, Oxford, you're totally wrong, it's not going to be 47%, we found 45%!" PricewaterhouseCoopers looked into the same and said, "Your methodology is way off, we found 38%." At the low end of the spectrum was the OECD with 9%. 9% of the economy being rewired. 9% of jobs. That's a really big deal. So wherever you fall in it, it's a massive change.
That means that even in the most optimistic scenario, where everybody who's displaced gets a new job, you still have a really tough transition period. But again, this is different than past industrial revolutions, because it's not the story of someone dropping a shovel and going to pick up a hammer at an assembly line. It's a tool that is intelligent, that can do more on its own. And of course, it's always learning too. So as the director of the MIT Media Lab puts it, "every area of life will be affected, every single one." And think about how the last Industrial Revolution had economic winners and losers, and as a result, political winners and losers - at the individual level, at the regional level, at the national level. Think about the ripple effects it had into politics, positive and negative. No Industrial Revolution? No such thing as children's rights, workers' rights, women's rights. No Industrial Revolution? You also probably don't get fascism or communism. So we've got these kinds of challenges to work our way through. But we also have a second challenge that looms. It's the idea that every time you get a new technology, you get new legal and ethical questions around it. It happened with the airplane, it happened with the computer. The same thing is happening with robotics and AI, except, again, because it is intelligent and more autonomous, you get new legal and ethical questions that we really never dealt with before. And there are two types. One, machine permissibility: what should this tool be allowed to do on its own? And the second is machine accountability: who owns this tool - not just what it gains, what it collects, but also who owns it in terms of accountability if things go wrong? And what's so fascinating is that these questions play out everywhere from our city streets to our battlefields, and across organizations that range from... you know, think about something like face recognition.
Organizations utilizing it range from the military, for targeting, to police forces, to businesses that range from tech companies to Kentucky Fried Chicken. And those questions have to be answered in terms of everything from, you know, what's the proper application, to what about privacy in a world of not just Big Brother but Big Colonel. This leads to the third set of questions: new security questions. You have a scale of collection on us and society writ large like never before. But also, when you add in AI, it's not just about history and identification. It's also about prediction and influence. So take something like face recognition. It matches a person's face to an identity. But it's not just saying, "Oh, that's Peter," whether it's Peter entering a train station, Peter going into a Starbucks, or Peter being assessed by a police officer; it's then matching that data to everything that we can collect about Peter and his history - every social media post you did, healthcare information, movement data. But again, with AI, it's not just about me as an individual. It's about prediction of what I might do next, and what people like me might do next, to then influence and shape what I might do next. But we also see new security questions here, again, like never before. When we thought about cybersecurity, for example, it's been mostly about theft of information: theft of intellectual property, theft of credit card data. Now, as we wire up the network of things that are out there, we move into causing physical effects with cyberattacks. As an example, just a couple weeks back, we saw targeting of chlorine levels in Israeli water. Now, I don't want to plot-spoil the book Burn-in, but if you think the water treatment plants in the US - which are primarily run by mom-and-pop-sized companies and local city governments - if you think they've got better cybersecurity than the Israeli government, I've got really, really bad news for you. But again, it's not just these new kinds of attacks.
Go back to the first issue: it's also new ideologies that might even spawn violence of a certain kind. The last Industrial Revolution, for example, spawned the Luddites, early craftsmen put out of work by factories, who then launched what we would call a terrorism campaign. They assassinated factory owners, staged roadside ambushes and mass street riots. Ultimately, the level of violence in Great Britain from the Luddites reached such a point that more of the British Army was deployed to suppress the Luddites than was deployed to fight the US Army in the War of 1812. That kind of puts into perspective what they thought was the bigger strategic problem. Now, real rapidly: if these are the three really big problems that we have looming, we've got three problems in how we face them. The first is that not enough attention is being paid to them, because it's not well understood and it's often thought to be well off in the distance. And you can think about this either in a numeric way or an anecdotal way. In a numeric way: they did a survey of leaders, and 91% of them said that, of all the technologies out there, what I just talked to you about - AI and automation - is the most important, the game changer. 91% say that, and here again, you see it woven into corporate strategy, government documents, you name it. But only 17% of leaders say that they have a familiarity with it, let alone awareness of all these dilemmas and application problems that I laid out for you. And if you know leaders, that 17% is probably a bit of an overestimate; if they say they understand it, the real number is probably lower. But even if they're being honest, that is a massive delta between 91% saying this is the most important and 17% saying I have just a passing familiarity with it. You also can see this anecdotally, as in the quote from Trump's Secretary of the Treasury, who said it's not even on our radar, because it's not going to be an issue for 50 to 100 more years.
That has proven false - in fact, not just for the distance of 50 to 100 years; it's false because it's already been an issue. For example, 85% of manufacturing job loss over the last two decades has been to automation, not to outsourcing. A second issue is that when we do talk about it, weirdly enough for something that's about networks, we treat it in a stovepiped manner. The people who are interested in the future of work are different than the people who are interested in the future of war, are different than the people who are interested in law and ethics, are different than the people interested in cybersecurity for the Internet of Things, etc. And then finally - and very relevant to most of you all based out in LA - is the problem from the world of fiction. There's an irony that we are on the 100-year anniversary of the creation of the word "robot." It was created for what we would call an early science fiction play in 1920, and it took the Czech word for servitude and used it to describe this fictional idea of a mechanical servant who wised up, grew intelligent, and then rose up, revolted. And ever since, that story of a robot rebellion has been woven through all of our science fiction and pop culture, the Terminator movies and the like. Now, that would be fine if it stayed within the realm of fiction. Except the third problem is that it shapes our real-world understanding. It's shaped everything from the massive attention given to killer robots - in everything from Pentagon discussions to debates in the United Nations - to the over $5 billion that's been spent by Silicon Valley tycoons and university research programs on the existential threat of robots.
Now, maybe one day we're truly going to have to wrestle with "do we fight or salute our metal masters," but in your and my lifetime, the real issues are those three that I talked about: going through the real-world applications of robotics and AI and living through not a robot revolt, but an industrial revolution. This leads to the project that we've launched called Burn-in. Burn-in is a book that is trying to go after this in a very different kind of way. The name is taken from the concept of a burn-in, which is when you push a new technology to the breaking point in order to learn from it. And so what we've tried to do is bring together a novel and nonfiction into the same form. So Burn-in is a techno-thriller. It follows the story of a hunt for a terrorist through the streets of the Washington, DC of the future, but baked into the story are over 300 explanations and predictions of everything from how AI actually works, to the ways that it's going to be used by the police, by the military, by Starbucks, to some of the dilemmas that we're going to see in our politics and our economy and our family lives. So it has 300 of these baked into the story, but even more so, it's got 27 pages of endnotes to document how these are all pulled from real research, not dreamed up - just like you would document in a nonfiction book, both to show where it's from and to tell people, "Hey, if you want to learn more, here's where you can see it." And so that's the project. And the idea behind it is that you can have fiction be what we call useful fiction - that is, it can carry across important information in an entertaining manner. And, somewhat related to projects with the Burkle Center, this format can actually turn out to be more influential than traditional academic approaches, not just in reaching more people, but in reaching leaders, and also putting it in a form - they call it a synthetic environment - that means people are more likely to act upon it.
As a parent, though, I liken it to sneaking fruit and veggies into a smoothie: you get the good taste, you get the entertainment, but you also get the good stuff. So in this case, it's through a techno-thriller rather than through a format like a white paper or a PowerPoint that people aren't going to share or read. And so with that, I really appreciate the opportunity to talk to you about it. And I'm going to take it off the share screen here.

Kal Raustiala 22:31

Thank you, Peter. That was great. So you actually stole my first question a bit, which was: why this format? But maybe just take a few minutes to, one, talk about - I see behind you on your shelf your prior kind of expedition into this world of nonfiction fiction, Ghost Fleet, which was very influential - and maybe you could say a word or two more about the way in which this format influences policy and why you chose it. And then I want to ask you a little more about the story of the book, so we can understand, you know, kind of how the fiction intersects with the nonfiction. But yeah, the format.

Peter Singer 23:09

Yeah, absolutely. And actually, that's a great link point, because it was our experience with Ghost Fleet that set us on this journey. August Cole and I - and August is, you know, like you and me, he comes out of the wonk nonfiction world; he was a Wall Street Journal reporter - teamed up many years back to write a book called Ghost Fleet. Our origin point was that we both had grown up loving early Tom Clancy novels, and we wanted to, you know, create our own modern version of one that hit contemporary issues. And that's what started us on it. We did this book that, you know, looked at essentially what a war between the US and China might really be like, but framed as a techno-thriller. And it, you know, turned out to be popular; it was a summer read. But something happened along the way: it actually ended up having greater influence on the real world than all of my nonfiction books. My nonfiction books had done well, and, you know, I'd had impact opportunities with them, but Ghost Fleet got me invited to brief at the White House Situation Room and at the Tank, which is the conference room of the Joint Chiefs inside the Pentagon. August got invited to the Nobel Institute in Norway. We actually tracked it to over 75 different organizations, you know: the 82nd Airborne, JSOC, the team that got Bin Laden. But it wasn't just that we were invited to brief the real-world lessons of this novel. It sparked changes that range from investigations by the GAO, to the Navy and the Marines running a war game called Ghost Fleet that changed some of their strategy, to the Navy creating a $3.6 billion ship program called Ghost Fleet (now, they gave us zero dollars for it; I should have had, you know, an agent from, like, CAA helping me with that or something - and the naming rights, we didn't get anything out of that). But, you know, the point is, it had this impact. And that struck us, and we had not intended that with Ghost Fleet.
With Burn-in, that was baked in from the start. Everything from the research topics that we chose, to moments in the story, to even character identities was drawn from this idea of serving both entertainment value and people getting something useful out of it. And real rapidly, the reason behind that is this packaging of what we call useful fiction - or, you know, a novel with footnotes, though there can be other forms of useful fiction; we've helped with projects that range from graphic novellas to some visual work as well, so it doesn't just have to be text. The point of this useful fiction is, there are three reasons why, to put our researcher hats on. One, studies show that the human brain is more likely to take in data from a synthetic environment than from even the most canonical academic sources. And the reason is, basically, narrative is the oldest communication tool of all. We've been using stories since we were sitting in caves. PowerPoint? It's only 30 years old. So of course our brain is tailored that way. The second, though, is that it's not just about pulling in data. It's also about leading to action, and that's because it connects to emotion. And, you know, for us, the reason why our work has had such an impact is we paint a nightmare scenario of something, and then people go, "Oh, what do I do to make sure that nightmare scenario, that a character just like me experiences... how do I make that stop?" And this is a great example of the impact of the book. You know, the novel's only been out a couple of weeks, and yet we've not only been able to brief it, you know, not just to UCLA folks, but to senators, a group of venture capitalists, US military units, the Australian military, the Canadian military; even part of the book was woven into the Cyberspace Solarium Commission report. The Cyberspace Solarium Commission is a US commission that is in charge of redoing all of US cyber strategy. The opening of their report is from Burn-in.
So it's a meld of US government document and sci-fi, and the reason why is that they thought this was the best way to get people to listen to their recommendations - to hit that emotion. And then the third reason why is sharing. No one ever said of an academic journal article or white paper, "Man, it kept me up all night; you know, just when I got to one section, I needed to go to bed, but, you know what, I had to get to the next section." No one ever said, "Oh, man, Kal, you're about to go on vacation? You know what you ought to read? There's this really good PowerPoint that I saw." They do that with stories. So, going back to the smoothie idea: let's take advantage of that, and let's do it in a way that shares the story but has baked into it something useful.

Kal Raustiala 28:54

Right, that's fantastic. And I of course agree. And as some of you know, here at the Burkle Center, along with Peter and his team at New America, we've been trying to leverage that exact process on a number of international issues. So I think that's great. So let's turn to the book itself. Maybe give us a little more flavor, if you can, of what the storyline is like - what's the setting? I'll say, having read it, the timing, you know, is a bit ambiguous about exactly when it takes place and what happens. And so it'd be great to hear - just kind of give us a little flavor.

Peter Singer 29:31

Yeah. So the book follows sort of a two-handed structure, two different character sets. On one side we follow our hero, Keegan, who is (and actually this is another gamble: once we had gambled with combining fiction and nonfiction, we gambled in another way, so to speak) a very different character than you normally see in techno-thrillers. She's one of the big differences - something that's too rare in techno-thrillers is that, normally, when there's a female character, they're the "one B" to the male hero, and they also tend to be fairly one-dimensional characters. Keegan is a former Marine turned FBI agent, but also a mom of a five-year-old and a wife in a marriage that's not doing too well, connecting to some of those trends that we talked about. Her husband lost his job to automation, and he's not dealing well with it; it's having effects on their marriage and the like. And we follow Keegan as she's assigned a new partner, a new kind of technology to test out. Then, as Keegan travels through Washington, DC, we get to see all of the different places. What is it like at a train station? What is it like at a Starbucks? What is it like in your condo? And through that we get to experience the issues and dilemmas. But she is on the hunt for a new kind of terrorist who is essentially holding the city hostage by going after cybersecurity vulnerabilities that were not possible in the past but, here again, are drawn from the real world. And, you know, the difference from nonfiction is that in fiction you don't want to give too much away, but basically, he has shown how to recreate micro versions of the 10 biblical plagues through cyber means. And what should, you know, scare people, so to speak, in the real world is that we can document how each one of these is drawn from reality - either from things that have already happened, or things that happened accidentally and someone might do deliberately. And so you get that two-handed story.
But then, just like, you know, Moby Dick is not really about the hunt for a whale, there are underlying themes that are surfaced by this, in terms of what people need to know: key dilemmas that we all will be facing moving forward. And so, you know, hopefully the pull of it is that for some people it will just be a pure summer read. We can plot-spoil by saying there is no coronavirus pandemic in it; you can enjoy it, you can have that kind of escapism. But hopefully it does provide things that are useful - and, frankly, useful because of what I was talking about before: many of the issues were already there, and the pandemic accelerated them. So a different way of thinking about it is that the dilemmas our characters deal with are going to come true more rapidly in the real world, in part because of the pandemic. One thing that I'll add in, that has been a little bit - not a little bit, very - disturbing to me, is that I knew the technology aspects of it would come true. What I was not ready for was some of the dystopian aspects of it. For example, there's a scene in the book where a militarized perimeter fence has been thrown up around the White House. And in the book, it was exactly at the location where it went up in the real world in DC - I live in DC. Or there was another scene that we thought was the ultimate dystopian image, of riot police gathered around the base of the Lincoln Memorial, and that happened one week after the book came out. So, you know, that's been the sort of striking thing: I knew one part would come true; I was not mentally and emotionally equipped for the other parts to come true.

Kal Raustiala 33:59

Yeah, that's amazing - disturbing and amazing. So in the PowerPoint that you showed us, you had a number of different issues that you raised, which obviously relate to topics in the book, but also topics that have been the subject of a lot of attention by many people, many disciplines. You and I are not economists. But, you know, one of the things that kind of runs through the econ debate around this is: is this time different? And how is it different? And you pointed to the Luddites. And what's interesting about the story of the Luddites is, you know, at one level, it could be a parable about how really nothing changes. New technologies come in and, you know, the buggy whip industry was really upset about the car, and guess what, now we have the automotive industry. So isn't this just the same thing? And obviously there's another side to that, which I think you are more inclined to believe based on what you said, but I want to kind of press you on that: the idea that this time actually is different, and it's really different in a meaningful way. And it's not just different for warfare, where obviously there are technologies that are truly distinct, starting with nuclear weapons and moving on, but also just more generally for societal kinds of purposes. So say a little more about where you come down in that debate.

Peter Singer 35:17

Yeah. So the first thing is that, I think - and often because they tend to look at it through, again, going back to that problem of one lens - even if they are right and it's just a repeat of the past, they miss how challenging, frankly how traumatic, it is to go through an industrial revolution. They're like, "yeah, you know, you will see more jobs at the end," and you're like, "yes, but you have this, like, 30-year period of transition; you know, not everyone easily moves over." When you think back to the last Industrial Revolution, here again, it was wrapped up in the story of everything from, you know, positives, as I mentioned - workers' rights, children's rights - but you also get fascism, you get communism. Think about how the story of the Industrial Revolution is wrapped up within the story of our Civil War. Literally, they were called compromises - you had political compromises between the North and the South that essentially frayed as the North industrialized and changed, and the South held on to slavery. But it's not just the origin; it's the story of the fighting of the Civil War too - you know, the reason the North wins is because it's this massive industrial power. So what I'm getting at is that, one, even if they are right, because they tend to look at it through only one lens, they miss: "hey, it's going to be really challenging to go through this," all the more in a nation right now that is more divided than ever, politically, economically, socially. These would be challenging enough; we add that in. But then the second is, as you noted, I don't think they're right that it's an exact repeat. And, you know, to the credit of the quote, "history doesn't repeat itself, but it rhymes" - it's slightly off. So yes, there are some similarities, but it's, again, a very different kind of technology, in that it's intelligent.
It's not just able to do more; it's also a learning machine, so it's constantly evolving. And that's true whether you're talking about its impact on medicine, warfare, policing, you name it. So one of those themes that I talked about, in terms of the underlying questions of the book, is that no, it's not the same old story of robot revolt against us. Instead, the core question that our character has to figure out, but also our society and our individual organizations (whether it's in policing or, as in a conversation I had a couple days ago, a group of investment bankers), is: what does the human-machine relationship look like? There are lots of different forms of it. It might be delegation to the machine; it might be partnering (and there are different forms of partnering). But whether it is in banking, policing, or education, what does that relationship look like? Where is it most effective? What are we most comfortable with? That is a core dilemma moving forward. And then the second is that we have a constant tug between competing priorities... think about it as a square: security, profit, convenience, privacy. Whether it is face recognition deployed by police or by Kentucky Fried Chicken, or the apps that your kids are allowed to use or not, you have to figure out where within that square you're most comfortable. And if you are not making that decision, someone else is making that decision for you; they believe in, you know, more profit at the cost of privacy, or more security at the cost of something else. And we see that debate playing out in everything from your kids' toy apps to Coronavirus tracing.

Kal Raustiala 39:34

Fantastic. So I'm going to turn to the questions in a moment. But let me ask you about the geopolitical implications of some of these technologies and trends that you cover. One thing that came to mind reading the book (for me, certainly, and I imagine for others) is that a lot of what you depict in it, the dystopian features, some of them we see already here in the US. Some of them we see in China already. And China is obviously far ahead of us in a lot of applications, many of which are fairly sinister, though not all. So I'm curious how you see the power of AI playing out geopolitically. Is there ultimately a sort of AI conflict, as is sometimes pointed to, between the US and China? Those are not the only two players, but they are the ones that come up most frequently. And if you believe there's such a conflict, who do you think is leading? So just give us your take on that.

Peter Singer 40:34

Yeah. So real rapidly, I think there are three issues that come out of it. And again, these are the sort of underlying themes baked within the novel, and for each of them there's no ready answer; these are issues that we're all going to have to figure out. The first is, you know, you described it as kind of dystopian and scary, but there is actually a fine line between utopian views of the world, including those pushed out as solutions to all our problems, and dystopian ones, depending on where you are in society, right? And we particularly see this as an issue with a lot of this stuff coming out of Silicon Valley, where it's always like, "this is gonna be awesome, it's gonna solve all these problems," and you're like, "yes, that's because you have a certain kind of background, and usually not a particularly diverse one, and you're seeing it as a certain problem set. And yet someone else is going to look at it and go, 'man, that is so dystopian.'" My recent favorite version of this is a smart toilet. It gives total new meaning to "Can I get some privacy here?" So one, the fine line between dystopian and utopian. The second, as you note, is what's sometimes called the AI arms race. And it's not like a past Cold War style nuclear weapons arms race with China. It's actually two competing visions of AI: one where it's highly centralized and government controlled, all data shared, monitoring of anything and everything, but all of it centralized. And then the other is the kind of cacophony that we have emerging in the US, where, as you move through the world physically, the amount of information collected on you and the uses of AI will be decided by everything from what company you're interacting with to the rules of that local city government.
So, you know, as we follow Keegan through the city of Washington (but it could be any city, it could be LA or whatnot), in some areas there's AI; in a train station, there's different data collection than there is when you enter a privately owned building; some universities are declaring no-go areas for face recognition, as compared to public spaces or a shopping mall. So those are two competing visions of it, and we have to decide which one we're more comfortable with. And then the third issue, on the global level but even on the domestic level, is the rise of super empowered individuals. I don't mean Avengers-style superpowers, but more kind of like Tony Stark without the Iron Man suit, in that you will have individuals who will have the power and influence that states dreamed of a generation past. And going back to that utopian-dystopian fine line, in certain situations they will help solve problems, but they also come at those problems with a certain mentality, and that means they may sometimes exacerbate them. And a particular issue in our world is that, because of the nature of our economy, these super empowered individuals tend to be engineers, digital engineers, who went to a very small set of universities and share a certain kind of mentality. And so they tend to look at almost every problem, political, social, etc., as just something that needs that one engineering solution. So this is a third theme that we're going to continue to have to face.

Kal Raustiala 44:33

Great, fantastic. So I'm going to turn to questions from the audience that have been sent in and there's a bunch. So let me start with a simple but interesting one that I think is probably on a lot of people's minds. So the question is, "what are some industries that will be affected by automation that most people wouldn't think about?"

Peter Singer 44:53

Great question, and it actually shows the utility of useful fiction. So in this database that we built, you can see all these different professions and the likelihood of replacement or displacement. There are the expected ones, you know, factory worker or truck driver or whatnot. And here again, we have a mismatch between job training programs and what these professions are going to be; even the university system is training too many people for roles that we'll see go down. But one of the unexpected examples is contract lawyer. A very lucrative field, average income of a couple hundred thousand dollars. Already, right now, an AI can do a better job statistically at finding errors in, for example, a nondisclosure agreement than human lawyers can. Not 10 years from now, now. And so this is an example of a field that won't be completely eliminated but, going back to that notion of the human-machine relationship, may see massive reduction. Now, you can see the kind of numbers, and this comes out of things like a McKinsey report, and PricewaterhouseCoopers, and Oxford University. Those numbers, though, don't become real to me in the same way as making a character in the story a contract lawyer. And then you see it's not just factory workers being replaced. "Wow, what does it mean for someone who went to a great school, got good grades, got a great-paying job, did everything the right way, and then that's pulled out from underneath them? And suddenly they're now doing remote gig work." And here again is another example of gig work being talked about as a solution set, but not if you play out the effect on your family, your marriage, your politics. So that's an example of an unexpected area where, when you play it out in a synthetic environment, you don't just get a more compelling character for a book, you also get to understand the dynamics of something real.

Kal Raustiala 47:12

Agreed. I would just add, one of the examples I've always found most striking is actually journalism: many stories about simple things, like the sports page, what happened in the Yankees game, or even the weather, are now literally produced by AIs. An editor might look quickly to make sure that there's no hitch in the process, but they're basically just generated. And on the lawyer side, as a lawyer myself, I think you're absolutely right; that's happening, we see that. One of the other factors implicit in all of this is that when that happens, what you tend to see is that the most basic tasks within a given profession or field, like contract revision or doc review, get automated, while the top level continues. And so one of the key implications of that, which we've seen for decades now, is inequality, and greater inequality. Those at the very top of the field, who really conceptualize the contract, are not automated yet (perhaps they will be someday, but we don't see that). What we see is the bottom tranche, the most basic tasks, go away, and that just creates ever more inequality. So that's going to have a huge social impact if it continues.

Peter Singer 48:28

There's an important point within that. It's not just, you know, high-level or low-level jobs; there are also jobs that have traditionally been portals to moving up. It might be first jobs: the fast food industry, for example, where it's not the elimination of everybody working in a McDonald's, but a massive shrinking of those numbers, so for a lot of people that first-job outlet is not going to be there. Or the legal field: an example that we've seen is a massive reduction of paralegals, which was an entry point for many people to then get to the next level. But more broadly, I think of the quote from the science fiction writer William Gibson: "the future is already here, it's just not evenly distributed." He was talking about technology, that you can sort of see the future coming, but we play it out in a different way, which is that you will have uneven distribution not just in the economy, but of the technology itself. And so, you know, not to spoil the plot, but we illustrate this through an early scene where we follow our character into Union Station, which is the train station in DC, and we get to see everything from "How will AI be deployed?" to "What is augmented reality in advertising?" And it just seems like all these cool things out of science fiction, and it is, but, oh, by the way, there's still homelessness, there's still crime, and yes, she's stuck in traffic because the companies are competing. And then there's another part that scene illustrates, which is a very important concept called algorithmic bias, kind of a wonky idea: essentially, if an AI is mistrained, or if it's fed the wrong data, it yields a biased outcome. It might be a biased outcome in giving you the wrong directions. Or it might be a biased outcome in terms of being racist. And we've already seen examples of this.
AI, as part of this streamlining in banking, was used to reduce the number of loan officers; an AI would screen who was eligible for bank loans, and what the AI was doing turned out to be screening out African Americans. No one told it to be racist, and yet it was generating a racist outcome. And so it's this really important concept to understand. Most people are not going to read an academic paper on it. But we can illustrate it through the scene where our main character is entering Union Station, trying to find a terrorist in a crowd. Everybody can visualize that scene. Hopefully their emotion picked up a little bit: oh, are they going to find the terrorist? But by the time you get to the end of that scene, you've walked away with an understanding of the basics of algorithmic bias.
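[Editor's note: the loan-screening dynamic Singer describes, a model no one told to be biased that nonetheless reproduces the bias in its training data, can be illustrated with a toy sketch. All data, names, and the "model" itself are hypothetical; a real system would be far more complex, but the mechanism is the same.]

```python
# Toy sketch of algorithmic bias: the model never sees a protected
# attribute, yet it learns a skewed historical pattern and repeats it.
from collections import defaultdict

# Hypothetical historical loan decisions: (zip_code, approved).
# Past decisions approved far fewer loans in zip "A" for reasons
# unrelated to applicant merit.
history = ([("A", False)] * 80 + [("A", True)] * 20
           + [("B", True)] * 80 + [("B", False)] * 20)

# "Train" the simplest possible model: approve an applicant if the
# historical approval rate for their zip code exceeds 50%.
counts = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
for zip_code, approved in history:
    counts[zip_code][0] += int(approved)
    counts[zip_code][1] += 1

def model(zip_code: str) -> bool:
    approved, total = counts[zip_code]
    return approved / total > 0.5

# Two otherwise identical applicants, different zip codes: the model
# reproduces the historical skew, with zip code acting as a proxy.
print(model("A"))  # False: screened out
print(model("B"))  # True: approved
```

Because zip code correlates with race in US housing history, a model like this can generate a racially biased outcome without race ever appearing as an input, which is exactly why "no one told it to be racist" is no defense.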

Kal Raustiala 51:42

Great, fantastic. So next question. How should regular people prepare in the face of this scenario? What can they do to ward off the most dire consequences for their own lives?

Peter Singer 51:53

So I go back to a couple of those issues that we laid out, as they apply whether you're setting US government policy or thinking about it in your own home and the roles that you play, be it as a student or as a parent, you name it. So one is preparing for these trends that are clearly in play; to echo back to Kal's question about the Luddites of the past, they couldn't stop the future. Absolutely, 100%, you can't fully fight the future; these trends will play out. It's more about your agency within them. So, for example, how are you training yourself to understand these technologies and these dilemmas? Is there a match between them and what you're ensuring that your kids, or you yourself, understand? I think about the parallel: if we were sitting 20 years back with the same question, you would say, "Hey, I know this stuff with computers sounds really complex, but you'd benefit from understanding the basics of it. I'm not saying everybody has to become a computer programmer, but this internet thing, you ought to pay attention to it, because it matters for your business, whether you are a doctor or you own a shop, and it also matters in your home." Same phenomenon here, in terms of its positive uses but also the new kinds of security risks. So it's ensuring that there is a match between the issues that you're soon going to face and how you're educating yourself, or those that you care about. The second is to go back to that square I talked about of the competing priorities: privacy, security, profit, convenience. You need to be aware of where you fit into that. And the decisions that you make, in everything from what apps you play with or not, or the settings that you have them on, all the way up to what technologies you're deploying for your university, for your business...
you need to decide for yourself where within that square you're going to emphasize the most and understand there's a constant give and take between it. And so having an awareness of that will allow you to do it responsibly, and emphasize what you care most about versus someone else deciding for you and as a result may be taking advantage of you.

Kal Raustiala 54:31

Great, so the next question relates a little bit to some of your past work as well. The question is: are you concerned about the unintended consequences of military AI, in that we become even more inured to human suffering, for example, the recent destruction in Syria? In other words, will this technology in the warfare context... I guess a corollary would be, would it lead to even more war? Not only are we inured to the suffering, but will it make war and conflict more likely?

Peter Singer 55:00

Great question. And I think a more important one than the killer robot narrative that, here again, a lot of the debate around robotics and unmanned systems has gotten stuck within, instead of these more essential questions: does it affect the likelihood of war? How does it affect behavior in war? So far, research shows that the distancing it allows does change some of the ways that individual operators of these systems might think. But really, what I'm concerned about is two things. One, the effect not on the individual operator of the robotic system (the soldier) but rather on the body politic. You can see it in the way that we talk, or even don't talk, about airstrikes utilizing unmanned systems versus manned systems: the very different way that we've talked about the air war campaign in Pakistan versus the one playing out right now in East Africa, where we don't consider ourselves to be at war, and yet the activity is quite similar. So it affects the body politic's discussion around it. But the second (and I think this is not just about the discourse on war, it also plays out in policing and the like) is that there is a repeated pattern of rolling out the technology without, first, well understanding the ripple effects, and second, without having a system of accountability for if the ripple effects are bad. And you can see this with everything from face recognition, which has been deployed by big-city police forces like New York and LA, to West Virginia, which has deployed it too; and yet we've not, one, figured out all of the effects of it, and, two, not figured out the accountability system surrounding it. So to me, those are the more real questions that we need to tackle, versus the kind of far-off sci-fi stuff. And going back to putting on my fiction hat, if I'm pitching to Hollywood, these are actually more interesting questions than yet another story of the robot deciding to come to life and attack its human master.
We've seen that before. These questions are new, and they also feel different; they feel real, because they are going to be real.

Kal Raustiala 57:50

Fantastic. So we're almost at the end of the hour. I just want to remind everyone, that at the very end, when Peter and I go off screen, there'll be a slide it's going to have a discount code. I urge you to use that to buy the book. Peter, do you have a copy handy that you could just hold up and wave?

Peter Singer 58:07

Ah, there we go.

Kal Raustiala 58:12

Just like our slide, but it's always good to see it. So thank you so much for coming on. Good luck with the rest of the book tour, and thank all of you for tuning in today. Have a great afternoon.

Peter Singer 58:23

Thank you, everybody.

Kal Raustiala 58:25

Take care.