Podcast
Air Date: March 14, 2025

#001: AI Snake Oil – Arvind Narayanan on AI Hype, Hopes, and False Promises

Arvind Narayanan

Arvind Narayanan is a professor of computer science at Princeton University and a leading expert in AI, privacy, and security. He co-authored AI Snake Oil, a book that critically examines the limitations and hype surrounding artificial intelligence. His research focuses on algorithmic accountability, and he frequently speaks about the ethical and societal impacts of emerging technologies.

In this conversation, Arvind Narayanan discusses the concept of 'AI snake oil,' which refers to AI technologies that are overhyped and often ineffective. He explores the implications of AI in various sectors, including hiring practices and criminal justice, highlighting the biases inherent in these systems. Narayanan emphasizes the importance of skepticism towards AI research and media reporting, advocating for a more informed public that understands the limitations and potential of AI technologies. He also discusses the regulatory landscape surrounding AI, the potential for an AI bubble, and the challenges in achieving Artificial General Intelligence (AGI). He emphasizes the importance of ethical considerations in AI development and the limitations of AI in content moderation, highlights the need for specific AI applications rather than general-purpose tools, and offers advice for young people entering the AI field.

Connect with Arvind
EPISODE TRANSCRIPT

Arvind Narayanan (00:00)
There was a lot of hype around the fact that ChatGPT can pass the bar exam or the medical exam. A lawyer's job is not to answer bar exam questions all day.

AI is used in many parts of the recruiting pipeline.

In some cases, you have to play a little game called the balloon analog risk task. The AI tries to see how long the candidate waits until they pop the balloon. If they pop it too soon, they'll get too little reward.

But if they wait too long, the balloon might pop before they can cash in, whatever, right? Yeah,

Siara (00:31)
That's so odd.

Arvind Narayanan (00:33)
So content moderation, the Zuckerbergs of the world say, we're going to use AI to solve this problem. We don't want any human bias in this process. AI is going to make the decision objectively. That, we think, is a pipe dream.

Siara Singleton (00:45)
Hi. Welcome to the very first episode of the Logout Podcast. This is a show where we take a step back and get curious about how technology is shaping our world. I am your host, Siara, and I'm really excited that you're here. So thank you for taking the time. Today, we're tackling one of the most urgent and often misunderstood topics in tech right now: artificial intelligence. Let me start by saying I am not by any means an AI skeptic.

Neither is my guest today. I am probably more bullish on AI than I should be, but I did notice that when we talk about AI in the public sphere, the conversation tends to be dominated by the sort of promises that come from AI rather than the realities. And a lot of the information we get is given to us by companies, which understandably are trying to turn a profit. That's what they're supposed to do. But the incentives there and the bias do impact how that information is given to us. Outside of

like niche tech communities, my opinion is that there isn't enough emphasis on what's happening behind the scenes with this technology. AI is often marketed to us like it's magic, but it simply is not. It's really cool, but it's not magic.

It can be tempting to learn AI solely from the flashy, self-proclaimed AI experts who became gurus three years ago. And don't get me wrong, some of them are great. There are a lot of very talented individuals who have spent the last few years mastering a bunch of super helpful applications of AI for their specific discipline. I have a handful I can think of right now. I think it's important to remember that AI didn't just appear out of nowhere. It's been around since the 50s. It's

powering things we don't even think about anymore, like spell check and Google search results and email spam filters. So the people who've been driving AI forward have been here. They're mathematicians, they're computer scientists. They're kind of like the OG AI experts. And sometimes in all the hype, we forget about those voices. Within niche communities, these people are well known, but outside of tech, their insights don't always reach the mainstream conversation. So if you're investing in the space,

or think AI might impact you, which let's be real, it probably will, learning from these voices is crucial. And I might add very fascinating and fulfilling. I love this stuff, genuinely. But anyway, it's also helpful to just understand AI's limitations and how to actually leverage what it's best at. That's why I recommend turning to these voices for the fundamentals.

So we're extremely fortunate to be joined by one of those voices today. There is no better person to help us untangle the AI hype issue than my guest, Arvind Narayanan.

If you work in AI or follow conversations around tech ethics, you probably already know his name. He's a professor of computer science at Princeton, where he leads the Center for Information Technology Policy. He's also one of the most respected voices in AI accountability, with a recent focus on debunking AI hype.

He co-authored AI Snake Oil with Sayash Kapoor. This book breaks down what AI can actually do, what it can't do, and how to tell the difference. After reading it last year, I definitely feel a little more equipped to separate fact from fiction. Like I said, I was pretty bullish.

I've wasted a lot of money on crap AI software. I wish I had this book like four years ago, but it's okay because we have it now and it's still early. So in this episode, we'll talk about why Arvind thinks certain AI applications are fundamentally flawed or just straight up fake. We'll explore the real risks of AI, not just the dystopian sci-fi fears, but the everyday harms that are happening today.

So if you're using AI or trying to figure out how it might impact you or others, or maybe you're curious about AGI like I am, I think you're gonna love this conversation. We did film this conversation in 2024, and we all know how fast this world moves. So some events I mentioned are a few months past us now, but the information is all wildly relevant today. Let's dive in.

Siara (04:43)
Arvind, welcome to the show.

Arvind Narayanan (04:44)
Hi Siara, thank you for having me.

Siara (04:47)
Thank you so much for being here. I read your book earlier this year and I thought, this is the exact type of person who I would love to come on the show and just discuss a realistic viewpoint of AI. So I want to start by asking about just the term AI snake oil, which is the title of your book. It's such a great metaphor; for me, it really paints a vivid picture of the issue at hand.

So could you just share what that means to you? What's your official definition of AI snake oil?

Arvind Narayanan (05:17)
Sure. The way we define it in the book is that AI snake oil is AI that does not and cannot work. So a simple example is there was this company that made waves claiming they had built a robot lawyer. There was no robot lawyer. Or in the hiring area, there are companies that claim that their AI will look at a video of a job candidate speaking, not even about their

job qualifications, about their hobbies and so forth. And just by using body language, facial expressions, that sort of thing, figure out their personality and job suitability and be able to be used for selecting candidates. I mean, there's no evidence that that kind of thing works. We strongly suspect that it doesn't. These companies are not very transparent. I'm pretty comfortable calling that AI snake oil, but the book is a lot broader than that. It talks about all of the hype that's out there, even when there may be

a kernel of truth to the claims. Like, you know, there was a lot of hype around the fact that ChatGPT can pass the bar exam or the medical exam. And that's true in a certain limited sense, but it's being used to imply that ChatGPT can do the work of a lawyer or a doctor. And there's such a huge difference between those two statements because a lawyer's job is not to answer bar exam questions all day. So that's really what our book is getting at, right? And how do you separate what is actually

Siara (06:35)
Mm.

Arvind Narayanan (06:40)
genuinely improving about AI from the kinds of things that companies want it to be used for, but it can't quite do yet, or maybe will never do.

Siara (06:50)
Right. And I mean, it's become more of a public conversation, obviously, recently, as AI products have become more consumer facing. But the research that you've conducted and your hypothesis came well before AI was such a consumer facing topic. So one, I'm curious, for my non-technical audiences who are not as much thinking of those use cases that the government or another business might adopt,

Was the book written before or after AI technologies like GPT were released to the public? Can you just share a timeline for everything in relation to what we've seen for the mass public and what you've been working on?

Arvind Narayanan (07:28)
Sure. I mean, I love that question, because a lot of the time we have sort of AI influencers out there who a few years ago were, you know, COVID experts, right? So basically LARPing or cosplaying. So that's not what we're in the business of doing. I mean, for me, for the last 15 years, I've been doing computer science research that's motivated by the fact that tech companies don't have enough accountability.

Siara (07:39)
Yeah.

Okay.

Arvind Narayanan (07:55)
And I've been looking at AI specifically for about 10 years now. Back then I was looking at biases in AI, how it can lead to discriminatory effects, what we need to do about that. The specific research that has gone into this book started around 2019 or so. And that's when I started noticing a lot of these products that were being promoted in HR. And I also started thinking about, okay, so there are biases in these.

risk prediction algorithms used in the criminal justice system, for instance. Yes, we know that that's really problematic, but does it work at all? Is it fair to anyone to jail someone based on a prediction that they might commit a crime in the future as opposed to a determination of guilt? How have we so readily accepted pre-crime? How is this so prevalent in our society? So.

Siara (08:25)
Really?

Arvind Narayanan (08:46)
I wanted to do research to understand that. And that's around when Sayash Kapoor joined me. He came from a background of working on AI in the industry. So he was both aware of its potential, but also its limitations, how there are some exaggerated claims out there. So we've been doing research for many years now.

Siara (09:05)
Yeah. Can we talk specifically, I want to talk about two examples that you've brought up. So the first one would be the hiring. So can you just kind of explain to the audience what these technologies are claiming, and then maybe what job seekers should be expecting or, you know, can guess is happening on the other side when they encounter AI in a job application. Of course, not all will be able to detect it, but what can people expect, and is there any accuracy to these systems?

Arvind Narayanan (09:36)
So AI is used in many parts of the recruiting pipeline. Even before you apply for a job, AI may have been used to write parts of the job description, for instance. And then when you do apply for a job, it can be used for some sort of resume screening and then for more mundane things like scheduling interviews.

In some cases, you might interact with an AI system and that is leading to some sort of assessment. I was talking about video assessment earlier, but that's certainly not the only one. In other cases, you have to play a little game. There's one, for instance, called the balloon analog risk task that I've seen many times. The AI tries to see how long the candidate waits until they pop the balloon. If they pop it too soon, they'll get too little reward.

This is all play money, obviously, but if they wait too long, the balloon might pop before they can cash in, whatever, right? Yeah, I mean, I see the look on your face and believe me, I also have trouble understanding why these are so widely deployed. But then after you're hired, again, you know,

Siara (10:32)
That's so odd.
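
The balloon game Arvind describes boils down to a simple risk-reward trade-off. Below is a minimal sketch of that scoring logic in Python; the reward per pump and the burst range are invented purely for illustration, since the vendors' actual assessments are proprietary and their internals are not public.

```python
import random

REWARD_PER_PUMP = 0.05   # play money earned for each pump (assumed value)
MAX_PUMPS = 32           # the balloon bursts at a random point up to this limit (assumed)

def play_round(pumps_attempted: int, rng: random.Random) -> float:
    """Payout for one balloon: stop too early and you earn little;
    wait too long and the balloon bursts and you earn nothing."""
    burst_point = rng.randint(1, MAX_PUMPS)
    if pumps_attempted >= burst_point:
        return 0.0                      # popped before cashing out
    return pumps_attempted * REWARD_PER_PUMP

rng = random.Random(0)
cautious = sum(play_round(5, rng) for _ in range(1000)) / 1000
risky = sum(play_round(25, rng) for _ in range(1000)) / 1000
print(f"average payout, cautious player: {cautious:.3f}")
print(f"average payout, risky player:    {risky:.3f}")
```

The claim being sold is that where a candidate lands on that trade-off reveals their personality and job suitability; as Arvind notes, there is little evidence for that leap.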

Arvind Narayanan (10:48)
various automated systems are going to be used in how people are assessed for promotion, for evaluation, various things. And I'm sure there are many that I've missed, right? So it comes in in many parts of the process. And what's happening in response to that is that job seekers are also using AI to automatically...

create or embellish their resumes or cover letters, or tailor them to the specific wording in the job description, apply to as many positions as possible. So it's just kind of leading to an arms race. And I don't think this is healthy in any way. I don't think this is helping anyone. Ultimately, I think in the job seeking process, we just have to accept and embrace the human element of it.

knowing whether someone is going to be fit for a particular position depends on so many things that go beyond just the numbers that can be extracted out of someone's CV. It might depend a lot on their relationship with their potential future manager, for instance, whether they're able to work well together and that sort of thing. I mean, those are difficult decisions.

hiring managers can't really predict who is going to be good at a job as a result of that. So I can understand why they're so tempted to try to rationalize all of it and turn it into math and somehow magically pick the best candidates. But I just don't think that's going to work.

Siara (12:09)
Yeah, and it's obviously a cesspool for inequality, but I can't help but think, and I'm, you know, I don't really think this is a good way to find your team, but with inequity already happening, with unconscious bias already happening in the job interview process just by a human, do you think it will be easier to teach an AI not to be

discriminatory, or a human? That is my question. It seems like we've been trying to teach the humans to work on their bias. I couldn't say how much that has progressed. So I've heard the argument. But my thing is, it's so random what biases could come up. And you kind of speak to how these predictive AI models

Arvind Narayanan (12:41)
Yeah.

Mm-hmm

Siara (12:57)
don't really account for randomness. They don't really account for something even like a miracle. So what do you think of when someone brings up that point?

Arvind Narayanan (13:06)
Yeah, so I think in some limited sense it is easier to train AI to be less biased, and that's true, and to the extent that that's possible, we should do that. I think we should keep in mind that so much of the bias out there is structural, and it doesn't reduce to these moments of individual assessment. So a good example is what kinds of positions are companies using these automated hiring tools on?

Like nobody's using that to hire me, right? So, you know, for anyone who is in a relatively high status job and has bargaining power, if you will, these AI tools are not being used and they're, you know, they're being judged as individuals. There is a human on the other side, if I'm applying to a job, you know, as a researcher or a professor or whatever, who is really trying to understand what my intellectual contributions are and whether

they think those are substantial. And I think that is a kind of basic human dignity that we are all entitled to. However, these AI tools are being used for jobs that companies consider to be more replaceable. And I think the kind of discriminatory effect here is the fundamental indignity,

I think, of being interviewed by AI and not having the opportunity to make your case to whoever the hiring manager is or who your boss is going to be in the future. And the fact that this is being used almost exclusively for these lower status, lower wage kinds of jobs, I think that is deeply problematic to me, and you can't get rid of that by training the AI to be


Arvind Narayanan (14:48)
less biased in the assessments of individual resumes.

Siara (14:51)
Mm-hmm. I want to talk about some of the applications in the criminal landscape, because that's the ethical side that really worries me. Have you seen any studies to prove that these "I'm going to predict if this person's going to commit a crime again" claims work?

Arvind Narayanan (15:11)
I mean, there are so many studies and we know what they say. These algorithms are able to pick up basic statistical patterns, right? And so they work a little bit better than random. These statistical formulas basically are able to pick up on the fact that younger defendants, if released, are more likely to be rearrested for a crime. What they can't tell you is,

the difference between being arrested and actually committing a crime. Because these software systems, or really anyone in the justice system, don't have access to who actually committed a crime. They can only observe who got arrested for a crime. So those biases get baked in. That's one thing. And the other thing is, think about

the fact that these algorithms are more punitive when it comes to younger defendants, right? So mathematically, that's the right thing to do because according to the data, they're more likely to re-offend. But if you ask any judge, they'll say, no, it's exactly the opposite. We should be more lenient toward younger defendants for many reasons. Morally, you might consider them less culpable because their brains are less fully developed when it comes to moral reasoning.

But practically, you might think that they have more chances to be rehabilitated, to have their behavior corrected, et cetera, to turn on a different path. And so for those reasons, judges don't behave that way. Even when you show them these high risk scores, they don't think that younger defendants should be treated more harshly. So that tells us many things. First of all, it's not that accurate. It's picking up these basic patterns, but you can't actually see into the future and figure out who's going to commit a crime. That's the first thing. Second,

Even after you've done these algorithm de-biasing things, you're fundamentally limited by the fact that you don't have the true data, if you will. Not only can you not see the future, you also cannot see the past. You can't actually know who committed crimes. And it's that problematic data that these tools are necessarily learning from. That's the second thing. And the third thing is that even if you somehow solve all of those problems, the decision that is most accurate from a predictive perspective

is not the one that's most morally defensible. And that is an even more fundamental problem. And I think all of these are reasons to think about whether we should be using these risk prediction algorithms at all.
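
As a rough illustration of the label problem Arvind describes, here is a minimal sketch (assuming scikit-learn is available, with data invented for illustration): the only outcome such a system ever observes is rearrest, so the "risk score" it produces is an estimated probability of rearrest, with whatever biases produced the arrest data baked in.

```python
from sklearn.linear_model import LogisticRegression

# Invented data: one feature (age at arrest) and the only label the system can
# observe, "was rearrested" -- not "actually committed another crime".
ages       = [[19], [22], [24], [31], [38], [45], [52], [60]]
rearrested = [1,    1,    0,    1,    0,    0,    0,    0]

model = LogisticRegression().fit(ages, rearrested)

# The "risk score" for a 21-year-old is just a probability of rearrest,
# learned from the basic statistical pattern that younger defendants in the
# historical data were rearrested more often.
print(model.predict_proba([[21]])[0][1])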

Siara (17:28)
How often are these algorithms actually used in the real world?

Arvind Narayanan (17:32)
I think in the US they're used in the majority of jurisdictions. I don't know if it's the majority of defendants, but I would not be surprised if they were. To be clear, the algorithms are not solely in charge of a decision. The risk score is presented to the judge who ultimately makes the decision. But as you can imagine, there is a strong pressure to just go along with the algorithm prediction.

Siara (17:54)
Right. On that same topic, you explain why predictive AI often fails, but you also argue that it may never officially work, especially when predicting human behavior. So I'm wondering, one, if you want to add any color to that, but also, is there anything that would change your mind on this? Is there anything that you would see possibly in a study or an experiment that would make you think maybe it is possible

in coming years.

Arvind Narayanan (18:23)
I mean, in principle, it's very easy to establish that you can predict the future, right? Make those predictions, wait for the thing to happen, and you can see very clearly how well the predictions panned out. It's just that every time we have done that experiment, the answers have not been very good. Yeah, so in principle, that could change in the future. The reason I'm not holding my breath is when we look at all of the fundamental barriers to this.

Siara (18:39)
Mm-hmm.

Arvind Narayanan (18:48)
Yes, there's something fundamentally unpredictable about human behavior, but that's not even the main thing. There are even bigger barriers here. So for instance, let's say there are, just sticking with the example of defendants in the criminal justice system, let's say there are three people who are arrested for the same crime and statistically they all look identical to each other. And one of them is deeply remorseful. The second one doesn't care.

And the third one is itching to finish the job. And so if you could truly see into their minds and be able to figure out their true mental states, maybe you would do a good job of prediction. But no AI system or any human system is going to be able to do this because it's so easy to fake remorse. And of course, every defendant would do that.

Siara (19:17)
Mm-hmm.

Arvind Narayanan (19:35)
the data that you need for good predictions is fundamentally not accessible. And that's one thing. And then another thing is just gaming the system in other ways as well. Some of these risk prediction tools go based on a series of questions, things like, is your room neat or tidy? So that's in the hiring space and the criminal justice space. Some of the questions are like,

Do you often feel bored and restless, or things like that? So you can imagine why the developers of these tools would think that they would correlate with bad outcomes. But of course, once you start using these tools on a massive scale, people will start figuring out what's going on and will give the answers that will produce better outcomes, better predictions for them. And why shouldn't they? They absolutely should. And those are, I think, all fundamental

limitations that aren't so easily going to change.

Siara (20:25)
Okay, so jumping around a little bit, you've also made the case that we should all be a little bit more skeptical about any conclusions made from AI or machine learning studies, by non-computer scientists specifically. And you've pointed to researchers who possibly are conducting these flawed AI-based studies. In fact, you've said most machine learning-based research out there doesn't hold up. And correct me if I'm wrong. That's scary. That's very scary. So I would love for you to explain the data leakage effect, because when I read it, I thought it seems so obvious, but it could easily be a common mistake, and even like an innocent mistake with not-so-innocent consequences. So please enlighten us.

Arvind Narayanan (20:54)
Yeah.

Yeah, for sure. And just to clarify, I think we should also be very skeptical of AI research conducted by computer scientists. I think we're no better. It's just that the kinds of mistakes tend to be different. And also when there are mistakes in AI research that's about building new AI technology, it gets discovered relatively easily because companies will try to build products out of it and it just doesn't work. And so you know if something went wrong and you go back to the drawing board. But if you're building AI for

healthcare or something like that. These mistakes can persist for years and are not easily observable because you're making predictions about people's futures, for instance, right, that are years in the future. And so it can be hard to spot them. So let me give an example. Epic, which is a healthcare technology company, built a sepsis prediction tool. So sepsis is an infection that can be deadly in hospitalized patients. And so it's important for doctors

to have early warning that someone is developing or might develop sepsis. And so I can see why they built it. It's a very well-motivated tool. It's different from the ones in criminal justice, for instance. I'm not doubtful of the value of the tool in the first place. It's just a question of, did they build it correctly? And it turned out there was one widely used tool in hundreds of hospitals. It took a few years for someone to independently evaluate it.

It turned out that the way that Epic had gone wrong is that one of the signals that the algorithm was using in order to predict whether someone would develop sepsis is whether they had been prescribed antibiotics for treating sepsis. Right? So what's happening here is that the AI is being trained on data from the past, and the data doesn't clearly record


Arvind Narayanan (22:58)
when someone developed sepsis versus when they were prescribed antibiotics. And so it's essentially using data from the future. It's all mixed together in the data of past patients, but when you're trying to use it on future patients, obviously you will not have data on them being prescribed antibiotics at the moment when you want to predict whether or not they will develop sepsis in the future. So it results in...

a tool that's basically useless or works much less well than anticipated. And yeah, this seems like a really basic error, like what were they thinking? But in fact, you can see reasons why this might have happened. If you're building one of these things that's going to be deployed in hundreds of hospitals, all of those hospitals might have kind of incompatible

healthcare database systems, and that variable might be recorded differently in many of these databases. And it's just hard for a developer sitting there to ensure that among the hundreds or thousands of variables that are going into the AI system, every one of them is being coded and being inputted in the correct way. And so yeah, like you said, these are all innocent mistakes. Nobody is trying to do a bad job here, and such mistakes have certainly come up in

research that I myself have done. I don't think anyone is above this. So it's less a matter of individuals and more a matter of, systemically, you know, what are the processes we're going to put in place so that we can have more trust in research and the scientific enterprise and in the various industries that are so eagerly adopting these AI-based decision making technologies.
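
Here is a rough sketch of the leakage pattern in the sepsis example, with invented toy data and assuming pandas is available. The point is that a column that is only recorded after the outcome, like antibiotics prescribed to treat sepsis, makes a model look accurate when evaluated on historical records, while being unknowable at the moment a real prediction has to be made.

```python
import pandas as pd

# Toy table of past patients. "sepsis_antibiotics" is recorded only after the
# outcome it is meant to predict, so it is "data from the future".
past = pd.DataFrame({
    "heart_rate":         [88, 95, 102, 78, 110, 85, 99, 120],
    "sepsis_antibiotics": [0,  1,   0,  1,   0,  1,  0,   1],
    "developed_sepsis":   [0,  1,   0,  1,   0,  1,  0,   1],
})

# Any model trained on this table can score perfectly by effectively copying
# the leaked column, so the evaluation on historical data looks flawless.
predictions = past["sepsis_antibiotics"]
print("historical accuracy:", (predictions == past["developed_sepsis"]).mean())

# But for a patient being assessed right now, that column is unknowable:
# antibiotics for sepsis are prescribed only after sepsis is already suspected.
new_patient = {"heart_rate": 104, "sepsis_antibiotics": None}
print("value available at prediction time:", new_patient["sepsis_antibiotics"])
```

Dropping every variable that would not exist at prediction time is exactly the check that the independent evaluation of the Epic tool found to be missing.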

Siara (24:33)
You've also been critical of the reporting on AI. So are journalists consulting with AI experts like yourself or someone else before potentially publishing misinformation about AI? Because I've seen quite a few headlines that have already been disproven. Are they just relying on narratives that are fed to them, or how is that all working, and what should be happening for more responsible journalism on AI?

Arvind Narayanan (25:04)
Yeah, I mean, the thing is, everyone's doing the best with the limited resources that they have, right? So that's, in one sense, the depressing part of this. If there were someone who is clearly incompetent, whether it's the researchers or the journalists or whatever, there's that famous Steve Jobs quote that if it's a conspiracy, that's good news. You can overthrow the people in power. But if what we're seeing is the result of everyone just following their incentives, that's really hard to know how to fix.

Siara (25:31)
Yeah.

Arvind Narayanan (25:32)
So I think, with journalism, there are a lot of limitations. There is so much oversimplified stuff out there, stuff that is just kind of reprinting or rewording companies' press releases, basically. And it's not because journalists are incompetent or trying to do a bad job. It's because newsrooms are under such financial pressure and you have to get three stories out in one day or whatever it is. And so, yeah, journalists are contacting

experts like myself. But what that's going to lead to is a quote. You know, paragraphs one through five are the amazing thing this company said they've built, and in paragraph six, here's a skeptic, as it's often framed, saying, you know, maybe we should be careful. It's not going to make a difference to readers' minds, I think. To really tackle these problems, we have to think about it structurally. So one thing we're a fan of

Siara (26:14)
Mmm.

Arvind Narayanan (26:26)
is new funding models for journalism, more kind of nonprofit newsrooms who are funded by grants, for instance, and a reporter is funded for a year and they have autonomy to say, okay, I'm just going to do three stories this year, but I'm going to report them really deeply and I'm going to spend two months of that year, you know, becoming an expert on AI, right? So that's a very different way to even think about what the role of journalism should be and how it should be practiced.

And I think the good news is that there is a lot of innovation happening along those lines.

Siara (26:59)
You mentioned access journalism, which was a new term to me. Can you explain?

Arvind Narayanan (27:04)
Access journalism is when a reporter wants to report on, you know, typically it's a company, but it could be something else, government, et cetera. And a lot of the most valuable information is going to come from insiders, right? Which is very natural. There's nothing inherently wrong with that. But to be able to get that information, it's often very important to be in the good graces of the company or whatever organization it is.

So the reporter has to play this very delicate balancing game between writing stories that are going to bring some accountability to this organization, but at the same time, not pushing that too far in order to be able to maintain access to these insiders so that they don't get cut off. And I think that's something I would guess that many journalists do to varying degrees. Some do more than others.

Siara (27:47)
Mm.

Arvind Narayanan (27:56)
So I think it's less a binary, you know, are you an access journalist or not? And it's more of how do you strike that balance, where do you draw the line? And again, yeah, it's a hard question, but I do think that in some cases, journalists have too little skepticism and are kind of choosing the wrong side of that balance, if you will.

Siara (28:17)
A lesson I learned from your book, which might seem obvious to most, is that as a non-technical person, I look at those technical folks in the public eye and I thought, this is a good source for accuracy about technology. But something that your book made me start to think was, it's not always the sparkly, most famous, most rich technologists that are the best people to talk to about a certain technology or some other discipline that I don't personally have education in. And so I'm curious if you have any advice for those of us who are interested in topics like these but don't know how to navigate a world where there's a lot of misinformation flying around; there's a lot of information, period, flying around. It's very overwhelming. What do you recommend is the best way to really learn about not only AI, but technologies in general?

Arvind Narayanan (29:08)
Yeah, it's a hard problem. I think there is something we can learn from everyone, from many different sources. I don't think we're going to find any one source who is the best in every way on any particular technology. So yeah, people working at companies, you know, they might have a lot more practical knowledge than I might, but on the other hand, they might have an incentive to hype up the technology. But also, they're in their own kind of bubble. Like, you know,

Siara (29:37)
Hmm.

Arvind Narayanan (29:38)
Silicon Valley, right? And the extent to which that's a bubble is really, really hard to fathom. So a lot of engineers who are in that bubble think that the whole world is biased against them and wants to, you know, push down tech, if you will. And the whole media is biased against the tech industry. It's basically in conspiracy theory territory. You know, that...

Siara (29:57)
Mmm, interesting.

Arvind Narayanan (29:59)
Yeah, that might be hard to believe, but that's where a lot of this, that's one of the places where a lot of this hype is coming from because of the sense that there is so much anti-tech bias. And in order to overcome that, you have to really hype up and sell these products to overcome the inertia that's out there. Right? Like every technologist will tell you, look at these lawyers. They're so threatened by AI taking their jobs. So they're all basically, you know, conspiring.

to pretend that AI is useless at legal tasks. Well, we've done the research, and it's mostly pretty much useless as of today; that might change in the future. And so you've got a bunch of people who are very smart technologically, but completely misunderstand the world, think that everybody is kind of out to get them and don't recognize that just because a tech is advancing quickly in a purely technological dimension doesn't mean

Siara (30:35)
Really.

Arvind Narayanan (30:50)
that it's ready to be adopted in all of these kind of nuanced industries and societal settings. And so when people don't adopt the tech, they're very confused about why this is happening. And they conclude that it must be that there is some huge conspiracy going on. So that gives you a sense of the extent to which even experts can be, you know, they can be experts in one dimension, but can be deeply biased and confused in another sense. Right. So, and on the other hand, you know, academic experts like me, yeah, maybe

Siara (31:14)
Mm-hmm.

Arvind Narayanan (31:18)
We don't have a lot riding on the success of some tech product. And so I don't have necessarily those same incentives. But on the other hand, yeah, I'm in kind of my own academic bubble and I'm not the best one to tell you what sorts of biases come along with that. You should ask someone else, right? And so I think it's super important to understand what are the incentives and biases of different groups of people so that we can adequately account for

how the things they are saying might be distorted. So that's one thing. And another quick thing I'll point out is that tech is somewhat different from a lot of other areas of expertise in that it's actually very easy to get some basic fluency in it, but precisely because of how much hype there is, because it's portrayed as being created by these geniuses, right? So many people are intimidated.

Siara (31:50)
Mm-hmm.

Arvind Narayanan (32:12)
And, you know, take for instance, learning to code, right? This is something that really anyone can do. I'm not saying everyone should do it, but anyone can if they chose to. And there are a lot of people out there who in their jobs would really benefit from having a basic understanding of coding so that they can know when, you know, when a tech product is being pitched to them, does this seem reasonable? Is this even the kind of thing that computers can do, right? But because they're intimidated, they haven't taken that first step to try to learn to code.

And when it comes to AI, there's even more basic things. Spend a few hours playing with generative AI, right? This is something that everyone can do and I would go so far as to say everyone should do. And that is just gonna help you develop your intuition a lot more than reading news stories or watching videos, or even if you don't mind my saying it, listening to podcasts

right?

Siara (33:05)
I don't mind

at all.

Arvind Narayanan (33:07)
Yeah, so I would just emphasize the agency that every one of us has to be able to develop our own understanding instead of having to defer to the opinions of others.

Siara (33:17)
Yeah, and you even mentioned in the book how the media often portrays AI like magic, or God-like, even accompanying it with robotic imagery, which I thought was hilarious because it's so true. Especially in the earlier days, well, earlier for me, days of AI making headlines, it's always this stock image of a robot and a human hand touching or something. And it's, you know, it's comical, but

I mean, there is some harm in that where it kind of portrays AI as something maybe a little bit more intelligent than it actually is. Would you agree with that?

Arvind Narayanan (33:51)
I mean, yeah, especially when they're talking about, I mean, AI can mean many different things, right? That's one of the central messages in the book, but especially those robot images are hilarious when they're talking about a glorified Excel spreadsheet, right? And how we visualize and think about AI is going to dramatically change the kinds of questions we might ask about it. Even if we're inclined to be skeptical, if we're thinking about it as a robot, we're gonna ask, you know, is it

Siara (34:00)

Yeah.

Arvind Narayanan (34:19)
Is it going to turn on me? But if we think of it as a spreadsheet, we're going to ask, OK, where is the data coming from? Who has done any kind of quality check on the data? Those kinds of questions.

Siara (34:27)
Yeah, and that's something that I would personally like to see more of, is just transparency around what data, not just the LLMs, but really any company, is using.

And then when you brought up the sort of conspiracy theory going on in the tech world where people are trying to suppress technology for their own gain, it did make me think about how just snake oil was addressed in America, correct? Was it some sort of government structure, like an FTC-like structure, that said, okay, no more, or FDA or something like that? FDA, okay. Yeah, so I mean, I'm thinking about, I know the FTC is the tech industry's favorite government sector.

Arvind Narayanan (35:02)
FDA, yeah, exactly. Yeah, yeah.

Siara (35:14)
And I see antitrust laws are usually the controversy there, but it feels like for AI, they might be the ones to say, hey, hold up, you can't claim this. And so have you seen that happen? Do you think it will happen in the future?

Arvind Narayanan (35:28)
The good news is that it's happening. So I started out mentioning the robot lawyer company, for instance; they did get into trouble with the FTC, the Federal Trade Commission. And I found it very enjoyable to read the whole complaint that the FTC had filed in court, with so many great screenshots of all the lies that the company had on its website and how they had fabricated quotes from people, making it seem like this was an amazing product, and various other things.

The challenge, I think, is less so about expertise in government. That is a challenge, but not as big a challenge as people often assume. It's more about budgets. I mean, the FTC goes after much less than 10% of the companies they could be going after. Right. Much less than 10%, maybe more like 1%. So I don't know what the right solution is there. I don't think the FTC should be 100 times bigger, but maybe

Siara (36:11)
Mm.

Arvind Narayanan (36:20)
there's a more efficient way by which our enforcement agencies can incentivize companies to be more attuned to the law and ethics in what they're doing, even if not every one of them is actually getting into trouble with the regulator.

Siara (36:36)
So do you feel, do you have certain, maybe not regulations in mind, but where would you like to see that go? What does the future look like for a safe and productive environment for AI to flourish in a way that's not so shady?

Arvind Narayanan (36:52)
I think a lot of this has to happen sector by sector, right? So we talked about the FDA for a second. I mean, there is so much AI in medicine. I mean, a lot of it is great. The FDA actually has approved many, many AI-based medical devices. But I think there's more to do. I gave an example of a flawed AI claim that wasn't discovered for many years. So how can you structurally change things so that it's easier to identify what works and what doesn't?


Arvind Narayanan (37:19)
But also in medicine, there's a whole gray area, right? People are now using ChatGPT for self diagnosis. So I'm not saying they shouldn't. I understand why they might, especially considering that it takes weeks to get a medical appointment or people might not be insured. There are so many reasons why people often take their health into their own hands. I'm not gonna judge them for the circumstances

that lead to those choices. But I think there has to be some sort of oversight. I don't think the answer is to get ChatGPT certified as a medical device and get it licensed; that's too onerous, right? So there needs to be some innovative, lighter-touch way of having oversight over the use of these new digital technologies, you know, for some tasks that are traditionally regulated in very different ways.

Siara (38:04)
Yeah, we've talked a lot about how there's so much buzz and there's so much hype. There's also just so much money going into AI right now. Not only the money that's being invested into these companies, but I'm thinking about the employees that are joining these companies that might not have a legitimate AI. I'm thinking about everyone who might purchase a product, whether it be a consumer or a business, that's not actually

going to give the results that they were looking for. And so I'm wondering if you think that all of this could eventually lead to an AI bubble and what that might look like.

Arvind Narayanan (38:35)
Yeah, that's definitely possible. I'm not going to make that prediction specifically. One reason is that something that is fundamentally built on hype can nonetheless last for a very long time. And so even if one thinks something is a bubble, the timing of that bubble can be very hard to predict. So when we look at generative AI specifically,

Siara (38:53)
We will.

Arvind Narayanan (38:55)
Just in hardware alone, companies are on track to invest more than a trillion dollars in building up these data centers that power AI. It's just an incomprehensible amount of money. And so far, they're not seeing gains from AI that are nearly on the same order of magnitude. So yes, it's possible that there is going to be a bubble popping. But at the same time, I will say that in the long run perspective,

Siara (39:05)
Yeah.

Arvind Narayanan (39:19)
We are pretty optimistic about generative AI. We do think it can help every knowledge worker. So, Sayash and I are programmers, for instance. I think most programmers today are already using large language models. It's really changed the way we do things. Frankly, it's hard to imagine going back to a time before AI assistance for programming, not because AI is better at programming, but because it just takes so much of the drudgery out of it, right? And so if AI were to have a similar impact across the economy,

Siara (39:43)
Mmm.

Arvind Narayanan (39:48)
then yeah, of course it would be worth that trillion dollar investment. And I do think that's probably going to happen gradually over some period of time. I just don't know what that time period is and whether the investors who have collectively put in that one trillion dollars are going to be okay waiting around for that long.

Siara (40:04)
Yeah, and it's definitely been a question mark for me of like, what is the outcome that we are looking for? Is it more productive tools for knowledge workers? Is it generative AI that's just making everyone's lives easier, saving them time and money? And then, you know, you also hear the goals of something like artificial general intelligence, or AGI. Sam Altman, OpenAI's CEO, recently expressed confidence about achieving AGI, and he said they have the hardware available. It's possible with the existing hardware. So as a non-technical person, I said to myself, okay, so what does that mean? I don't think most people know all of the components of hardware that go into an LLM like ChatGPT. So is that significant?

Is there a lot more

that needs to happen to get to AGI? Or is it pretty clear that that hardware exists and really most of the innovation that needs to happen is within the software, within the cloud technologies?

Arvind Narayanan (41:07)
Yeah, I have a fairly extreme view on this, and Sayash and I have a paper, AI as Normal Technology, which gets into the details of a lot of this. I think what you need for AGI has almost nothing to do with hardware, and much less to do with even software than a lot of technologists would claim, but instead the hard nitty-gritty work that has to go into taking all of the millions, really, of subtle details that are part of any job description and making them accessible in a form that AI can actually learn them and be able to perform all of those tasks in the economy. I think right now we're nowhere close to that. You know, the way ChatGPT learns, right? Reading text on the internet. I mean, it's kind of like learning to ride a bike by reading a description of it.

Siara (42:01)
Mm.

Arvind Narayanan (42:01)
It's made it good at generating text, but to be able to perform in the context of a real job, when a lot of the understanding of what it takes to do that well is tacit, is too nuanced to ever have been written down in text anywhere, let alone on the public internet. And so we're starting to see these really strong limitations when you wanna actually go from text generation to...

doing something that's more complex. To give an example of this, just in the consumer space, one of the things that companies have been trying to build for the last couple of years and have been claiming is imminent this whole time is AI that's gonna do things like book flight tickets or do our shopping for us, right? And this is just a perfect example. To us, it seems like something that is kind of so rote, so basic.

so annoying that we want to offload it to AI. But all of the little details that we don't even think about that go into that process. It's not something that AI has been able to learn by reading text on the internet. The way you could teach it to do that is by having it do it millions of times and learn from its mistakes. But that's hard to do because unlike making mistakes in predicting text,

Siara (42:50)
Mmm.

Arvind Narayanan (43:17)
making mistakes when doing shopping means that you've wasted money. So they haven't been able to train it that way either, right? So that just gives you a really basic sense of how different doing real world tasks is from the kinds of things that chatbots are doing now. And then if you want to go from there, you know, from travel booking to something more complex, again, imagine all the things that a lawyer or a doctor does; it's a couple of orders of magnitude more complex, right? So to me, the hard problems that need to be solved to get to AGI,

Siara (43:21)
Yeah.

Arvind Narayanan (43:45)
to have AI doing a variety of tasks in the economy have to do with how you're going to integrate AI with existing organizations, institutions, existing workflows, those sorts of things. Those are socio-technical questions. They're not purely technical questions. The AI industry completely fails to understand this. And that's the reason they have been over and over and over again, wildly over-optimistic about AGI. And I can...

fairly confidently say that they're going to continue to fail to learn that lesson.

Siara (44:15)
Very interesting. So would you say, like for AGI, do these companies know this, or is this a form of snake oil in itself?

Arvind Narayanan (44:25)
I mean, they're seeing AGI as a property of a technical system, right? To me, that fundamentally doesn't make sense. It's like asking if, I should come up with a good analogy for this. I don't have one off the top of my head. Yeah, sorry for the digression there.

Siara (44:32)
Yeah.

Okay, you have incredible analogies. I love all of the analogies in the book. For anyone considering reading this book, it's very easy to understand because of the wonderful metaphors. So you get a pass. So yeah, I feel like AI in itself, the term, is so interesting, because when we talk about intelligence in the context of AI, what does that really mean? Do you think, especially because the definition of AI isn't very clear, do you think that is what's leading to some of this shadiness?

Arvind Narayanan (45:14)
Yeah, for sure. I think this really plays to companies' interests. Because there's no clean technical definition of AI that you can use to say this is AI and this is not, there's no definition police, if you will. And I don't think there can be one.

Companies like to rebrand things that have long existed as AI if they want it to seem more cutting edge or something that doesn't exist yet as AI if they want to try to convince people that even though this seems too good to be true, they have actually managed to build it because AI is advancing so quickly, like the video analysis software or the robot lawyer, right? Which doesn't work and in a real sense doesn't exist.

And so, yeah, that leads to a huge amount of confusion. And my wish is that we would even use the term AI much less. And when we're looking at the criminal justice system, for instance, I would rather call it automated decision making or some other much more boring term, which more clearly describes what it is. But it's hard to do. I mean, it's the term AI that's bringing us together in this instance, right? Because everyone wants to talk about it. And I would have wanted to use the term AI much less in our book, but our publisher very wisely, and I agree, said that that would not be a good strategy, for obvious reasons. And so we're kind of all stuck in a way contributing to this hype, and that's, yeah, I don't know how to break out of that.

Siara (46:35)
Yeah.

Would you like to see a formal definition for AI in the future?

Arvind Narayanan (46:45)
I mean, I don't think it would make sense because like we say in the book, it's an umbrella term for so many different things. So you could have formal definitions for those specific things, right? So we coined a definition for something that we call predictive optimization. That's the kind of AI that gets used in hiring and criminal risk prediction, that sort of thing. And we have a pretty clear definition for that. Obviously no one wants to use that term. A few of our academic colleagues use it and we're happy with that.

And so it's a challenge between defining a term precisely and having that term be used broadly enough that people broadly have some shared understanding of what it is. And it's hard to achieve both of those at the same time.

Siara (47:27)
Switching gears a little bit. Can you please explain, for those of us who are interested in the ethics of AI or how AI is progressing, the three warring schools of thought about the progression of AI? My understanding is that it's AI safety, AI ethics, and then something called e/acc. Yes, this is me getting into the nitty-gritty of the internet and trying to understand, and there's a few groups of people that you've mentioned so far that have me thinking they're part of e/acc. Well, I'll let you explain it, because clearly...

Arvind Narayanan (48:00)
Sure. Yeah,

so the AI safety community is probably the most well known at this point. They're worried about kind of more Terminator-style risks of AI and think that there should be strong intervention to minimize those risks, even if it takes something as dramatic as pausing the development of AI. The AI ethics community is in many ways an older community that coalesced about a decade ago and is more concerned with the present and immediate harms of AI, such as discrimination, a lot of the things we've talked about, and the labor appropriation that goes into AI, for instance, right? The fact that there are millions of people in developing countries who are making a dollar or two an hour and are doing work that's really drudgery and terrible in many ways, like...

filtering out the worst of the internet in order to train AI. So they're concerned with those kinds of harms and are worried that all of this talk of futuristic, super powerful AI is making it harder for us to address these present harms. And then e/acc is in contrast to both of these. They're kind of, in a way, the opposite of the AI safety community. They see technology as the solution to a lot of the world's problems and want to accelerate the development of AI and technology in general so that we can get to that much happier future, and are also skeptical of the safety dangers of AI.

Siara (49:30)
And you don't specifically subscribe to any of these, from what I've heard.

Arvind Narayanan (49:34)
I mean, yeah, all three that I've presented are somewhat caricatures. And I don't know that there is anyone out there who precisely fits into any one of these buckets. I think the existence of such sharp divisions in the community is unfortunate.

I don't know. I mean, I see our role as, you know, instead of being committed to one particular cause at all costs, to just really look at the evidence and go where that takes us. That's hard to do, but we're at least trying to do that, I think.

Siara (50:05)
You mentioned the content moderation use for AI, and in the book you talk about how AI won't necessarily solve that problem. Can you explain why not? As someone who is concerned for those humans who do that work, and knowing that that's been causing some major trauma for the workers in that space, that's something I was hoping AI would be able to help with. Can you explain why it might not fully solve that issue?

Arvind Narayanan (50:32)
Yeah, definitely. It does help. So content moderation is what happens on social media platforms when we post things online. People often have to look at it and see which posts or videos or whatever violate the platform's policies and have to be taken down versus which ones can stay on. So AI is heavily used in that process and has been for a decade at least. And yeah, this is...

grueling work, it's traumatic. And so it's a good thing that AI is used in that process. I don't know if it will eliminate the need for human workers there. I mean, at a minimum, you need them to train the AI tools that are being used for this, but it can certainly cut down the amount of human labor. The part we're very skeptical of though, is when these platforms get into trouble for the way in which they try to balance

on the one hand, you know, keeping the platform safe and free of harassment and toxicity and so forth, and on the other hand, also protecting free expression and not cracking down too hard on people's need to speak their mind. And this is the core hard problem of social media, right? Why there have been so many fights over it and why a lot of these platforms lack legitimacy in the eyes of the public.

So it's a very natural tendency, I think, for the Zuckerbergs of the world to say, we're going to use AI to solve this problem. We don't want any human bias in this process. AI is going to make the decision objectively. That, we think, is a pipe dream. These tensions are fundamentally about our values, and there aren't going to be objective mathematical ways to resolve those tensions between values.

When you bring AI into this process, the social media platforms are untrusted and AI is not trusted. And so that's going to only worsen the legitimacy crisis, not improve it.

Siara (52:24)
Do you have any hopes for how AI can genuinely improve society beyond the hype? Are there any areas where you see untapped potential for AI?

Arvind Narayanan (52:34)
I mean, let's start with the tapped potential. There are so many areas in which it already has made things so much better, and self-driving cars is one of them. So in the book, we do criticize these CEOs and researchers a little bit for being wildly over-optimistic about how quickly they would be able to build them. The first prototypes were working well even 20 years ago, but it's taken until now for them to be widely deployed. And now they are widely deployed. Waymo is doing something like...

Siara (52:42)
Mmm.

Arvind Narayanan (53:03)
almost a million rides a month, if I remember that correctly, and it's increasing on an exponential scale. And so I do think that is going to dramatically cut down on the number of road deaths in some number of years. Again, it's hard to predict how quickly this will be widely deployed.

But when it does, and I think it's a matter of when, not if, I think that's going to be an enormously positive thing. But also much simpler things, right? Like the Roombas in our homes to even spell check and autocorrect and various little things that make our physical and digital lives easier. That's the kind of AI that I wish there were more focus on as opposed to these more grand promises and ambitions.

And these general-purpose tools are good for what they are, but it's kind of like putting a buzzsaw into everybody's hands, everybody in the world, all at once, as opposed to putting a lot of thought into building specific tools using these general-purpose technologies. I think that balance is quite off-kilter right now. And if companies were to put a lot more effort into product development, I think we're going to see a lot more useful tools.

Siara (54:10)
Yeah, so about product development: in a recent issue of your newsletter, you mentioned how you'd like to see fewer God-like AI products and more specific, use-case-driven products. Can you tell us more about that view?

Arvind Narayanan (54:24)
Yeah, so the philosophy behind releasing ChatGPT and many other chatbots of that nature is that they're general purpose, and the companies themselves can't predict what people are going to want to use them for. And so they're just going to throw this out there, and people are going to figure out for themselves how these things will be used. And the companies, instead of developing products, are going to spend their efforts just building more and more powerful AI that's going to pretty soon start to be able to automate everything. Sorry about my throat.

Siara (54:57)
No

worries. Is there any specific, I mean, I know that we all use AI every day, whether or not we realize it, but is there any specific AI product that you really love? You might use it daily, or maybe a specific LLM that you trust for a finite set of use cases.

Arvind Narayanan (55:16)
I use a variety of AI products. And I don't mean to endorse one over the other, but one that I really like is Claude's feature to kind of create a little app on the spot. The feature is called Artifacts. And I often find myself using this for work, but also when I'm spending time with my kids, when I'm teaching them something, it turns out that it's often

Siara (55:29)
Hmm.

Mm.

Arvind Narayanan (55:39)
very effective and fun to just kind of build an app to illustrate what I'm talking about. So I was teaching my daughter about fractions, and it took literally just one minute to get Claude to build an app where you can move a slider and it shows you what the fraction is, that sort of thing. And that's a very easy way to visualize fractions. I've built some of these for learning phonics and various other things. And what's so nice about it is that

Siara (55:56)
Hmm.

Arvind Narayanan (56:06)
I don't have to go off, take an hour, and build an app, right? I can do it just in that moment. I mean, even building an app in an hour is amazing. It's something we can do now because of AI that would have been unimaginable a couple of years ago, but this is the next step, right? From an hour to a minute, because it kind of does the whole thing. It can only do it for simple apps, but still, what's great about it is that I can do it in that spontaneous moment, right? My kids are young, so we're talking about fractions or whatever. And right there, I can pull something up that is going to make that interaction more fun. So that's been really great.

Siara (56:10)
Yeah.

it is.
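(For readers curious what a one-minute app like the fraction slider described above might look like, here is a minimal sketch in Python using matplotlib's Slider widget. It is only an illustrative assumption, not the actual Claude artifact, which would typically be a small web app.)

```python
# Hypothetical, minimal sketch of a "fraction slider" visualizer like the one
# described above; not the actual Claude-generated app. Assumes matplotlib.
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

DENOMINATOR = 8  # fixed denominator; the slider picks the numerator

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)  # leave room for the slider below the pie

def draw(numerator: float) -> None:
    """Shade `numerator` out of DENOMINATOR of a pie to show the fraction."""
    ax.clear()
    ax.set_title(f"{int(numerator)}/{DENOMINATOR}")
    ax.pie([numerator, DENOMINATOR - numerator],
           colors=["tab:blue", "lightgray"], startangle=90)
    fig.canvas.draw_idle()

slider_ax = fig.add_axes([0.2, 0.1, 0.6, 0.05])
slider = Slider(slider_ax, "Numerator", 0, DENOMINATOR, valinit=3, valstep=1)
slider.on_changed(draw)

draw(slider.val)
plt.show()
```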

Arvind Narayanan (56:34)
Another one with my kids is just using ChatGPT, but I'm sure there are many other apps for this purpose. We're walking around and they want to know what a tree is or what a bird is or whatever. Usually I don't know the answer. I can just take a picture, and the chatbot will not only tell me what the species is, but also its migratory patterns or whatever.

Siara (56:53)
Mm-hmm. And speaking of kids, I'm wondering if you have any advice for young people, young talent that wants to enter the AI field. Whether they are six years old, because I saw a six-year-old code a website the other day and it blew my mind, or maybe they're in college and they're like, what do I study? What is going to make the most sense? Obviously, we want everyone to have careers that they feel passionate about, but I think some people are not sure about the direction to go in that will actually follow the way of the market.

Arvind Narayanan (57:06)
Yeah.

Siara (57:22)
You have CEOs like Sam Altman actually giving advice, like, this is what kids should be studying. So I'm curious about your take on that.

Arvind Narayanan (57:30)
Yeah, I'll give two thoughts. One for really young people and another for somewhat older people maybe entering college. I think for really young people, and this is a very personal decision, parents have different philosophies on this. But my thought is that kids need many kinds of digital skills. And I think...

with appropriate supervision, they can really start early. And that's really going to help them as they become independent device users when they're a little bit older. So that's both knowing how to use devices, but also avoiding all of these risks with devices, right? I mean, a big one that we talk about all the time is addiction and mental health. To me, both as a parent and based on some of the research, I think the only

solution to that is going to be teaching the skills to avoid addiction. Abstinence is not really going to be a sustainable solution. You can do it for a while, but you're kicking the can down the road and then you still at some point need to learn to resist and overcome device addiction. Everybody has to learn it at some point. And I think in many ways it's actually easier to learn it as a younger child. But again, it's a very personal decision. And more for

Siara (58:41)
Hmm.

Arvind Narayanan (58:43)
a little bit older people who are thinking about skills and the economy, my prediction is that, because of AI, a more well-rounded set of skills is going to be more important than just deeply technical skills. So having a little bit of technical skill, knowing enough, for instance, about computer programming to be able to use AI in order to build apps, but where you're really broad, right? And knowing

what is necessary to build in some particular area, being able to talk to people and understand their requirements very efficiently. So if you have that kind of well-rounded set of skills, that's going to serve you really well, I think. And I'm thinking from the perspective of a software engineer, but whatever your chosen path is, I think breadth is going to be really important.

Siara (59:27)
And then, like, you do have so many positive things to say about AI, but I would guess that you've probably gotten the criticism that you're an AI skeptic, maybe overly skeptical. How would you address that? Because there was a point in the book where you mentioned it's important to be critical of AI so that we can actually see the light of the AI that's going to make a big impact. And I think that's really important.

So take a minute to explain what your overall view of AI is, in terms of how people should feel, whether they should feel excited or scared, if they don't really have any skin in the game and they're not really in the tech industry. What's your take?

Arvind Narayanan (1:00:07)
I very much understand why some people might feel excited, some might feel scared, and any reaction in between. I mean, I don't think it's a skeptical book. I think if people read the book fully, I doubt that they will see it as a primarily skeptical book. In the long run, we're very optimistic about many types of AI, not about criminal risk prediction, for instance, but many of the other things we've talked about today.

Siara (1:00:30)
Thanks.

Arvind Narayanan (1:00:31)
But there's a good historical analogy. When you go back to the Industrial Revolution, to me, it was one of the best things in the history of humanity and led to enormously improved living standards all around the world. But it took many decades to achieve that effect. But in the meantime, it was a pretty shitty period, right? Because it led to mass migration of workers from rural areas to the cities.

where they were living in tenements, in extremely unsafe conditions. There was a lot of poverty. Worker safety was not a thing yet, so people would very often die in workplace accidents. And other labor rights, right, minimum wage laws, those weren't there yet. And so it was the horrible conditions in the immediate aftermath of the Industrial Revolution that actually led to the modern labor movement. So

early on we were making the pie bigger, but we weren't distributing the pie in a way that was just. So I think something similar is happening with AI. We're making the pie bigger, but we're not yet distributing it in a just and responsible way. So during this period, I can very much imagine why people might have a negative reaction, but I do think we have the agency, in some cases individually,

Siara (1:01:29)
Yes.

Arvind Narayanan (1:01:46)
in other cases collectively to change that situation.

Siara (1:01:49)
Thank you for that. Okay, so how can we learn more about your work? I know that there's the AI Snake Oil newsletter, but I'm curious if you and Sayash have been cooking up anything else, and just how can we continue hearing from both of you?

Arvind Narayanan (1:02:05)
Yeah, we have the newsletter. We're both on social media and people should also feel free to drop me an email if they like, if they want to share a thought. I don't always have the ability to reply, but I try. So yeah, really grateful for all the interest that people have shown in the book and in our work.

Siara (1:02:15)
Mm-hmm.

Awesome. Then I ask this question to every guest, and that is: what are you logging out of this year, and what are you logging into? It's completely up to how you interpret it. The intention is just to share, you know, what you want to leave behind this year and something that you're leaning into. And it's perfectly timely as 2025 approaches.

Arvind Narayanan (1:02:41)
Yeah, definitely logging out of all the outrage on social media, I would say. I think many people are feeling that they've had enough of it. So I kind of, I think I finally made that decision. It's been nice. Even simple things like, you know, for me, social media was often the way to...

Siara (1:02:47)
Yes.

Arvind Narayanan (1:03:00)
get the word out about my work. And now we've switched more to a newsletter, and just a simple change in format, I think, really changes the tone of the conversation. You can write deeper, longer things, people aren't dunking on you for clicks, and there's no recommendation algorithm that amplifies the most outrage-inducing posts that people are making. So,

I think those are all small changes we can make. And what am I leaning into? Maybe for me, it's this idea of engaging more with my intellectual adversaries, if you will. So this is a thing we have in the scholarly world called adversarial collaboration, where two people who have very opposite beliefs on something come together.

with a frame of mutual respect and wanting to kind of learn from each other. I've had the chance to do that a few times, and I've really enjoyed it. And again, it's one of the things that you can never do on social media, right? You have to do it in a more intimate setting. And I hope to do much more of that going forward.

Siara (1:04:04)
Amazing. I'm absolutely with you on letting go of internet outrage. I'm still working on figuring it out, but I'm walking away from this conversation with that. Thank you so much for sharing all of your insights with us. This has been highly valuable. Everyone, check out the book, and hopefully we have you on the show again. Maybe we can bring along one of your intellectual adversaries and have a little debate.

Arvind Narayanan (1:04:27)
Wonderful.

Thank you, Siara. This has been really fun.
