On this episode of Invested, Michael hosts Oren Etzioni, the first student to major in computer science at Harvard, venture partner, startup founder, and a prominent AI researcher.
Oren Etzioni is Professor Emeritus of computer science at the University of Washington and the founding CEO of the Allen Institute for Artificial Intelligence (AI2). He is the founder and CEO of TrueMedia.org, a nonprofit dedicated to fighting political deepfakes, which launched in April 2024. He is also a Technical Director of the AI2 Incubator and a venture partner at the Madrona Venture Group.
Please rate this episode 5 stars wherever you stream your podcasts!
Follow Oren on X: https://x.com/etzioni
Subscribe to Invested here: https://content.aleph.vc/invested
Learn more about Aleph: aleph.vc
Sign up for Aleph’s monthly email newsletter: https://newsletter.aleph.vc/
For the transcript of this episode, go to: https://content.aleph.vc/podcast-episodes/oren-etzioni---episode-33
Subscribe to our YouTube channel: https://www.youtube.com/@aleph-vc/
Follow Michael on Twitter: twitter.com/mikeeisenberg
Follow Michael on LinkedIn: https://www.linkedin.com/in/mieisenberg/
Follow Aleph on Twitter: https://twitter.com/mikeeisenberg
Follow Aleph on LinkedIn: https://www.linkedin.com/company/aleph-vc/
Follow Aleph on Instagram: https://www.instagram.com/aleph.vc/
Oren Etzioni:
There was a famous fake image of former President Trump, supposedly on the Epstein plane surrounded by underage girls. And it was meant to be quite a damning picture.
And the famous actor Mark Ruffalo forwarded it to a million followers on Twitter, because he was like, “Oh yeah, that's just the kind of thing Trump would do.” It turned out to be embarrassing for him when it came out that this image was fake. So I really hope, and I really believe that both sides of the aisle, whatever your political views are, you don't want to be duped.
You don't want this election to be decided based on fakes. You want it to be decided in an orderly fashion.
Michael Eisenberg:
Welcome, everybody, back to another episode of Invested. I am thrilled to have Oren Etzioni on with me. In the run-up to the show, I just reminded him that I once kind of obnoxiously came up to him at a conference in Scottsdale, Arizona, and introduced myself to this legend in my eyes, Oren Etzioni.
And I'm just thrilled to have him here. Oren, rather than me listing your achievements, which I may do anyway as the episode goes on, can we ask you to tell us about yourself?
[00:00:29] Oren Etzioni:
It's a pleasure. You know, talking about myself is one of my favorite topics. I was born in the United States.
My mother moved back to Israel with me and my older brother when I was a year old. I grew up in Israel. Then my mother ended up in Australia with her husband. And I came back to the U.S. when I was about 13, 14. And since then I've been here as an academic. I became a professor of computer science at the University of Washington, but also founded, or co-founded, several startups, up to now, where this year I've co-founded a startup that's a nonprofit called TrueMedia.org that fights political deepfakes. So that's the TLDR.
[00:01:21] Michael Eisenberg:
And we're going to talk a bunch about TrueMedia, but I also want to point out that you were the founding CEO of the Allen Institute for Artificial Intelligence, a highly relevant topic.
You've been a venture partner at Madrona Ventures for, I think, two decades. Madrona Ventures, of course, one of the first venture capital funds in the Seattle area, going back to, I believe, the late 90s, if my memory serves me correctly.
Oren Etzioni:
Yes, I've been around that long.
Michael Eisenberg:
And you've received just about every accolade out there.
The Robert Engelmore Memorial Award, a Distinguished Paper Award from IJCAI, AAAI Fellow, the National Young Investigator Award in 1993, and two Test of Time Awards from the ACL and IUI, and I could go on. And so I just want to thank you for taking the time to do this. The thing we start with on the show always is, I ask guests, what is your core value?
Like the most important thing in your values hierarchy.
[00:02:25] Oren Etzioni:
I'll give you two different ones right there. There are more, I guess. First of all, I would say that family and personal things, health, always come first. People who neglect that for other things, again, there can be good reasons, but that's never been my experience.
Professionally, I would say that life is too short to work on a project you're not incredibly excited about. And so, there are so many technical problems, so many fascinating business puzzles, and we have so little time. So I prefer to choose things that I think are really worthwhile, whether it's helping consumers shop better, which has been the topic of some of my startups, or fighting political deep fakes, which I'm doing now. It's always been the case that I get up in the morning and I'm just really excited to get going because it isn't like, “I have to do this, you know, boring stuff, but it's going to be very lucrative.”
It's much more like, I love what I'm doing and hey, it may be lucrative and maybe not, but this is just so amazing.
[00:03:39] Michael Eisenberg:
You know, I often tell the story to my kids. I had a rabbi in the ninth grade. He and I didn't get along that well, but he had one line that I've taken with me for my entire life.
He said, “There's no better expression in the English language than ‘killing time,’ because once it's passed, it's dead forever.” And that had a huge impact on me. Huge impact. And so, I want to actually dig right into TrueMedia. So, we're recording this about 40 days before the U.S. election. The election that's had more money spent on it in the shortest period of time probably of any election before.
It's really the first election since we entered what I would call the full-blown AI era. Or maybe we haven't even gotten to the full-blown AI era yet, but certainly we already have plenty of deepfakes. I've joked to a few people that sometimes reality is funnier than the deepfakes, given some of the politicians around the world these days, but nonetheless, deepfakes are a problem.
And so what I'd like to do is just have you kind of slowly unpack this for us. Tell us about what you perceive as the big drawbacks of these AI deepfakes. I want to frame the problem, or describe the problem, first.
[00:04:56] Oren Etzioni:
So I love how you started this because it is actually a little bit more subtle than people think.
Very little, right? We're so polarized nowadays. Very little is going to take a diehard Democrat and make them a Trump supporter, or take a diehard Bibi supporter and turn them against him, et cetera, et cetera. So what happens, though, is the disinformation operations, whether they're state-sponsored or whether they're what I sometimes call ‘disinformation cyberterrorism,’ they attempt to undercut trust–to gnaw at the edges, to foment divisions, to create fear, uncertainty, and doubt, and just to get people to a place where they can no longer feel like they know what's real and what's true and who's who.
And that is a lot easier to do by just showing people a bunch of stuff where, after a while, you're like, well, gosh, I'm not sure what's real and what's fake, and where you have people showing real things and other people saying, “Oh, you know, that's not real. That’s AI.” So I think the biggest effect is undermining our confidence that we know what's real and who we can trust.
[00:06:22] Michael Eisenberg:
Why is it so easy? Or is it easy–to undermine people's confidence in what is real? Like you said, on the one hand, people are, “I'm a diehard Trump. I'm a diehard Harris. I'm a diehard Republican. I'm a diehard Democrat. I'm a diehard Bibi guy. I'm a diehard anti-Bibi guy.” So if that's the case, why is it so easy to undermine people's trust in what's real?
[00:06:40] Oren Etzioni:
Two key facts or factors. The first one is social media. So it used to be that we would get most of our information from a trusted news source, you know, Walter Cronkite on CBS with his reassuring voice. I'm probably dating myself here. A bunch of folks are like, “Who's this Walter Cronkite guy?” But we all had trusted, high-quality news sources.
Now, many people, particularly young people, are getting their news from social media, from TikTok, and there are surveys that demonstrate this. The thing about social media is that the editorial decisions, if you like, are very different, right? Algorithms are driving them. They're click-driven. More extreme content often gets clicked on more, and you just don't have the same level of quality control.
So that's one piece of it. If your source is social media, you have truth and lies side by side. Maybe you see them in subsequent videos. The second thing is, we are visual animals. When we see something in an image or a video, it's visceral in a number of ways. We tend to react to it. So now the combination of generative AI, which allows people to create credible videos and images that are fake or that are highly manipulated, coupled with the social media stream, is really an Achilles heel for democracy and for our ability to decide what's fact-based and what's not.
[00:08:18] Michael Eisenberg:
I want to ask a hard question before we kind of continue down the deepfake area. So, I do remember Walter Cronkite from when I was a little kid, and when I grew up there was Dan Rather and Peter Jennings in the United States, and others. And I remember the first time I was written about in a newspaper, mainstream media, and it was false. And then I said to myself, “Wait a minute. If this is what they write about, and it's under the masthead that people trust, well how much of the other things I read here are trustworthy?”
And just recently, for example, a report's come out about the BBC that I would say deeply undermines, or should undermine anybody's faith in what the BBC puts out there.
So did Walter Cronkite, or Dan Rather, or Peter Jennings–that was his name, Peter Jennings–actually tell us the truth? Or were there editorial decisions that are not too different from algorithms, but we just didn't know any better?
[00:09:23] Oren Etzioni:
Wonderful, wonderful question. And of course you're right, and we see this so much, particularly in reporting also in the United States about the Middle East and about Israel.
The framing, the slant of the story–it's not even a matter of outright truth or falsehood. But if you report, say, what's happening in Gaza without giving it any context, right, it's just an interview with a person in Gaza who is suffering horrifically, they and their family and so on, then obviously that has a slant to it.
And that does originate from a particular editorial decision. There's no notion of, well, who's responsible for this? Is this Hamas using people as human shields? What led to this? It's just kind of this human interest story that sounds horrible. And this is repeated again and again on both sides, right?
The New York Times, NPR, the BBC are maybe some of the worst offenders. And then on the other side, you know, the Wall Street Journal and others, and of course Fox News, do that as well.
So that is as old as time, right? Probably the snake in the Garden of Eden did not practice full disclosure when he was sharing supposedly factual information with Eve.
That said, we're still in a different place. When I show you, visually, something that just didn't happen–for example, the smiles on the Secret Service agents who were surrounding Trump during the assassination attempt, or images of Iranian planes bombing Tel Aviv, right? Which never happened. Or leaked audio in Slovakia of a candidate during the quiet period before their elections. So I think the thing that's different is, there have always been subtle pulls and pushes in various directions. But there weren't these smoking guns or bombs, to use those metaphors, that show up, that you see without any interpretation, and that are just factually non-existent. They're completely faked. So there is really a big difference.
[00:11:42] Michael Eisenberg:
So, okay, so you believe that the deepfakes created mostly anonymously, not under anybody's imprimatur or brand, are more pernicious than what you would call the framing, or slanting, or verbiage, or even some images produced by mainstream media that put a mask on reality?
Is that a fair assessment? Because you believe–I'm just trying to regurgitate what I think you said–that the images and the videos, and their ability to proliferate on social media, and–I would add, maybe this you did not say–to be socially confirmed by people like me and by my circle of friends, makes them more pernicious in undermining people's understanding of what is true and what is false, what is fake and what is real?
[00:12:31] Oren Etzioni:
Yes. Let me just give two quick examples, though, to drive this point home. One that's not even social media: during the New Hampshire primary in January, there was a fake audio of President Biden. It sounded just like him, and it was robo-called–that's how it was shared–to tens of thousands of voters, and it told them to stay home.
So this is election interference–again, not via social media, not visual, in this case audio–but a very specific and very credible attempt by an individual to influence the election. Second example, which was via social media: there was a picture of the Pentagon being bombed, not on 9/11 a long time ago, but just in the last few years, I think it was 2023, that resulted in a glitch in the markets. This image that looked like the Pentagon being bombed was shared widely, just like you said.
And traders, who are always trying to get an edge on each other, traded the market down very sharply until they realized this was fake. So again, two specific illustrations where, in real time, something going viral or spread in one way or another can cause a very specific and targeted harm.
[00:13:53] Michael Eisenberg:
I'm glad you mentioned that point about the market. I was going to head there. Maybe we'll get back there. I wonder whether deepfakes like the Biden robocall fit into one category, which is what I would call permanent damage, because, you know, a moment passed that you can't get back, which is an election day–versus things like the markets, which recover from these things, because actually our truth system on the outside, whether it's Community Notes on Twitter or hey, we can kind of figure this out, actually works reasonably well.
Do you think about, in the framing of the problem, these kind of temporal moments that pass, versus things that we have a timeline to fix, or do you think it doesn't matter?
[00:14:31] Oren Etzioni:
You're absolutely right. If we delve into this, we can find a very interesting taxonomy, some things that have lasting damage, some things that are temporary attempts to manipulate the market, or manipulate the voters, and some things that are more kind of like erosion, right?
So again, another concrete example just to help us bring this conversation down to earth–there was a parody video of Vice President Harris that Elon Musk shared with 191 million followers on Twitter, but without saying it's a parody. It was a deep fake video. Now, most of the people who saw it, and it was viewed tens of millions of times, might have known that it was fake, but did it affect them subconsciously?
Did it kind of contribute to their confirmation bias? So that's yet another thing. It's not so much a concrete, tangible instance, but part of a flood, an attempt to brand or rebrand candidates or countries and what they do. And of course, I think Israel has been a victim of this, because beyond the slants that you see, say, in the BBC, and beyond the genuine horrific images that arise, there are also fake ones that are viewed, again, millions and millions of times, and they just contribute to a particular narrative, which I think is highly flawed.
[00:16:08] Michael Eisenberg:
Before we dig into the TrueMedia solution, I actually want to ask you a personal question. When you view content online, is your default assumption that it's fake or real?
[00:16:18] Oren Etzioni:
When I view content–and I think this is a very important point for everybody to internalize, because again, we have a free tool at TrueMedia.org that people can use, but a lot of people won't–what I do when I look at something is, first, I monitor my reaction. And the more outraged I am, the more of an emotional reaction I have, the more I actually slow down and say, “Okay, how do I persuade myself that this is bona fide?”
So that's the first thing, is monitor–
[00:16:49] Michael Eisenberg:
You are a true student of Daniel Kahneman. Well done. The late, great Daniel Kahneman.
[00:16:52] Oren Etzioni:
Yes, he was actually a friend of my parents at Hebrew U back in the day, but that's another story for another time. And then the second thing I ask myself is, do I know the source?
Not the purported source, like it'll say CNN on it or something, but do I actually know who shared this? And only then do I start to think about, do I believe it? And so on. So I am very careful with things that are very powerful.
[00:17:29] Michael Eisenberg:
Tell us about TrueMedia, what it does, what you're trying to accomplish, whether you think you can be successful, given that this cat is way out of the bag.
[00:17:39] Oren Etzioni:
I came to this problem in 2023. I was actually in a small meeting with President Biden and a number of members of his staff, Governor Newsom from California, some AI folks, and we all brought in some of our moonshot projects and some of our concerns. And I'm actually a big AI optimist.
I think there's a lot of great things we can do with AI and, you know, we can talk about that. But in this context, I came away actually quite afraid, because Tristan Harris, who's a luminary warning us about the effects of social media, was outlining some of the potential risks and scenarios around deepfakes.
And I was thinking, I can just imagine a situation where, 48 hours before the election, or even on election day itself, you see a video of one candidate or another being rushed to the hospital. You see the talking heads get on TV and say, “It doesn't look good.” You see reports of an active shooter at a polling station.
You see some things that can have lasting damage, because before they're debunked, they change people's voting behavior. And of course, once the election is over, the election is over. And so I said, “Well, let me dig into the tools that are available to the general public, and to election officers, and to the mainstream media organizations, to rapidly debunk this.” And I quickly learned that these tools are not out there.
There are three-letter agencies, as we call them, or major corporations like Microsoft, that have their own tools, but they're not widely available. And so I launched into the problem of building a deepfake detector that is available to the public. Fast forward to now, as we're getting close to the election: just a very short while ago, last week, we announced public availability of our tool on TrueMedia.org.
Anybody can go upload a social media URL. If you have an account, you can upload a video or an image. And you can assess whether it's fake or not. The social media URL, by the way, I should explain: if you upload something from Instagram, Facebook, TikTok, etc., we will find the video or image at that URL, we will analyze it, and we will provide you, in real time, within a minute or so, our assessment of whether it's real or fake.
So, I got into it to help solve a problem, particularly in 2024, and particularly in the run-up to this, one of the most consequential elections in history.
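[A minimal sketch of the workflow Oren describes above: submit a social media URL, let the service resolve and analyze the media found there, and poll for a verdict within roughly a minute. The base URL, endpoints, and field names below are illustrative assumptions only, not TrueMedia's actual API.]

import time
import requests

# Placeholder service; this is an assumed, illustrative endpoint, not TrueMedia's real API.
BASE = "https://api.example-deepfake-checker.org"

def check_social_media_url(url: str, timeout_s: int = 90) -> dict:
    # Submit the URL; the service is assumed to fetch the image or video found at it.
    job = requests.post(f"{BASE}/analyses", json={"url": url}).json()
    job_id = job["id"]

    # Poll until the automated assessment is ready (roughly a minute, per the description above).
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = requests.get(f"{BASE}/analyses/{job_id}").json()
        if result["status"] == "done":
            return result  # e.g. {"verdict": "likely_fake", "confidence": 0.93}
        time.sleep(5)
    raise TimeoutError("analysis did not finish in time")

if __name__ == "__main__":
    print(check_social_media_url("https://www.tiktok.com/@someuser/video/123"))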
[00:20:26] Michael Eisenberg:
Do you think consumers are using it, or is it journalists using it to figure out what to do? Or political operatives?
[00:20:32] Oren Etzioni:
It's been very interesting. Most of our users fall into two categories.
They're media organizations, fact checkers. For example, during the Indian election, we were able to debunk some fakes in India. And we have what I would describe as a somewhat haphazard assortment of concerned citizens. Some people are like, “Ah, I'm not going to worry about it. Yeah, I've got better things to do.”
And other people, and I couldn't tell you who's who, are extremely worried about it and get very involved in uploading media and checking it, highlighting things that they think are potentially fake. So we have both of these. We have had people from political parties and operations using the site, but fewer. They’re more kind of, boots on the ground, get the voters out. They're aware of the issue, but I think it's not their most pressing issue, frankly.
[00:21:36] Michael Eisenberg:
How many links or objects are you getting in a day?
[00:21:38] Oren Etzioni:
It really varies, but I would say it's in the thousands per week on average, and growing very quickly.
Some of it also comes from places like Telegram or WhatsApp that aren't open to the public. Some of it is very fanciful. It's not all political, you know. There are dating scams. There's all kinds of stuff that we see. And the data that comes in helps us to tune our algorithms and get better and better at this detection problem. I want to make one more point, though, that's really important, because you're talking about the efficacy of this and the impact. When I started, I was very concerned about the number of items being uploaded. My concern has shifted to the number of eyeballs that see it, right?
So one video, right, shared by Elon Musk, or one fake image of Taylor Swift shared by former President Trump, can have a huge impact, more so than 10,000 videos that are only seen by a small number of people. So I realized that when you look at the number of eyeballs–that is, the number of people who view this on a weekly basis–it's in the millions every single week.
[00:23:11] Michael Eisenberg:
Well, that's impactful. Do you have a favorite object that TrueMedia has caught, or a most egregious story? Take your pick.
[00:23:21] Oren Etzioni:
If you go to our website, you see what we call notable deepfakes, all the way from a video claiming that Ronaldo is pro-Palestinian, which is a fake video, to the example of the Secret Service folks surrounding Trump after he was shot and smiling, which was a manipulated image–they were grimacing, but it was changed so that they were smiling.
And I would also say that we contributed to a takedown operation, where another organization called NewsGuard used our tools to identify sites and bots that were sharing a substantial amount of disinformation in Europe. So these are some of the things that come to mind, and there is a variety, from an actual, seemingly state-sponsored disinformation operation to something that's just meant to promote conspiracy theories and create distrust.
[00:24:31] Michael Eisenberg:
Oren Zamir, your colleague at TrueMedia, said I should ask you: why did you make TrueMedia a nonprofit? And by the way, this ties back to something we talked about before we got on air, which is the importance of business models to kind of sustain a lot of this innovation. So I’d love to hear why you made it a nonprofit.
[00:24:50] Oren Etzioni:
First and foremost, it was extremely important to me to serve a public mission. Having done some startups, again, I feel fortunate that I'm comfortable. “Eizehu ashir hasameach b’helko.” The person who has enough, has enough. So I'm not–
[00:25:14] Michael Eisenberg:
The person who has enough is rich.
[00:25:17] Oren Etzioni:
Thank you. Thank you for the better translation. So I'm rich, not because I have so much money, but because I have enough. Exactly right. So being that, I was focused on the public mission, and it was clear to me that the lowest friction is when you do something that's high integrity and that's free.
People don't have to fulfill purchase orders and so on. As we've worked with major media organizations and social networks, we've learned that they do want enterprise-grade packages. So I think that there is the potential. And of course, there are companies like GetClarity in Israel, and Reality Defender and Hive in the U.S. There are different organizations that are venture-backed and building very interesting businesses. And so I definitely don't discount the commercial potential. However, our mission is different, and was different. And it was to urgently–because a year sounds like a long time, but in the world of technology, it's very fast–get this tool in people's hands before the upcoming election.
So that was our focus. And I was fortunate to have Garrett Camp, who was the Co-Founder of Uber, to back us out of his philanthropic organization, camp.org. So I had the benefit of not having to score business deals and achieve revenue numbers, and just focus on getting the tool out.
[00:26:57] Michael Eisenberg:
Given how polarized things are–and I have no idea what Garrett Camp's politics are, I don't know him–do you wonder or worry that somebody perceives TrueMedia as partisan because of whatever the donors’ political views are? Is a business, on some level, cleaner than that?
[00:27:15] Oren Etzioni:
I do worry that people will perceive TrueMedia as partisan.
And that's one reason at the top of our website it says, in big bold letters, “Nonprofit, nonpartisan, free.” Nevertheless, we know that, like so many things in the United States and elsewhere, disinformation research has been vilified by groups in Congress and has been politicized.
And I think that's unfortunate. So we see, and let me give a quick example–we see examples of disinformation that's meant to impugn the people on the right. There was a famous fake image of former President Trump, supposedly on the Epstein plane surrounded by underage girls. And it was meant to be quite a damning picture.
And the famous actor Mark Ruffalo forwarded it to a million followers on Twitter, because he was like, “Oh yeah, that's just the kind of thing Trump would do.” It turned out to be embarrassing for him when it came out that this image was fake. So I really hope, and I really believe that both sides of the aisle, whatever your political views are, you don't want to be duped.
You don't want this election to be decided based on fakes. You want it to be decided in an orderly fashion. The people who really benefit from undermining trust in the election process in this way are the people who are anti-democracy. And so I worry about it, but I can assure you that we are firmly nonpartisan, and the tool doesn't discriminate.
You upload something that's fake or that's been manipulated. It doesn't check the politics. It checks the statistical signature of what's there and gives you an automated assessment that is nonpolitical. It's technical. It asks, ‘Was this manipulated? Was this synthesized by AI?’ And for that reason, actually, we shy away from claims about the truth, or even from statements about facts.
Like, is this a fact about the budget deficit? Or even articles, right, because text is so malleable. We really formulate a technical assessment: was AI used to create this image, video, or audio?
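[As a rough illustration of what a nonpolitical, technical assessment of that kind might look like, here is a minimal Python sketch that combines scores from several hypothetical detectors into a single verdict. The detector names, thresholds, and outputs are assumptions for illustration only; they are not TrueMedia's actual models or logic.]

from statistics import mean

def aggregate(scores: dict[str, float]) -> str:
    # scores: each detector's estimated probability (0.0 to 1.0) that the media
    # was AI-generated or manipulated.
    avg = mean(scores.values())
    if avg >= 0.85:
        return "substantial evidence of manipulation"
    if avg >= 0.50:
        return "uncertain"
    return "little evidence of manipulation"

# Hypothetical detector outputs for a single uploaded video.
print(aggregate({
    "face_swap_detector": 0.91,
    "gan_artifact_detector": 0.88,
    "voice_clone_detector": 0.79,
}))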
[00:29:42] Michael Eisenberg:
Before switching gears to really a deep dive on AI, I have to ask you, I don't know, maybe an impolitic question.
But you grew up part of your life in Israel, and you reference either pro-Bibi or anti-Bibi; you've spent most of your life in the United States, and you keep talking about both sides of the aisle, and how people are so rooted in what they believe. Is there any way forward to get people to actually focus on issues, to be more malleable in their thinking, to have actual policy conversations about the future at this time of such rapid change?
Or do you think we're just on a path to devolution, which is that democratic countries–and it's happening in many of them right now–will just, over time, devolve into more atomic units? Whether it's the United States coming apart into some number of states or regions; Israel coming apart in some way; and other democratic countries.
Is that just the course of history we’re on? Because obviously, by the way, if you look at the last 20 years, there are more countries in the world now than there were 20 years ago. There's been devolution, right? It happened in Bosnia, it happened in the Czech Republic. We're in kind of a phase like that, it feels like.
[00:30:56] Oren Etzioni:
I'm not a political scientist, but since you asked, I'll share with you my relatively uninformed view. I would say that certainly we're on a trajectory where a number of things, and certainly polarization and conflict, seem to be getting worse–are getting worse, I think, by most measures. I do want to highlight that these are nonlinear processes, right?
So sometimes things have to get worse before they get better. I would be very hesitant to extrapolate linearly. That's a mistake, often. And so it remains to be seen how we respond. There are attempts to rein in the influence of social media companies, social networks, to hold them more accountable.
I think it remains to be seen what will happen in the medium term. That's what I would say, but yeah, in the short term, I'm extremely worried about what we see everywhere. I just saw a video from somebody on Twitter. He said, “I'm going to keep posting the video of the mayhem on January 6th, right, in our nation's capital, at the Senate–a mob storming the Capitol building–to remind us that this can't happen again.” And whatever my political views are, and whoever wins, we cannot afford to devolve into civil war. And of course, neither can Israel. So, without taking a side or other, I think it's safe to say that we should all be opposed to civil war, and we have to uphold democracy.
[00:32:55] Michael Eisenberg:
So switching gears now to a deeper dive on AI. So you have been studying AI for decades. I think I remember, at a speech of yours that I heard, you talked about agents way back then, and now they're back to being called chatbots or agents, you know, they have different names. The first time, by the way, I read about agents was in the second half of the 90s, in a paper written by a researcher at Hebrew University named Jeff Rosenschein, who you may or may not know–the first time I read about it, he had a company called AgentSoft, in like 1997. What I’d really love to hear is, you know, you have this incredible, decades-long perspective–what about your perspective on AI has changed?
How do you frame AI today? Like, AI's been around for a while. The moment we've had recently is the LLM moment, not the AI moment. And I think that's kind of muddled out there in pop culture. So I wonder how it's changed, how we got here, and where you think, at a deep level, we're headed. I know it's a big question.
[00:34:01] Oren Etzioni:
Sure. How much time do you have? But let me touch on it, and then we can dig deeper where you like. First, Rosenschein is an old friend and colleague. I would say that my most wonderful tour of Jerusalem was actually given to me by Jeff. There's nobody like him; he just took me to all these nooks and crannies and beautiful spots and neighborhoods.
Just amazing. He's terrific. And we were working on different technical approaches to agents, back in the nineties. The reason that I stopped working on that problem is because I realized that even if we build these powerful agents, our motto back in the day was, “You tell the agent what, and it figures out how.” You give it a high-level goal, and it figures out how to accomplish the goal, which of course would be tremendously powerful. We don't have to delve into all the little details. The software can do it on our behalf, right? And that's the idea of an agent, whether it's a travel agent or another kind of agent, it does things on your behalf.
I realized that to do that well, we need to have a language to speak to the agents. And we delved into research on how we would communicate with our agent to avoid the kind of behavior that’s actually destructive, where you tell the agent, “Hey, reduce the disk utilization on my computer by 20%, the disk is getting full.” And it says, “Yes, sir.” And it comes back and says, “Okay, I've deleted your important proposals and datasets. And by the way, there was a bunch of backups of these, but those took a lot of space. So I deleted those too. Didn't I do a great job?” It's like, “No, no, I didn't mean you….”
Right? But it's too late. So we really need to think about that. Fast forward to today, where, as you aptly said, we're having an LLM moment. One huge thing that's changed–and it's really quite miraculous, actually, particularly to those of us who've been in the field for decades–is that you can now actually talk to your programs.
And again, one has to be very careful using anthropomorphic words, but in a very real sense, it does understand you. It comes up with responses that are very helpful and informative. It has plenty of problems. And now actually one of the hottest trends around LLMs is to turn them from chatbots, from things you converse with, to systems that can actually take actions on your behalf, and can formulate multi-step plans, which you need to do even for something as simple as, “Hey, you know, plan for me, or book for me a short family vacation,” or what have you.
And so agents are back. And one of the biggest changes is that now we can communicate with them in a language that, to a certain extent, both sides–the agent and you–can understand. And thankfully that language is natural language, English or Hebrew, what have you, because that's the language we're most conversant in.
[00:37:14] Michael Eisenberg:
And so going back to what changed about your perspective–you look now at the LLM revolution, and it's accelerating at a very rapid pace. You had concerns about agents doing things that might be deleterious or destructive because they didn't get it. And now we've got what I would say is the opposite problem, the opposite challenge, which is that these things are getting really good, really fast.
By the way, I'm publishing a new book in a week; it's coming out, it's gone to the printer. A buddy of mine called me last night and said, “Hey, I got this online publishing thing. Do you want to translate it into English and Spanish?” Because I wrote it in Hebrew. And I said, “Yes.” This morning I got a 270-page file that I started reading through, and my jaw dropped. It was 95 percent of the way there–it includes biblical verses, footnotes, the whole nine yards. It was really 95 percent. You know, I've paid tens of thousands of dollars for translation. So I'm really interested in, okay, Oren Etzioni says, “I used to be sanguine about this, now I'm worried about it. I used to be worried about this, I'm not anymore. I used to think this was the limit; it's not–you know, the ceiling is a lot higher than I thought.” I know it's a big question.
[00:38:35] Oren Etzioni:
Well, let me first say that one really needs to answer it in a more granular fashion. So I always believed, since I got into the field in, gosh, the 80s, right? I like to say I got into big data–which is another name for this whole body of technologies–when it was just little data. It's definitely grown a lot. The concern was always this existential concern, right?
Will it take over humanity? And the question was always, can you actually build human-level intelligence–AGI, as it's sometimes called, artificial general intelligence, all these things? I always believed that, yes, we would be able to build it. The question would be, how long would it take? What shape would it take? Et cetera.
And so I would say that a lot of what's changed is the underlying technology and the timelines. But the fundamental questions have not changed. And my fundamental beliefs have not changed. So the belief is, yes, we can build highly intelligent software and so on. And on the question of whether it's a threat to humanity, I believe that AI is a tool, not a being, and I like to highlight that intelligence is not the same as autonomy, right?
You can have a very powerful technology. In fact, we do, with ChatGPT and so on, but it just sits there until you give it the next query. And we can even have self-driving cars, which are now looking a lot closer with some of the experiments and the success that Waymo has had in San Francisco and other cities, right?
They're now available to the public. Well, you have a self-driving car, but you decide where you want to go. It's not like you get into the car and say, “Hey, I'd like to go to an ice cream place,” and it says, “No,” or “You really should go to the gym, I'm taking you to the gym.”
That's not how it works, right.
[00:40:38] Michael Eisenberg:
In Israel, that’s what the taxi drivers tell you.
[00:40:39] Oren Etzioni:
Well, right. The taxi drivers, the humans, have autonomy. So I just think it's extremely important. And again, this issue gets more complex, but my one-line summary, if people take one thing away from our conversation about AI, is: AI is a tool, and the choice is ours. As a society, as individuals, we get to decide how it's used.
And there's a very important conversation to be had. How does it affect privacy? How does it affect jobs? How does it affect democracy? But it's not some being that is asserting dominance over us, that's going to tell us what to do, and that we have to fight. We need to have a broad societal conversation to decide how we as a society use AI.
[00:41:29] Michael Eisenberg:
What is the most interesting, and what I would call “out there,” opportunity that you can think of right now, where you say AI is going to catapult us in ways that nobody can even think about or imagine right now?
[00:41:43] Oren Etzioni:
I really like to think about AI as a way to solve humanity's thorniest problems. It's a very general technology. It's an enabling technology. So you pick the problem: the next pandemic, doctor overwork and physician error in hospitals, accidents on our highways, right? In the United States, there are 40,000 highway deaths each year and more than a million injuries. With each problem like this, where many lives are lost, AI could really help. And there are concrete projects under way to do that, right?
The much more rapid design of vaccines–Moderna is famous for having used AI. There's going to be a next pandemic, whether it's natural or something that escapes from a lab–who's going to help us solve that? Climate change? We need carbon sequestration, right? We need to get carbon out of the atmosphere. So we have a huge set of very difficult problems. And in each of these cases, we can build AI systems that would help us solve these problems better and faster.
[00:42:56] Michael Eisenberg:
And when you think about the near- or long-term risks–be specific, not general–what worries you?
[00:43:06] Oren Etzioni:
I'm obviously worried about the impact on democracy, very much in the short term, but also longer term as institutions erode. There's the impact on the legal system, right? There are rules of evidence. And we need to figure out if this video or photograph being submitted into evidence has been manipulated, and so on.
So there's, I would say, AI-based forgery. I wrote an article for the Harvard Business Review in 2019, which is eons ago, right, in AI time, entitled “AI-Based Forgery.” I think AI-based forgery is a problem that we need to think about very, very carefully. I think the impact on jobs is perhaps overstated.
These tools do improve efficiency, but you know the famous line: “You're not going to be replaced by AI. You're going to be replaced by another person who knows how to use AI, if you don't.” So I think that there will be impact on jobs, but I think it can be managed effectively to increase productivity.
Those are some of the key things. And my focus, what I would love to work on next, is actually AI as assistive intelligence. We're all getting older, right? There's this kind of gray boom. And people, as they get older, tend to develop disabilities. I'm particularly thinking about difficulty in seeing, right? But there's difficulty in hearing, difficulty in moving. With each of those things, AI, particularly based on LLMs, can help. For the first time, we can have software that can help to see on your behalf in a general environment, right? That would be a tremendous help to those of us who, over time, start to develop difficulties in seeing.
And so we can talk about that. Overall, I see huge potential benefits of AI that are very concrete in assisting, in saving lives on the highways, in the hospital.
[00:45:22] Michael Eisenberg:
We actually have an investment in a company called OneStep that is using AI to help people, particularly older people, rehab from knee replacements, hip replacements, and other leg injuries.
And it's stunning what you can do on one mobile phone with no other devices, just using the computation on the phone, and AI in the cloud, to help people walk better again. It's actually stunning.
[00:45:44] Oren Etzioni:
More power to you. You mentioned the Allen Institute for AI, which I created and ran for the late Paul Allen, from inception to, you know, 250 people and all that.
Our mission was AI for the common good. And that wasn't just, you know, lip service. We were pursuing AI to help humanity. So kudos on that investment.
[00:46:10] Michael Eisenberg:
So, the Allen Institute has a branch in Israel. What are you working on there?
[00:46:17] Oren Etzioni:
So as a nonprofit, we're an open book, and we publish, so people who are interested can go and look.
Our focus there is on natural language, and specifically natural language in service of science and medicine. Again, let me give you a very concrete example of a project. A lot of people end up needing chemotherapy. We all know somebody in our family or friend group, etc.–more than one person, right–who's had to have chemotherapy.
Chemotherapy is remarkably primitive, because it's a cocktail of a relatively small number of medications, medications whose effects have been studied. The doctors are very hesitant to use more complex mixtures, because it's costly and slow to understand their effects. Well, AI scouring the medical literature and analyzing it, using natural language to figure out what's going on–potential effects and side effects–and to spur the development of more sophisticated chemotherapy, is the kind of thing that can be done using the tools developed at AI2 Israel. It's led by Yoav Goldberg on the academic side and Ron Yachini, who you might know from the venture capital world, on the business side, and they are doing just phenomenal things in natural language to help science.
[00:47:49] Michael Eisenberg:
You know, I find, I actually got this tip from a friend of mine, Daniel Schreiber, the CEO of Lemonade, I don't know how long ago, which is before you go to any doctor or check with any medical professional or anyone else, you should query a bunch of AIs, and I found it incredibly helpful.
It's remarkable what you can kind of scope out. You have to be careful not to become a hypochondriac by nature, although I'll argue, by the way, that the hypochondria someone could develop from Google searches is far worse than the hypochondria you develop from an AI exchange. But to your point, I think it’s been absolutely remarkable.
I want to switch gears for another second on AI. I had a meeting actually today where a senior researcher in this field said to me, “You know, I think the world of junior programmers is coming to an end. And instead, you're going to have people whose jobs are to manage coding bots, and that's actually going to be a job. And not an easy one, but a well-paying one, a remunerative one.”
You were the first student, I understand, to major in computer science at Harvard University. Is that correct, by the way?
[00:48:56] Oren Etzioni:
Yes, again, this was a paper-based system and Harry Lewis was the first person who signed my paper form, and he looked up and said, “Hey, you're the first,” because Harvard had a major in Applied Math and so on.
So, yes, I was the first person to, what's called there, concentrate–that is, to major–in computer science.
[00:49:20] Michael Eisenberg:
Okay. Question one is, what is the comp-sci degree of 2025? And then the second question, or second part of that, is: what are all these people who study coding, and are kind of juniors trying to break in, going to do?
How do they ever become seniors if, you know, the bots are coming for them?
[00:49:40] Oren Etzioni:
I don't think that the bots are coming for them. One of my favorite sayings that applies very much to AI is, ‘Never mistake a clear view for a short distance.’ And so what that means in this context is, yes, we can see on the horizon tremendous capabilities, including programmers, AI programmers–but even a junior programmer right now is a lot more sophisticated than any coding bot.
And I'll explain why in a second. And so again, what they need to do is, first of all, they need to understand the basic capabilities. Just because we have calculators doesn't mean that we don't learn to multiply and even do long division. I can't tell you the last time I did long division, because I do use a calculator, but it's very reassuring to still be able to do these things myself.
But the most important point is that these tools will accelerate their productivity and will solve some of the thorniest problems, but it is still the case that mapping from a high-level need, right–“Okay, I need you to build a package that does this, that, and the other thing”–to specific code that's tested, that's robust, that doesn't contain hidden bugs, and most importantly, actually does what the person wanted, is still very much a human skill. Because what people ask for is often ambiguous and incomplete. I say, “Build me a website,” even something as simple as that, “build a website for my company.” Okay. Well, should it have videos in it? What content should be authenticated?
What should be available to the public? Is it aimed at professionals or consumers? My statement is often very vague. There's a process we have to go through to elicit what I really mean, what the requirements are to build it. And then, what should it look like on mobile? Are you aiming at an international audience?
So it should be a very light site that loads very quickly. Or are you aiming at people sitting in their office in San Francisco, where it can be a very heavy site with very cool virtual reality, and videos and so on? It's an illusion–and I'm giving these intuitive examples–but with any programming task, there are many questions and clarifications you need to work through and many decisions that a human needs to make.
It's not just spitting out, spewing out, a bunch of bits that the compiler is able to compile. So, a useful tool. Not a replacement even for the most junior of programmers.
[00:52:36] Michael Eisenberg:
And what's the compsci degree for 2025?
[00:52:38] Oren Etzioni:
I'm not sure I understand your question. Are you saying, if I'm getting my degree–
[00:52:48] Michael Eisenberg:
You were ahead of your time, we can argue. I don’t want to argue. You were definitely ahead of your time, with your comp-sci degree, many decades ago. You were ahead of your time in AI. If somebody wanted to be ahead of their time, at the forefront of what's going to be the future, what's the equivalent degree? Or, by the way, is it a degree at all?
[00:53:11] Oren Etzioni:
Got it. Got it. So first of all, I would definitely say that studying computer science, AI and data science in 2025 is extremely forward-looking, right? We're still on an exponential, maybe not Moore's law, but other exponentials.
And so it's just an arena that's exploding. Another one that I would consider is biology, because biology is generating tremendous data. There are tremendous opportunities there, both to save lives and to create new molecules, new drugs, and so on. So I would be studying those things. I think brain-computer interfaces are something that we really haven't had in the past, but that over the next 25 years Elon Musk and others will be delving into. It's extremely, extremely valuable. On this question of, do you have to have a degree, or can you, as Peter Thiel and some others have advocated, just drop out of school–“I'll give you a hundred thousand dollars, do a startup”?
I think for most people, having the degree is still extremely valuable. Remember that early in your career–always, but especially early in your career–you want to maximize learning, right? So the question becomes, how could I learn the most and the fastest? And not just learn details that are going to be obsolete, maybe even in two years, but learn fundamental concepts, and fundamental constructs of critical thinking, of being able to express myself, and so on.
And I do think–and again, maybe I'm biased because I've been a professor for so many years–that a high-quality B.A. or B.S., a high-quality degree program, is still extremely valuable.
And the worry about jumping into a startup is that you become knowledgeable on a very, very narrow swath of skills and ideas. And then when that startup is over, where do you find yourself? What's the long-term play? If you're Peter Thiel, yeah, it doesn't matter. But most of us aren't.
There's what's called the long tail. So for 99.9 percent of the population, which includes you and me, and everybody who's not Mark Zuckerberg or Bill Gates, who dropped out of college to become billionaires–and by the way, they’re autodidacts. They learned a tremendous amount without the degree program. But the degree program is a huge, huge help.
[00:56:04] Michael Eisenberg:
You mentioned students, and teaching, and being a professor. So I pinged Eric Horvitz, who's been your colleague since the 80s, a fellow AI researcher, Chief Scientific Officer at Microsoft. I said to him, “What do I have to ask Oren?” He said, “First of all, I should tell you that Oren is a fabulous human being and a mensch. But you should ask him, to start with, how do you care for and mentor your students?” And I thought that was just a great question. Because I think it speaks to how we don't just transmit knowledge, but we transmit values and critical thinking skills to the next generation of people. So I'd love to hear, what is your secret? He was raving about it.
[00:56:42] Oren Etzioni:
Well, first of all, thank you for the kind words, and thank him. He's a mensch, and for years in my talks I would quote him about how doing AI is actually a moral obligation, because we can save lives using AI, as I was saying earlier, right? The opposite of the kind of view that AI is a very negative force for our world.
[00:57:11] Michael Eisenberg:
I want to interrupt you there. I love that comment. I want to restate it and pin you on it. We have a moral obligation to engage in AI. It can save lives, make people's lives better, improve people's health, etc. I think that's an incredible framing. I want to understand though, at a deeper level, because it implies then also that people who are trying to stunt AI, or slow it down or regulate it are taking an immoral stance.
[00:57:47] Oren Etzioni:
I think that the moral obligation is to engage with it, not to try and shut it down. So I wouldn't go so far as to call them immoral, but I think the people who want to shut it down are making a mistake. They can be well-intentioned, but they're making a mistake. The people who want to promote responsible development of it are looking for the right balance–because look, it's a power tool, another metaphor for AI. And with power tools, you need to use them in the right way.
So those who want to promote responsible use of AI are very reasonable. And then we can get into the detail of how do you do that without losing the good.
The people who I do think are immoral–or at least, immoral is maybe not the right word, but damaging–are the scaremongers, the fearmongers, the AI alarmists who are just screaming, “The sky is falling. AI is going to take over humanity.” And they have a far-fetched, speculative argument, you know, that AI will take over humanity and keep us as pets.
And they use that to slow down very concrete developments, like in AI, and medicine and elsewhere. So yes, I do have some concerns about the most extreme groups, fighting against AI as an existential risk.
[00:59:22] Michael Eisenberg:
I've written about and talked about it–I view utilizing AI, and improving it, as a religious obligation as well. That comes from the idea that if people are created in God's image, we have a responsibility to use whatever powers we have to make their lives better and to take responsibility for other people. And because there will be, perhaps, a gap between people onboarding to the AI world and those not onboarding, it behooves those of us who have access to and interaction with AI to engage in it. Interestingly, there are kind of the doomers on the one hand, and then, I would call it, there are a lot of Luddites–that's a bad term; I think it's uninitiated more than anything else–on the other hand. What do you think is the responsibility to get the 99 percent of humanity that may have heard of ChatGPT, but hasn't really engaged with it, on board, and what can be done about that?
[01:00:18] Oren Etzioni:
Wow. First of all, I do want to say that I'm not religious in the observant sense, but a lot of my values are very much anchored in Judaism. So for me, as I was mentioning, the most exciting opportunity in AI is to save human life and the sanctity of human life. “Mi shematzil nefesh achat k’ilu hitzil olam um’lo’o.” “You save a single life, it's as if you save the whole world.” Very core Jewish value.
So a lot of my thinking does come from the notion that we really have these opportunities to save lives in these different ways. I actually think that the biggest service, in the perspective of history, of ChatGPT has been to raise awareness of AI. So many of us have been watching deep learning and associated technologies gain capability over time. And some people almost discounted ChatGPT as, “Yeah, this is just a scaling up of what we already know. This is not a new thing.”
The sense in which it was a new thing is the wake-up call to hundreds of millions of people, literally, about the power of AI. AlphaGo, right, was that for many people, because they thought, you know, AI could never play Go, and ChatGPT was that for many other people, who could suddenly interact with it and see how powerful it is.
So I think now we're actually on to the next phase. It's less about making people aware that AI exists, and more about making them understand what it is, what it isn't, and what are the questions that, as a society, we ought to be debating around AI: policies around things like universal basic income, and things like what the appropriate regulations and strictures for AI are.
And those are very complex policy topics, and policy is so thorny. It makes me glad that I'm a technical person slash business person. I don't really have to get into the nitty-gritty of how we formulate successful policies. I know how not to do it. I would not do it the way that the EU has done it. I think that's been very faulty, although I like to say that the road to regulatory hell is paved with the good intentions of the European Union. So they have good intentions. They've made huge mistakes, and I hope we can do better elsewhere.
[01:03:06] Michael Eisenberg:
But how would you onboard, you know, 300 million Americans, nine and a half million Israelis, and I don't know, a few hundred million Europeans who are not engaged yet in a real way with this AI world?
What would you do educationally, culturally, to kind of give people from this bounty to help better their lives and save lives?
[01:03:27] Oren Etzioni:
I would do two things. In the extracurricular realm, there are wonderful organizations like Code.org that have taught people outside of school how to write code, and now how to engage with AI systems, and they just help to raise AI literacy. So I would say we should have a campaign so that people understand–whether they're technical or not–that they need to improve their AI literacy, and that it's easy to do, right? If you have not spent at least 10 hours cumulatively playing with ChatGPT or the equivalent, you're doing something wrong, whether you're a medical assistant, or a home worker, or a kid.
And of course, most kids have. So I would say that's that piece. The second piece is in school, because formal schooling is still where a lot of us get much of our knowledge. We have to up-level the skills of the teachers, right? They need to become conversant in AI, and we do need a crash program to make sure that the schools, the teachers in elementary school, in high school, everywhere, understand how to teach AI, so that they can do it instead of shying away from it.
[01:04:54] Michael Eisenberg:
We're getting close to the end of our conversation, and our time is almost up. I'd like to talk to you for two hours, but I need to ask three short questions. Before that, I want to give you the opportunity to answer the question I cut you off in the middle of, which is, how do you mentor and coach your students? Eric Horvitz’s question.
[01:05:12] Oren Etzioni:
Again, he's so wonderful to pose it like that. I go back to, again, the Chazal saying, “Eizehu chacham halomed mikol adam.” “The wise one learns from everyone.” And so first and foremost, I look at people I work with as people. I don't see them as pawns on the chessboard or means to advance my career.
I'm thankful that I've had a good career, but it's always been based on taking a collaborative approach, always based on, what can I learn from you? And again, another Jewish saying is, you know, “I've learned a lot from my teachers, and even more from my colleagues, but most of all from my students.”
So when I'm working with a person, I think of them as a person. And I think, what can I learn from you? What can I teach you? And how can we have a wonderful relationship? And with that mindset, I think we're poised to work together to have a great relationship.
Michael Eisenberg:
Amazing.
Oren Etzioni:
It doesn't always work, but again, when it doesn't, you know, we can part as friends. But when it does, it's really wonderful.
And so I'm very proud of my students. You mentioned Oren Zamir–he was my PhD student. Of colleagues like Eric, who I've worked with on papers and other projects over the years. And of AI2 Israel, which I started and am so admiring of, and which does so much. And some of your listeners will be like, of course you're saying it's about people.
But to us technical people, this is a revelation. Think about the people! Think about the relationships, something all too easily missed by computer scientists.
[01:07:04] Michael Eisenberg:
Speaking of relationships, Eric mentioned, and I hope I'm not crossing a line here, that you had a very close relationship with your father.
And he sounds like he was a very special man. If you feel comfortable, I'd love to hear more about your father, your family, and how they've influenced you, your motivations and passions. You mentioned at the beginning that that's your core value. So I'd love to hear more about that. And then I'll get to my rapid-fire questions.
[01:07:25] Oren Etzioni:
Sure. Well, I'm very fortunate to have had a close relationship with both my parents: my mom, who is still in Tel Aviv, and my dad, who passed away in the last year. And so it's natural for Eric to mention him. I think they both shaped my thinking on various topics, and so on. Let me just pick one of the many topics where, even though my dad was a sociologist and a political scientist, I do have that connection. My dad always highlighted that professors have a special obligation to be what he called ‘public intellectuals,’ not just to be in the ivory tower, writing our obscure papers and getting together in our societies and so on, but also to engage with the public. His way of doing that was fighting against the war in Vietnam, fighting for Israel, which has not always been popular in academic circles, but basically using the fact that he had tenure and a kind of public perch to argue for what he believed was right.
For me, I'm also a public intellectual, arguing for what's right in my own way, promoting values, but also thinking about how AI comes out of the ivory tower, out of the technical paper, into applications that we use, whether it's to save lives or to make shopping more streamlined.
A lot of my startups were helping consumers save money or make better shopping decisions, right? There are many different ways, but I always felt the ivory tower is a launching point, not an end state where you're locked up in the tower, like, you know, whatever it is, Sleeping Beauty before she escaped.
[01:09:25] Michael Eisenberg:
Thank you for sharing that. Three rapid fire questions for you. You mentioned earlier UBI in this era–you pro or against?
[01:09:34] Oren Etzioni:
I am very strongly pro what you might call UBS, universal basic services. Our society is rich enough that people should have a roof over their head. People should have access to adequate medical care.
I am very opposed to just sending people checks that they can use to avoid work, that they can use for drugs or alcohol or just gambling, and so on. So I think, yes to UBS, no to UBI.
[01:10:07] Michael Eisenberg:
Good nuanced answer. I want to ask you how we get housing for people when we look at China blowing up buildings right now, but we'll leave that for another conversation.
Second, you mentioned earlier in the conversation–and I quote–the word ‘anthropomorphic’ in talking about AI, and that we need to be careful about that. So I'm curious. Two quick things. Do you say please and thank you to your chatbot?
[01:10:28] Oren Etzioni:
Never. And I think that's actually very important. As they become more and more lifelike, we need to distinguish between AI programs and people. It makes no sense to do that, but I do that advisedly, because I'm not talking to human beings. So I try very hard to say please and thank you to every human being, and I try very hard not to do that to machines.
[01:10:57] Michael Eisenberg:
So no tips for the chatbots either then, right?
[01:11:00] Oren Etzioni:
No tips for the chatbots. Just do your job.
[01:11:06] Michael Eisenberg:
Alright. I got one last question for you. So you may or may not know this, but I gave a quite impassioned speech in the Knesset about what I would call amateur hour in the Israeli government on AI policy and AI investment. What would it take for me to get you to come spend some time here and help Israel get on its feet from an AI perspective and come help us lead a charge?
[01:11:27] Oren Etzioni:
So I am part actually of an advisory board led by Yoav Shoham.
[01:11:33] Michael Eisenberg:
Yoav Shoham, I'm aware, yeah. He was just on the podcast, by the way.
[01:11:36] Oren Etzioni:
Oh, wow. Yeah, wonderful. Yeah, he's very thoughtful on these topics. So I would love to help. I really hope, sincerely, that we can get to a more peaceful state, right, where the flights are flowing again, the hostages are back again. I don't claim to know how to do that. A lot of people do. But when we get to that state, you won't be able to stop me, right? Anybody who just says AI, I'll rush in there and offer my help pro bono. I love Israel. Of course, there’s huge potential for AI in Israel, and I would love to help.
[01:12:19] Michael Eisenberg:
I'm signing you up and hosting you for Shabbat dinner. You know, I promised that was the last question, but I got one more. Do you think at all about the energy challenges around AI, or do you think they're overblown?
[01:12:28] Oren Etzioni:
They are not overblown. So again, an important paper we wrote a few years ago was entitled “Green AI.”
People can look it up. It was the lead story in the Communications of the ACM, arguing that we have paths to reducing the energy usage of AI. At the time, again, people pooh-poohed it. They said, “Ah, no, the data centers is where it's at, the cloud. AI is not using a lot of energy.” That's the funny thing about exponentials, right?
A few iterations of the exponential, and now nobody's pooh-poohing green AI. So I think it's important. And I encourage people to look at the paper, because we outlined some important paths forward.
[01:13:07] Michael Eisenberg:
I just want to say thank you to you, because even though, as the master of the chatbots, you say we don't thank the chatbots, this has been a super interesting conversation.
Hope the listeners enjoy it. And for our listeners, you can find Oren Etzioni on X, otherwise known as Twitter, @Etzioni. That's E-T-Z-I-O-N-I, and on LinkedIn as well. Oren, I cannot thank you enough. Have a Shana Tova, and really, thank you for doing this.
[01:13:33] Oren Etzioni:
Michael, thank you. Really wonderful to talk to you. Shana tova u’mevorechet.
Executive Producer: Erica Marom
Producer: Yoni Mayer
Video and Editing: Ron Baranov
Music and Art: Uri Ar
Design: Rony Karadi