On this episode of Invested, Michael hosts Yoav Shoham, Co-founder and Co-CEO at AI21 Labs.
Yoav is professor emeritus of computer science at Stanford University. He is Fellow of AAAI, ACM and the Game Theory Society, and among his awards are the IJCAI Research Excellence Award, the AAAI/ACM Allen Newell Award, and the ACM/SIGAI Autonomous Agents Research Award. Prof. Shoham has co-founded several AI companies, including TradingDynamics (ARBA), Katango and Timeful (GOOG), and AI21 Labs. Prof. Shoham is the founding chair of the AI Index initiative, and chairs the Scientific Advisory Committee of Israel’s National AI Initiative.
Please rate this episode 5 stars wherever you stream your podcasts!
Follow Yoav Shoham on X: https://x.com/yshoham
Follow Yoav Shoham on LinkedIn: https://www.linkedin.com/in/yoavshoham/
Subscribe to Invested here: https://content.aleph.vc/invested
Learn more about Aleph: aleph.vc
Sign up for Aleph’s monthly email newsletter: https://newsletter.aleph.vc/
For the transcript of this episode, go to: https://content.aleph.vc/podcast-episodes/yoav-shoham---episode-27
Subscribe to our YouTube channel: https://www.youtube.com/@aleph-vc/
Follow Michael on Twitter: twitter.com/mikeeisenberg
Follow Michael on LinkedIn: https://www.linkedin.com/in/mieisenberg/
Follow Aleph on Twitter: https://twitter.com/mikeeisenberg
Follow Aleph on LinkedIn: https://www.linkedin.com/company/aleph-vc/
Follow Aleph on Instagram: https://www.instagram.com/aleph.vc/
[00:00:00] Michael Eisenberg:
For a professor of AI and especially the way we kind of consider what you refer to as statistical models or LLMs today, I find you much more creative brain side than mathematical brain side.
Yoav Shoham:
Oh, it's a total accident. I'm a computer scientist. I agree.
Michael Eisenberg:
Welcome to our latest episode of Invested. I am thrilled to have with me Yoav Shoham, or Doctor
Yoav Shoham, who was a professor at Stanford for many years and is now the CEO and founder of AI21, a foundation model company based here in Israel. But instead of me introducing you, Yoav, why don't you introduce yourself?
[00:00:36] Yoav Shoham:
Well, first I can confirm–I am old. I was at Stanford for more years than I'm willing to admit.
[00:00:41] But I need to correct one thing. I'm co-founder and co-CEO of AI21 Labs.
Michael Eisenberg:
Okay.
Yoav Shoham:
Otherwise, perfect.
Michael Eisenberg:
That's all you have to say? Give us more of your background.
Yoav Shoham:
Ah, what can I say? Born in Israel, grew up in Haifa. Did computer science by mistake, just because my parents thought I should study something, so I studied that. Didn't really care. Discovered AI. Started reading on the internet, and–no, there was no internet. But went to the libraries and read, went to do a PhD in the States. Came back to Israel for an abbreviated post-doc. But then already had the job at Stanford waiting for me in 1987. Yes, I'm that old.
[00:01:23] Michael Eisenberg:
Now, everybody who listens to podcasts today is talking about AI, but when you say, “I discovered AI,” this is not in the current moment of LLMs, right? You discovered it a long time ago.
[00:01:36] Yoav Shoham:
That's right. It's interesting. Today's young folks, brilliant young folks, really think of AI as almost synonymous with, if not LLMs, at least deep learning, more broadly construed, and certainly, you know, within machine learning, even more broadly construed, but AI has a longer history, and it's a broader umbrella. And a lot of the limitations of LLMs right now, if you really want to overcome them, you need to have this broader perspective.
Michael Eisenberg:
[00:02:08] Ah, this is exactly where I wanted to go in about 15 minutes, but we're going to go there right now. So, tell us about your discovery, what you discovered about AI way back–it's now 40 years ago.
[00:02:18] Yoav Shoham:
I was drawn to AI just because of the intellectual curiosity. It just seemed like an opportunity to think not only about computers, but also about people and the relation between them, and what it means for a machine, and for me, to be intelligent.
So, that's the reason I was drawn to it. It's an opportunity to do philosophy and psychology while getting a computer science salary. And that really is still my interest. But at the time, machine learning was always there, but the emphasis was elsewhere–people don't realize that AI had another moment in the ‘80s. In the ‘80s it was a similar phenomenon: you went to a conference, standing room only, and even the VCs couldn't throw money fast enough at AI.
That was the ‘80s. And then in the ‘90s, we had what we call the “AI Winter,” when you weren't allowed to admit you were doing AI. And then in 2000, suddenly, the new wave began. But the emphasis in the technology was very different. Back in the ‘80s, it was all about symbolic systems–how to represent knowledge explicitly and draw inferences in a very deliberate way.
[00:03:36] And machine learning was there as an intellectual activity, but didn't play an important role. And now, of course, it's the mirror image. We do statistics. This is what we do. At scales we couldn't imagine. Enabled not so much by algorithmic advances–those too–but mostly by the insane abundance of data and compute.
[00:03:57] And it's amazing what it does. But we kind of left money on the table when we pivoted away from, you know, reasoning to statistics.
Michael Eisenberg:
Explain. This is a critical point to explain.
Yoav Shoham:
Maybe the best way to explain is to look at today's LLMs, large language models. And it's amazing what they do. And among other things, they do arithmetic, which is in some sense mind-boggling. You know, we didn't expect when we trained these models that they'd be able to do arithmetic. And they do it. You give them two-digit numbers to add up, and they'll do it just fine. You give them six-digit numbers to multiply, and you'll get garbage.
And that's because God did not put neural nets on Earth to do arithmetic. We know how to do this. HP told us how to do it in 1970 with a calculator. So, not everything starts and ends with a language model. That's an example of reasoning, an area in which we've optimized how to do inferences in a symbolic way.
[00:05:04] And so that's an example of reasoning. But the list goes on and on. And so the trick is–and this is, by the way, why we started the company; LLMs weren't a thing when we started seven years ago–that deep learning was necessary but not sufficient. And you need to bring reasoning back into the picture.
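A minimal sketch of the calculator point, in Python: route arithmetic to exact symbolic code and send everything else to the model. The `ask_llm` helper and the regex routing rule are illustrative assumptions for this sketch, not AI21's implementation.

```python
import re

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to any LLM API (an assumption; not a real client).
    raise NotImplementedError("wire up your LLM provider here")

def answer(question: str) -> str:
    # If the question is plain arithmetic, don't ask the neural net:
    # evaluate it exactly, the way a calculator would.
    match = re.fullmatch(r"\s*(\d+)\s*([+\-*/])\s*(\d+)\s*", question)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        results = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else float("nan")}
        return str(results[op])
    # Everything else still goes to the language model.
    return ask_llm(question)

# answer("123456 * 654321") returns the exact product, no guessed digits;
# answer("Summarize yesterday's meeting") is delegated to the LLM.
```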
Michael Eisenberg:
And I want to specifically focus on your comment, we left money on the table, by kind of abandoning reasoning for a decade and then kind of getting to this big data moment, which we'll come back to in a second, and then to LLMs. What does it mean we left money on the table?
[00:05:41] Yoav Shoham:
It means that we could do certain things well then.
[00:05:45] There are types of reasoning–it's not just arithmetic, you didn't need AI for that–for example, temporal reasoning. I give you a bunch of information about events that took place, and I ask you questions about certain events and what the temporal relation between them was. There's a whole area within AI called “temporal reasoning,” and there are algorithms for doing that.
The language model today can't do it. I actually recently published a paper with a colleague of mine, called “Understanding”–what does it mean to understand something?–and we give that as an example. So that's an example of something we could do in the ‘80s that we can't do with LLMs, and that's the money we left on the table.
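To make the contrast concrete, here is a minimal sketch of the kind of deliberate symbolic inference temporal-reasoning work relies on: deriving "A before C" from "A before B" and "B before C" by an explicit rule rather than by statistical pattern-matching. The event names are made up, and real temporal-reasoning systems (interval algebras and the like) are far richer.

```python
from itertools import product

def before_closure(facts: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Transitive closure of 'x happened before y' facts: the simplest symbolic
    temporal inference, before(a, b) and before(b, c) => before(a, c)."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

facts = {("breakfast", "meeting"), ("meeting", "flight")}
# ("breakfast", "flight") is derived by an explicit rule, not guessed statistically.
assert ("breakfast", "flight") in before_closure(facts)
```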
[00:06:20] Michael Eisenberg:
Just yesterday, I was working with a colleague and we asked it, post the Biden debate debacle, to write a column as Thomas Friedman would write it about a certain topic. And we fed it the parameters of the article we wanted, or the opinion piece we wanted it to write, in Thomas Friedman's style.
And it produced a remarkable copy of Thomas Friedman's work on this new topic. And then we asked–it was ChatGPT–we asked ChatGPT “Tell us how you got to this structure,” and it was able to tell us how it got to the structure, like, by dissecting it. And then we said, “Okay, point to other Thomas Friedman pieces where this structure exists,” and it was able to do that.
[00:07:10] I think this is something, for example, that these LLMs are perfect at, which is structured reasoning. But temporal reasoning is very complex for these things, because they don't understand the concept of time right now.
[00:07:25] Yoav Shoham:
For example. Yeah. By the way, I wonder whether what it produced for Tom was the same as what he actually wrote recently about the–
[00:07:33] Michael Eisenberg:
It was on a slightly different topic. And I don't know if it was updated for that post. Basically, we tried to have Friedman write a piece about the lame-duck part of Biden's presidency, what he should do.
Yoav Shoham:
I see.
Michael Eisenberg:
As opposed to whether he should withdraw from the race or not. It was actually an incredible piece.
Yoav Shoham:
Yes.
Michael Eisenberg:
But what it showed me, by the way, was that Tom Friedman actually has a structure to how he writes that's incredibly consistent and predictable.
[00:08:03] Yoav Shoham:
That's interesting. So, listen, first of all, language models are amazing. They really are, and they surprise us all the time. And they do exhibit certain things that you almost have to say there's reasoning and understanding there. And then they don't. And understanding when they do and when they don't is a crapshoot, which also raises the question of, well, what does it mean for them to really understand the subject matter they're involved with?
[00:08:38] And they're amazing at pattern detection. Patterns, by definition, compress information, which is a mode of understanding–by projecting from a high dimension to a lower dimension, it means somehow you've extracted an essence of some sort. And they do it by their [00:09:00] construction. But the essence that they home in on, and how much of that is relevant and sound, and how much they are not able to extract, is really a crapshoot.
I have a colleague who published what I thought was a very insightful piece on the distinction between being able to produce expert output and being able to understand the output, and how people and machines, or at least people and LLMs, are different in this regard.
Michael Eisenberg:
Explain.
Yoav Shoham:
That for people–for me to produce something that looks like a Thomas Friedman piece would really force me to understand something in a way that I could articulate to myself and communicate to you. It's hard for me to speak Friedman-esque without having that structure in mind.
And I wouldn't do it very well. I wouldn't do it as well as the language model, whereas language models, as I just said, do it amazingly well. Whether they really understand it in some deeper way is unclear. And by the way, the mere fact that you prompt them to say, “explain it to me,” and they give you a cogent-sounding explanation, doesn't mean they really understand it.
The same example I gave earlier about temporal reasoning–so in this same paper, I prompted–and this is not a knock on GPT, because it's true for other LLMs also–at the time it was GPT-3.5–to give me a temporal reasoning answer, and it gave me a very confident wrong answer and an explanation behind the answer. And then I asked it, “Well, are you sure it’s that?” And I gave it a hint, and it said, “Oh, I'm sorry, the right answer is such and such,” and it gave me the explanation. Again, a totally wrong answer. So, explanations can be only a veneer, and not reflect real understanding.
[00:11:07] Michael Eisenberg:
So on that point, like if I'm a user, and this happened to me by the way in the early days of ChatGPT, where I asked it a question about something I thought was a historic event, with Nachmanides.
Yoav Shoham:
Yes?
Michael Eisenberg:
And I remembered in my own head reading about it, and it gave me a date, and, by the way, it gave me three sources in Jewish texts for it. And I went to look at the Jewish texts, and they're not there. Even after it gave me the actual text, they weren't there, and I began to become confused as to how I was able to make up this thing that I remembered from 30 years ago, and somehow it knew it. And so I posted it to Twitter, and a classmate of mine from college said that we had seen a movie about Nachmanides in which this scene appeared.
Yoav Shoham:
That's great.
Michael Eisenberg:
And it must have somehow, I guess, found the script or something like that in one of its indexing or training sessions, and then brought it back, and somehow invented the sources. And so the question I asked myself is, given this–and it's very convincing; I'd even written something on the basis of it–how am I as the human supposed to know what to trust and not trust?
[00:12:19] Yoav Shoham:
I think that's the key question. I mean, one of my less charitable colleagues called this “authoritative bullshit.”
Michael Eisenberg:
I know some people like that too, by the way.
Yoav Shoham:
And only some of them are in office.
So I think this is interesting. It's interesting not only for intellectual reasons, but also for pragmatic reasons. You know, it's this age of–I don't like the term ‘fake truth,’ but you know, post-truth. And I think one of the biggest issues for us as a modern society, hoping to maintain a liberal democracy, is that we're eroding the information layer underlying our society. And to the extent that we're relying more and more on automated tools to produce and display information, how we are to know what to trust is critical.
First of all, I don't have the right answer. When I grow up, it's something I really want to deal with. But in a limited way, it's something we've put an emphasis on in our own work–it's not a main focus, but we have a very successful application called WordTune that helps you write and read.
[00:13:39] And what we try to do, when we suggest things you might put into the text that are contextually relevant, is give you the sources. And so, at least you can inspect and say, “Do I trust this? Do I trust that?” It's a key issue. And some of the chatbots now do that–introduce pointers. But how to trust the information you get is an issue.
[00:14:06] Michael Eisenberg:
And what would you teach kids today in order to help them kind of contend with this era?
[00:14:14] Yoav Shoham:
Again, yeah, I don't know the answer, but I think it's gotta be a Socratic approach. It's just teaching people critical thinking, whether the information comes from a person or a machine.
Michael Eisenberg:
How do you do that?
Yoav Shoham:
How do you do this with your kids?
Michael Eisenberg:
So I actually got asked this in an article where I was interviewed about AI, which was published over the weekend. And they said, “What should kids study today?” And I said three things, but I left out the fourth, which I'll come to in a second. The three I said were: math and coding, because they teach you how to think in structured formats; and philosophy, which is going to be critical, because I think there's going to be an endless number of ethical issues, and you need to learn to question everything.
And so I thought philosophy. But the fourth thing I should have added was actually Talmud. Because I think it teaches you how to think very critically, using a Socratic method, and to try to get at truths, multi-layered truths. And I think in my own life, the thing that serves me best for critical thinking is actually Talmud study.
[00:15:30] Yoav Shoham:
I can't boast that background and knowledge, but I recognize what you say. I think we agree on this, that the challenge is to not make everything relativistic and conditional, and a matter of perspective. At the end of the day, one plus one is two, and October 7th happened, and these are things we can't relativize away.
Having said that, you know, we do need to question all facts, even these facts. And it goes back to–so what is the basis for accepting something? It's interesting, I'm falling back on this latest kind of weekend hobby I had, this paper I finished writing, after years of trying, with a colleague, by the way, Kevin Leyton-Brown. And we asked the question, “What does it mean to really understand?” So here's the thing–it's a theoretical paper, but it's actually, I think, relevant to our conversation.
First of all, when we discuss a subject matter, we've got to have some level of competence. We can't really speak with any authority, whether it's on math or Talmud or world politics, without displaying some kind of general competence. Call it a passing grade in the domain. But it won't be perfect, and there are certain things where we will say, “I don't know,” and that's fine.
But we can't be ridiculous. If I say something that is patently wrong, and I cling to it in the face of evidence, then there's something fundamentally wrong. And when you think about how to achieve that, the issue of explanation becomes very, very central. I'll try to explain what I mean by that.
When I ask you, okay, so why do you believe something? Why is something the case? So Feynman has this very fun clip–you can find it on YouTube–where somebody asks, “Why do magnets attract?” And he goes through this long, “Well, it depends on what kind of answer you want, because of this, but why this…” and he goes to a deeper and deeper explanation. So explanations are a funny thing, but at the end of the day, an explanation is an appeal to a source of authority that's unquestioned. Why does one plus one equal two? Well, here's the addition table. We accept this as truth. Why did October 7th happen? Well, here's all the evidence. Here's something that we saw live on TV.
And you can close your eyes and ignore it, but then we just don't have a common basis for conversation. Sometimes it's more involved. I mean, sometimes you need a more complicated procedure to explain, but somehow you need to fall back on something that both the explainer and the person who's judging agree is a common basis.
[00:18:45] I think this is something we need to establish. And by the way, this goes back to Talmud, right, because there are principles of reasoning that are enshrined there. So whether it's these exact principles or others, we need to agree on [00:19:00] them–whether it's Talmud or Wittgenstein or something in between, I'm not sure.
[00:19:03] Michael Eisenberg:
I find it fascinating that you go to Wittgenstein and deconstructionism, because that had a huge impact on my thinking in college.
Yoav Shoham:
How so?
Michael Eisenberg:
Because I think there's a lot reflected in language, which is what he was interested in–apropos of LLMs, by the way–and that also helps tell us about culture.
[00:19:23] And I think if you're trying to understand whether people are having the same conversation or different conversations, understanding the cultural backgrounds that they come from, and the words they use, and the types of parables and homilies that they use is valuable for that. Why did you mention him?
[00:19:41] Yoav Shoham:
I went through a period. I went through a phase.
Michael Eisenberg:
At what?
Yoav Shoham:
I don’t remember, I think it was early college days–or not early college, my graduate studies–where I had the luxury of just reading a whole bunch of philosophers and psychologists, and one of them was Wittgenstein. And, of course, the very structured way he went about things was very closely related to the development of logic at the time; logic and AI were intimately intertwined. My thesis was about the logic of time and causation. And so, somehow….
[00:20:17] Michael Eisenberg:
We're gonna cover that more. So Jack Clark who is your fellow chair of the AI Steering Committee–by the way, what is the AI Steering Committee at Stanford? That you chair, right?
Yoav Shoham:
Well, let's speak about what the AI Index is.
Michael Eisenberg:
You're the founding chair of the AI Steering Committee, right?
Yoav Shoham:
Right, so there's something called the AI Index. Maybe take a step back. There was a project, also anchored at Stanford, called the AI 100, which is a 100-year study on AI, which my friend and colleague Eric Horvitz from Microsoft started. And I was asked to join its inaugural, sort of, committee. The vision there [00:21:00] was to do a periodic–every five years or so–big expert study on AI. Which they do, and it's great.
[00:21:07] But my feeling then was that the rate of change in AI–and this is even before the latest craziness, going back maybe ten years–was too rapid to only do things every five years. So let's do something that's ongoing, kind of like an index that we update in real time. And so that was the vision, and I started it sort of as an offshoot from that, and it became an independent thing, and it puts out these annual reports.
It does other things, but the main thing it does is put out these annual reports, which are amazing. And I get to say this because I actually no longer chair it. I'm on the committee, but I no longer chair it. And my colleagues there did an amazing job–these are, like, yea-thick tomes–and the idea was not to be an apologist for AI, nor an alarmist, just to inform the conversation.
[00:22:05] Lots of data, and so that's what the project is. And like I said, I no longer chair the committee, although I'm still on it until they kick me off.
[00:22:15] Michael Eisenberg:
And so, Jack Clark–he's your fellow chair on the Steering Committee–said that at a panel discussion with you in Beijing in 2017, you said that you thought AGI, or artificial general intelligence, which is on par with human intelligence, was hundreds of years away. Jack specifically mentioned that you said 500 years away. Do you still think that?
[00:22:39] Yoav Shoham:
First of all, I don't want to be held to a quote that is attributed to me. I want to see the source, because it's actually–
Michael Eisenberg:
Jack, isn't he the source?
Yoav Shoham:
Well, I'm willing to believe that Jack said this. I need further evidence that I actually said what he said I said. It's a little unlike me to say, because as soon as you mentioned AGI, Artificial General Intelligence, my first reaction is, I hate that term, because I don't think it means anything. Well, that's a strong statement.
[00:23:14] I think it's so loose and ill-defined. Machines have always been better than people at certain things. They'll continue to be better at more and more things. At what point does it become AGI? I don't think that's well-defined. So I think machines are amazing, but I do believe that where we are in the evolution of artificial intelligence is not nearly as far along as some people think. And in Beijing–I actually remember that slide–to describe where we are in AI, and already then there was a lot of excitement about AI, I had an image of a cartoon I reconstructed from my childhood: you see a kid looking at the stars through a telescope, but standing on a stool to get closer.
I think it's not a bad metaphor for where we are in AI right now. The stool we're standing on right now, LLMs and so on–it's amazing. But I think there’s a big gap between true intelligence and understanding and where we're at today.
[00:24:23] Michael Eisenberg:
The kind of meme is, we need to be concerned about AGI–in fact, this whole thing that happened at OpenAI, which I imagine is not fully true, supposedly because somebody saw something that reminded them of AGI–and I kind of find these things a bit fantastical. That was a worry. What worries you about AI?
[00:24:48] Yoav Shoham:
Let me start by what doesn't worry me. What doesn't worry me is that tomorrow machines will wake up, thank me for my services to the evolution of the species and send me off and replace me as a species. I don't think that'll happen.
[00:25:13] Michael Eisenberg:
Hallelujah.
Yoav Shoham:
I don't know, you look at some of the people, you're not sure, but….
Michael Eisenberg:
You're thinking about artificial intelligence for politics right now. I see what's going through your head.
[00:25:25] Yoav Shoham:
You know me too well. But I think there are real issues, as there are with any technological–I wanted to say revolution–issues of inclusiveness: which segments of society will reap the benefits, and how the benefits spread among different parts of the globe. And I think the issues of the impact on employment, and training, and education are real. Now, I actually believe that every technological advance, including this one, always produces more jobs than it eliminates.
But it does eliminate some jobs, so the transition can be painful to people.
Michael Eisenberg:
I'm with you on this. I agree.
Yoav Shoham:
And it certainly calls for a change in the nature of the jobs. I mean, an example would be, say, writing. Copy editing is essentially not a job anymore. Editing, in the sense of a magazine editor or a book editor, exists, but now the editor has many more tools, and some aspects of it can be automated.
And so I think the nature of the job there changes a little bit. So I think that's an issue. The magnitude of the impact on the workforce, I don't know. There are many studies–I know you know them well–but I remember the McKinsey estimate from, what is it, three or four years ago. I can't give a better estimate, but I don't know the basis for the numbers listed there. At some point, people predicted that two million truck drivers would lose their jobs. I don’t see that happening any time soon.
[00:27:20] Michael Eisenberg:
I actually think it's more likely–because that stuff has got regulation–I think the more likely thing I keep referring to is BPO, or business process outsourcing, in India. You know, it's just endless amount of forms, and form analysis, and it could be for insurance, or 401ks or, you know, benefits or a million things that get done by a million Indians or Filipinos, and they don't see the customer. And so, over time, which is probably getting faster, those jobs get replaced in a similar way to copy editors getting replaced, and maybe people who write marketing copy kind of more broadly. Anything where there's a form I think is where you start.
[00:28:03] Yoav Shoham:
I think actually, you're right. I think these are better examples. Every process automation, you know, like RPA, but on steroids, I think is ripe for disruption here and it's happening. Yeah.
[00:28:16] Michael Eisenberg:
And those jobs are considered not blue collar jobs, they’re considered like, on the border of white collar jobs right now.
Yoav Shoham:
I agree.
Michael Eisenberg:
Yeah. So tell us about AI21 Labs, which you co-founded and are co-CEO of.
[00:28:29] Yoav Shoham:
So, we started really in 2018. I think formally end of 2017, if I remember correctly. And when I say ‘we,’ it's me and two other people. It's Ori Goshen, who's half my age, twice my brains. And Amnon Shashua, who's just twice my brains.
And the reason we started the company is odd. We didn't start it to solve a problem in the world. We started it because of where we saw the arc of AI. It's very much what you spoke about before. We believed that deep learning was necessary, but not sufficient. We needed to put reasoning back into AI, and so let's figure this out.
Michael Eisenberg:
And?
[00:29:08] Yoav Shoham:
Working on it?
Michael Eisenberg:
Yeah. And what is the best example? So, if for argument's sake, I have ChatGPT or Mistral, which for lack of a better term–
Yoav Shoham:
Never heard of them, but keep going.
Michael Eisenberg:
Or Claude, which is, you know, kind of these chatbots with a big LLM in the background. And, Claude is doing some other fancier work right now. Where does AI21 sit in that landscape?
[00:29:37] Yoav Shoham:
So, I think it's good to look at the recent evolution of the LLM landscape. In the beginning, there was a very simple, temperature-zero kind of use of LLMs. You have an LLM, you give it a prompt, you get a completion, a single completion, and you're happy.
When GPT-3 came out, we developed our first model, called Jurassic-1. And at the time it was slightly bigger, slightly better than GPT-3. And at the time, they were the only game in town. And that was the first stage. But since then, things have evolved. It's not always a linear progression, but roughly–first of all, we realized these are stochastic systems.
[00:30:35]
So even when you just call an LLM, there's something probabilistic going on. And you might want to sample from the distribution several times and do something with the samples. For example, in WordTune, the application we built on top of it, we wanted to give you–you write something, and you're given suggestions of what else you might write.
And the idea here wasn't to do wordsmithing or wordplay, a synonym or something, but to give you something that was more semantically relevant. You could say–and this actually happened–you could say, “It was nice meeting you last night,” or write it. And the system would suggest, “It was love at first sight,” which is not obviously a paraphrase, but is, I think, actually what you might have wanted to say. And so, to do that, you wanted the language model to generate diverse outputs–and language models weren't optimized for that. And so that's the next step: tweaking the inference that you get for diversity, and checking the results.
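A minimal sketch of the sampling idea described here: draw several completions at a non-zero temperature and keep only the ones that differ enough from the original sentence. The `complete` helper, the temperature value, and the word-overlap filter are illustrative assumptions, not WordTune's actual method.

```python
def complete(prompt: str, temperature: float) -> str:
    # Stand-in for one sampled LLM completion (an assumption; not a real API).
    raise NotImplementedError("wire up your LLM provider here")

def diverse_rewrites(sentence: str, n_samples: int = 8, min_novelty: float = 0.4) -> list[str]:
    """Sample several rewrites and keep only the ones that aren't near-copies."""
    prompt = f"Rewrite this sentence, preserving its intent: {sentence}"
    original_words = set(sentence.lower().split())
    kept: list[str] = []
    for _ in range(n_samples):
        # A non-zero temperature makes each sample from the distribution different.
        candidate = complete(prompt, temperature=0.9)
        candidate_words = set(candidate.lower().split())
        # Novelty = fraction of candidate words not already in the original sentence.
        novelty = len(candidate_words - original_words) / max(len(candidate_words), 1)
        if novelty >= min_novelty and candidate not in kept:
            kept.append(candidate)
    return kept

# diverse_rewrites("It was nice meeting you last night") might keep
# "It was love at first sight" while dropping near-verbatim paraphrases.
```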
This takes you in the direction of where we are today, of doing AI systems. But in between came the various tools, for example–this was going back to 2022, I think. We spoke about tools, which was, to us, I think, an obvious thing. I mean, you don't want to do arithmetic with the language model, we said–let’s call a calculator.
[00:32:02]
You don't have access to an employee database? Call the database. You want to do web services? These are all tools. But this is a little oil and water. How do you get the language model technology, which is continuous, kind of backpropagation-based, to work with these discrete operations? And so we published something called the MRKL system–pronounced “miracle”; it stood for something, it doesn't matter right now.
[00:32:26] Michael Eisenberg:
Another politician, Angela Merkel.
[00:32:27] Yoav Shoham:
Yeah, not that. Although I have nothing but good things to say about her. And by now I think the generally accepted standard is that you need to use tools. You need to reflect on the answers. And we're slowly moving towards what a lot of people like to call agents, which is another term I dislike, even though I coined the term ‘agent-oriented programming’ back in 1991, I want to say.
Michael Eisenberg:
Wow.
Yoav Shoham:
In fact, here's a curious fact. Last week, I got an email from the AI Association that this paper won the Test of Time Award, and I got some award for that. But that's more of a historical anecdote than anything else.
[00:33:20] Michael Eisenberg:
I'm curious to hear what the trophy looks like for that award, given that it's an agent.
Yoav Shoham:
[00:33:24] Me too. Me too. We'll find out. I'll let you know. I haven't gotten it yet. So people speak about agents, and I think people are touching different parts of that elephant, but it's very much an evolution of these ideas of using tools, and doing self-reflection, and this very vague notion of planning. It's kind of Kahneman's thinking fast and slow. You know, right now it's very transactional. You call an LLM–you're the initiator–and the language model “thinks,” quote unquote, for anywhere between a fraction of a second and fractions of a minute, and gives you back the completion, the answer.
But you're going to see systems where the execution could take longer, in fact, could take weeks and months. And, in fact, the system may initiate things.
Michael Eisenberg:
Give me an example.
Yoav Shoham:
You want to task your research agent with doing internet search on an ongoing basis, subject to your guidelines of what you're interested in. And it'll go and do internet search, and maybe you have an urgent need and you tell it, “Oh, I need an answer in, like, an hour, I’m preparing a presentation.” And maybe you say, “I need you to fact-check things, so I want you to go deeper and chase links.”
Or maybe it doesn't require that. And maybe you have more or less compute for this. So it's a complicated, new kind of operating system that's required to manage all of these things.
[00:35:08] Michael Eisenberg:
What did you mean specifically, before, when you said ‘AI system?’
[00:35:13] Yoav Shoham:
So ‘AI agents’ is maybe the catchy phrase for the behavior you'd like from these things. To make it happen, you need a system. And I'll give you an example. In fact, when you're using the chatbots that you like to use, it's more than a single call to an LLM–there's some pre-processing that goes on, there's some checking after, there's already something above it. And you'll see more and more of that, where the LLMs are a very important tool, but they're a tool.
[00:35:53] So what you have now is a whole system where various things take place. LLMs are called more than once, and more than one LLM. There's bespoke software that's written to take care of certain things. There are the tools we mentioned. And all of that needs to be orchestrated to produce the solution to the problem you posed. That's an AI system.
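A minimal sketch of the "AI system" shape described here: pre-processing, more than one LLM call, a tool call, and a post-check, orchestrated around the models rather than inside them. All model names and helper functions are illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list[str] = field(default_factory=list)

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for calls to one of several LLMs (an assumption; not a real API).
    raise NotImplementedError("wire up your LLM provider here")

def search_documents(query: str) -> list[str]:
    # Stand-in for a retrieval tool, e.g. a database or enterprise search.
    raise NotImplementedError("wire up your retrieval tool here")

def solve(question: str) -> Answer:
    # 1. Pre-processing: a small model rewrites the question into a search query.
    query = call_llm("small-rewriter", f"Rewrite as a search query: {question}")
    # 2. Tool call: fetch supporting documents instead of relying on model memory.
    docs = search_documents(query)
    context = "\n".join(docs)
    # 3. Main LLM call, grounded in the retrieved documents.
    draft = call_llm("large-generator",
                     f"Answer using only these sources:\n{context}\n\nQuestion: {question}")
    # 4. Post-check: a second call verifies the draft against the sources.
    verdict = call_llm("checker",
                       f"Sources:\n{context}\n\nAnswer:\n{draft}\n\nDoes the answer follow from the sources? yes/no")
    if verdict.strip().lower().startswith("no"):
        draft = "I couldn't verify an answer from the available sources."
    return Answer(text=draft, sources=docs)
```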
[00:36:16] Michael Eisenberg:
When you look around–and you talked about, quote, the chatbots, or the agents right now, and the need to build systems–and you see the amount of money being raised by OpenAI, Mistral, AI21–it's raised a lot of money, but nowhere close to those numbers, right, to what OpenAI has?
[00:36:36] Yoav Shoham:
It's close to Mistral, not close to OpenAI, yeah.
[00:36:37] Michael Eisenberg:
Does the ability to get a big return, a venture-scale return on that make sense to you? Where does it come from?
[00:36:47] Yoav Shoham:
Well, that's more of an AI question for you.
[00:36:47] Michael Eisenberg:
But you raised the money, you've got to return it, right?
[00:36:49] Yoav Shoham:
Well, I–we hope to. I'll tell you why we need it. Whether it's a good investment for our investors–I hope it is. We need it for two reasons. The main reason is compute. These language models, as you know, are expensive to train. There's the question of how many of these you need, and we can speak about that. But it's expensive to train them.
And by the way, it doesn't have to be hundreds of millions or billions to train them.
It can be mere dozens of millions. Which adds up, you know–a dozen million here and there, it adds up. But that's the main use of the money. The other is talent. We're fortunate to have really good talent, and, you know, that costs money. And so, I'd have to say, we've raised a little over 300 million right now, and we're happy with that, and we'll probably raise more, because to train the next models, we'll need to fund that.
Michael Eisenberg:
[00:37:54] Is model training a moat? Or is it not really a moat?
Yoav Shoham:
It's a really important question. First of all, it is, in various respects. It's a moat for young startups, because it's expensive and they don't have money. For rich companies, the talent to build a good language model is no longer a scarce commodity. The talent to build an excellent language model is scarce. And it's an important distinction.
[00:38:32] Michael Eisenberg:
And that's a talent question more than a data question or a compute question, right?
[00:38:38] Yoav Shoham:
Well, you need all of the above, but one moat is the talent. You know, information diffuses.
Michael Eisenberg:
And computing costs come down generally.
Yoav Shoham:
Yeah, let's see how these curves intersect–because it's not just the amount of data, it's also the length of training.
Michael Eisenberg:
Yep.
Yoav Shoham:
But right. And the other thing is that not all language models are created equal, not just in quality, but also in the type of stuff they need to do. Just for example–one reason, by the way, we have more manageable expenses is that we're not chasing the full multimodal language model.
We're looking at the enterprise, and we ask, what is the data in the enterprise? 80 percent of the data is text. And what's not text tends to be charts and graphs, but it's not donkeys on the moon. So that hasn't been part of what we've been chasing. And so our models are really good at that, even if we don't have a crowd-pleasing sort of, you know, multimodal model.
But even more specific than that, you can have a model that's geared to certain things, certain tasks. So I think we're going to see multiple models. And having access to a very strong general-purpose model is an edge.
[00:40:09] Michael Eisenberg:
You're, then, the believer in what I would call the large language models rather than the small language models, and the more circumscribed ones–the ones that kind of focus on a given niche. Like you could see an accounting model, if I'm not mistaken.
Yoav Shoham:
No, no, no, totally. The reason I'm hesitating is because we sort of pioneered the topic of–so Gartner just has a new category called ‘task-specific language models.’
Michael Eisenberg:
Oh, now the space has arrived. There's now a Gartner model. It's a task-specific language model.
Yoav Shoham:
And I think they do credit us with coining the term. So we're a big believer in that. But one of the reasons I hesitated is because, in order to create a really good model, whether it's for a certain domain, or a certain task, or what have you, you really want to have a very strong base from which to derive this model–which, by the way, tends to be smaller, and better at the task. So it’s not either-or, it's ‘and.’
[00:41:02] Michael Eisenberg:
You look at, for example, what happened to Mustafa Suleyman's company, right? It was kind of, you know, sold for the body parts to Microsoft–expensive body parts, at that. And you look around, and there's been, I don't know how many companies that have raised hundreds of millions of dollars at this point, if not billions. You know the character–“I just raised a pile of money”–and how many of these do you think actually make it?
[00:41:24] Yoav Shoham:
I thought you were going to mention Adept, if you've seen that–you know, a similar story to Inflection. I think they're companies that made a bet on a specific application. And that's a–wow, I don't have the English term for this.
[00:41:48] Michael Eisenberg:
You either get a Purple Heart, a badge of honor–you either get a badge of honor, or you hang your head in disgrace.
[00:41:56] Yoav Shoham:
Something like that.
[00:41:56] Michael Eisenberg:
Yeah, it rhymes in Hebrew, but it doesn't work in English so well.
Yoav Shoham:
Yeah. One of the few cases where Hebrew is pithier than English.
[00:42:04] So yeah, I mean, creating a–right. So I think as you bet on a specific application, it becomes more tenuous. We realized, for example, that writing assistants were going to become very generally available, both from many companies and from the big players.
And so we always believed that WordTune was–because we didn't have the enterprise market ready then–a very good way for us to understand how application building on top of language models actually works. We're one of the few companies, maybe the only one, with our toes in both areas. And it's doing very well in terms of, you know, revenue and everything. But if we had bet the future of our company on that, we wouldn't be doing as well as we're doing. And so, I think that's part of what you see: companies who are creating foundational technologies being, I don't want to say agnostic, but not narrowing their success to a particular application.
[00:43:19] As opposed to companies putting all their eggs in that one basket.
Michael Eisenberg:
Do you worry about Google, Microsoft, the big players kind of crushing you?
Yoav Shoham:
Well, I worry about a lot of things, including that, all the time. I'm a worrier. Listen, I have nothing but huge respect for those companies, and certainly if I were in the business of just doing writing assistants, I'd be even more worried.
By the way, we have a very close partnership with Google and with AWS, and we now also publish our models on Azure. And so, this is not a winner-take-all world that we live in. But we do want to make sure that where we put our emphasis, where we do concentrate, we have reason to believe we'll win.
Either because of focus–at the end of the day, at Google, mighty Google, there's a team of 50 people, which is already a large team, focusing on the problem. We have, you know, 100 people. So if we concentrate, we don't have a problem of scale. And the other thing is–and this is not a knock on any company–it's hard for big companies to move fast.
And I remember at my previous company, Timeful, which we sold to Google–it was, by the way, very successful for both sides.
[00:44:50]
Michael Eisenberg:
We'll come back to that in a second.
[00:44:52] Yoav Shoham:
But, you know, we had this calendar–it was a calendar and to-do list combined–and we’d gotten wind of a new calendar coming out from mighty Google called Timely.
Mighty Google coming out with Timely, when we're coming out with Timeful–it's scary. But the truth is, we were able to do much more, much faster, than they did. And so, I think that could serve us well also.
[00:45:16] Michael Eisenberg:
If you had a million dollars to invest in any AI company other than AI21, where would you put it?
Yoav Shoham:
I'd ask you to invest in Aleph, so you can invest it for me.
Michael Eisenberg:
That was almost political, that answer.
Yoav Shoham:
One day I want to be able to–no, the truth is, and this is a side comment–when I returned to Israel eight years ago, I thought I wanted to become an investor, and I joined one of the investment clubs run by colleagues you and I know well.
And I realized that number one, I didn't really care, I didn't come to any meeting, it didn't move me, it didn't get my juices [flowing]. Plus, it's a profession. And I didn't have the energy to learn the profession.
[00:45:55] Michael Eisenberg:
Fair enough. Well avoided. You also hold the seat as basically the chairman of the policy forum for AI here in Israel.
[00:46:06] Yoav Shoham:
Again, if you allow me to slightly correct the title–Israel has a national AI program, as does almost every country in the world, and it's constructed in an interesting way. And they wanted to have a scientific advisory committee, so they asked me to kind of put it together and chair it. So that's what I do.
[00:46:26] Michael Eisenberg:
What are the core policy recommendations you'd make on a national level? What should countries, in particular smaller countries, do about AI right now?
[00:46:38] Yoav Shoham:
First of all, invest money. We could speak in a moment about where the money should go.
[00:46:44] Michael Eisenberg:
See, I got you to be an investor. Just like that. It took me a second.
[00:46:47] Yoav Shoham:
There you go. But the problem is you can't compete in investments, let's say, with Saudi Arabia or UAE. And so, you got to invest smart. You got to invest in compute, and make it available to both academia and the startup ecosystem. And you really want to invest in the development of human capital.
We in Israel, as you know, we have very strong academia, but on a good day, if you count the number of people who do core AI among professors, you won't get to a hundred people. We need more than that.
[00:47:27] Michael Eisenberg:
Well, we have a lot of people who are Israelis in America or UK or France doing this.
[00:47:36] Yoav Shoham:
We do. And one opportunity is to make the environment here very inviting for them to come to Israel, either on a permanent basis or a partial basis.
Michael Eisenberg:
[00:47:47] How would you make it attractive?
Yoav Shoham:
Well, the real answer is for the country as a whole to be an attractive place, regardless of AI. But maybe that's above my pay grade. With regard to AI, I think you want to provide the basic table stakes–compute and salary. And we do have a built-in limitation in Israel, because of the way Israeli academia is constructed, so it's a little hard to pay professors something competitive, but I think there are ways around it. And you want to put in front of them huge challenges: intellectual, technological challenges.
Michael Eisenberg:
Give me an example.
Yoav Shoham:
I'd rather not, but the reason is that in this national program, I suggested that in addition to all the good stuff that's happening, we declare some moonshots, which, A, would excite people to get involved, and B, if successful, would really catapult us to an even higher standing than we have in AI currently. And what those moonshots are is still under discussion, so I'd rather not enter that discussion.
[00:49:16] Michael Eisenberg:
Got it. How would you say the government is doing here on investing in compute and attracting the talent right now?
[00:49:21] Yoav Shoham:
I think that there are really good people who are trying to do the right thing. There's a lot of willingness to invest, and it's moving–maybe not as fast as we could, but, for example, on compute, I think finally–and I take no credit for that–there'll be a call, an RFP, for providers to provide not tens of thousands of GPUs, but at least a few thousand GPUs for the community. I think that's a really good start. I think the Ministry of Education has agreed to put money toward increasing stipends for postdocs and beyond, and I think we need to do more of that.
[00:50:04] Michael Eisenberg:
I want to go back for one second to your time at Timeful, which you sold to Google.
Yoav Shoham:
Going back in Timeful.
Michael Eisenberg:
So Jacob Bank–we asked him what he wanted to ask you; he was your co-founder at Timeful–and he said I should ask you: what is your other office?
[00:50:23] Yoav Shoham:
So it so happens I'm speaking to Jacob in two hours. I don't speak to him that often. But I'm speaking to him in two hours. Hi Jacob. Jacob’s amazing. My other office was always where, it'll sound terrible, but where I like to smoke a cigar. That's my other office. So, back in the Timeful days, we were–
Michael Eisenberg:
Should I ask you where you like to smoke a cigar?
[00:50:51] Yoav Shoham:
Where?
[00:50:51] Michael Eisenberg:
Yeah. Like, where was that physical other office?
Yoav Shoham:
Oh, at Timeful in the parking lot.
Michael Eisenberg:
The parking lot. Okay.
[00:50:56] Yoav Shoham:
Yeah. I actually had a folding chair in my car. I'd go open the trunk.
[00:51:02] Michael Eisenberg:
Beach chair in the parking lot, kind of?
[00:51:03] Yoav Shoham:
Yeah exactly.
[00:51:03] Michael Eisenberg:
That's great. So everyone could go by and watch you smoking outside, because you can't smoke inside. What would you do when it’s cold out?
Yoav Shoham:
Bay Area? California?
Michael Eisenberg:
Oh, I guess you're right. You couldn't do this in Boston. And what are some big decisions you made in your other office, on the beach chair in the parking lot, smoking a cigar?
[00:51:21] Yoav Shoham:
Funny. So we needed a name. So we had a temporary name for the company, and you'll enjoy it. It was called CafeBotz. Or in Hebrew, “kafe botz.”
[00:51:37] Michael Eisenberg:
So it means black coffee, or like, you know, dirty, muddy coffee.
[00:51:40] Yoav Shoham:
Yeah, and so I had the URL, but we needed a real name.
[00:51:47] And so we had this agency, this branding agency that we worked with, and I really liked the people. Really smart and nice. And we went to their offices, and they took out magazines and gave us scissors, and we took turns cutting out phrases–you know, I felt like I was in kindergarten.
[00:52:09] And then we ended up with a short list that nobody was really happy with, but we said we'd go with one of these. And I went to the parking lot and came up with Timeful, and we called the company Timeful.
[00:52:20] Michael Eisenberg:
Amazing what smoking a cigar can do in a beach chair.
Yoav Shoham:
Amazing, isn't it?
Michael Eisenberg:
If you'd just done it on the beach, it would have been even more perfect.
[00:52:27] Yoav Shoham:
Yeah, especially in Israel. Yeah.
Michael Eisenberg:
Do you go to the beach here?
Yoav Shoham:
I do, yeah.
Michael Eisenberg:
You're a big beach guy?
Yoav Shoham:
I don't have as much time as I'd like, but I love jogging on the beach. I like, you know, stand up paddling.
Michael Eisenberg:
Do you bring a cigar?
Yoav Shoham:
I'm not sure I'm willing to admit it.
[00:52:46] Michael Eisenberg:
Okay. Next time you host office hours there, I just want to get an invite, that's it. It's all I'm asking. You know, I'll bring a glass of wine, you'll bring the cigar, we’ll be good.
[00:52:53] Yoav Shoham:
I'm a little self-conscious here, because I know there's an image of cigar smoking and a certain privileged class. I actually liked that before I–
Michael Eisenberg:
Found out that politicians do it?
Yoav Shoham:
There you go. And, or, you know, smoke-filled rooms and–it's funny. I had a meeting with investment bankers the other day, and I was on Zoom, and I was on my porch. I said, “It's the first time I'm meeting investment bankers and I'm the one smoking a cigar.”
[00:53:23] Michael Eisenberg:
I love that. So, Jacob also said that you recruit teams in kind of an interesting, serendipitous way. What is he talking about?
Yoav Shoham:
Boy, I mean…
Michael Eisenberg:
He says you're hiring kind of non-traditionally.
Yoav Shoham:
To be honest, I'm not sure. I'm going to have to ask Jacob. I will say that I think I get people, and I like people who are smart and who are nice. I don't know if that goes to what Jacob was speaking about.
[00:53:56] Michael Eisenberg:
He says it's not exactly an organized process, right? You find them, and recruit them, and persuade them.
[00:54:04] Yoav Shoham:
Well, I have to say, we haven't worked much with recruiters and so on, neither in my previous company nor at AI21. A lot of it is kind of word of mouth and people who know people. Yeah, again, I don't know what Jacob had in mind, but I think if there's something you're excited about, it's easy to bring people to join you on the journey.
[00:54:25] Michael Eisenberg:
For a professor of AI, and especially the way we kind of consider what you refer to as statistical models or LLMs today, I find you much more creative brain side than mathematical brain side.
Yoav Shoham:
[00:54:35] Oh, it's a total accident that I'm a computer scientist. I agree.
Michael Eisenberg:
It's an accident that you're a computer scientist.
[00:54:39] Yoav Shoham:
Oh, I wanted to be an architect, but I was traveling abroad, by the way, apropos, on a wine tour in France. And my parents really made it clear that I should study something. But for architecture, I had to go back to Israel and take a drawing exam.
And also, I didn't want to do that, and I said, “I can't draw.” So I said, “Sign me up for something”–I had high matriculation grades–“sign me up for something that I can get into.” They signed me up for computer science.
[00:55:07] Michael Eisenberg:
Amazing. Speaking of your serendipitous career as a computer scientist, you still went on to be a professor at Stanford for 29 years. And I wonder–you know, you've now moved back to Israel, and you look at what's going on on college campuses in America, and on Stanford's campus–what do you think about that? How do you think about it? Because, you know, your time there was different, very clearly. Things have rotted or gone a different way. How do you relate to this?
[00:55:37] Yoav Shoham:
I mean, how do you relate to it? It's so raw and unsettling. And even though I haven't been at Stanford for a while–I go back, I'm in touch with my colleagues; by the way, there are not a small number of Israelis there also–and Stanford is not an extreme case, but even there it was so unsettling that I felt compelled to reach out to the president, and initially to multiple people, and say that, you know, this is not a time for moral, sort of, hedging. It's time to show moral leadership. And I remember saying–this was in October, writing to my colleagues and to the president–“This is what happened. If you can stomach it, go on Telegram and look at what happened. You won't sleep afterwards. But it happened. Now, what's going to happen is going to be horrible for both sides. And I'm not going to get into what I think should happen, but it's going to be horrible. But don't forget this moment, because this is what started it.”
And I still feel that way. And I feel like, see all these social undercurrents that I don't feel an expert on, but these identity politics and the oppressor-oppressed, and somehow, you know, everything getting projected onto this very ignorant, simplistic viewpoint. Even by well-meaning, good kids, I think there's a moral obligation for the leadership of universities to risk being unpopular. And say what's right.
[00:57:25] Michael Eisenberg:
Do you think they exercise that moral obligation?
Yoav Shoham:
I do not.
Michael Eisenberg:
You do not?
Yoav Shoham:
I do not.
Michael Eisenberg:
And why do you think that? Why do you think they can't exercise it?
[00:57:33] Yoav Shoham:
First of all, I think they have a very hard job. I think they have multiple constituents that they need to serve. Students are an important constituent. Parents of students are important constituents. Faculty are. I would have to say donors are. And it's hard to balance all of that. So I understand why it's not easy for them. Having said all of that, I don't think any of that excuses not doing the right thing. And, I would have to say, I think it's cowardice.
[00:58:12] Michael Eisenberg:
You mentioned the word ‘balance,’ you have to balance all of that, and given your perspective both on AI and philosophy, I find myself wondering whether this, kind of, obsession with balance has cost the world its moral position, has cost the world its leaders. We have no leaders because everyone's trying to find balance.
And in real life, there's no balance. I've said this in many cases, there's no balance. And certainly when there are moral issues, there's just no balance. And in order to be a leader, you need to strike out of balance. By the way, if you want to be a leader in AI, you know, the weights, how you weigh things–the weights and measures matter a lot, right?
[00:58:51] Yoav Shoham:
You never win by being average. I totally agree with that. The opposite danger is being simplistic and gravitating toward the extreme position that will never win, and in fact, usually is morally wrong. And so, I think you need to have a clear moral compass and act according to it.
I agree. It's not a matter of just calculating the vector of all these influences and going in the, you know, lowest-friction direction. I absolutely agree with that. By the way, you know this better than me–in business, you'd never win if that's what you did.
[00:59:33] Michael Eisenberg:
No, if you don't bet in size, you don't matter. And you've got to make hard choices. And I feel we let people get away with easy choices today, or with not taking a stand. And that's why we have no leaders. It's not an accident that at this moment in time, after 20, 30 years of balance, we have no leaders.
[00:59:54] Yoav Shoham:
You think that's true across the world or in certain liberal democracies?
[00:59:57] Michael Eisenberg:
I think it's true mostly in liberal democracies right now. Too many constituents, including social media, and people who are afraid to be criticized in public, because the price is high. But all leaders pay prices.
[01:00:15] Yoav Shoham:
I agree. Now, I have the luxury of not being a politician, not having to run for office, which is a hard thing.
Michael Eisenberg:
[01:00:22] But neither do college presidents have to run for office.
Yoav Shoham:
They do.
Michael Eisenberg:
They kind of do.
Yoav Shoham:
They do. I mean, they have to–but you’re right, I mean, they have a small number of people they need to convince.
[01:00:31] Michael Eisenberg:
To speak outside of your comfort zone, meaning not just to your immediate circle–you know, Claudine Gay, for her lack of moral standing, will finish her career, her life, in disgrace.
[01:00:46] Yoav Shoham:
I'm sorry, I didn't understand what you said. No, I actually didn't hear the question.
[01:00:46] Michael Eisenberg:
Oh, I said–you were at Stanford for a while, but you know, you can see that people who didn't express or exert leadership, like Claudine Gay from Harvard, ended up finishing their careers and their lives in disgrace.
[01:01:00] So there are actually prices to pay for not being a leader. That's clearly true.
[01:01:05] Yoav Shoham:
I think also the question is, can you kind of look yourself in the mirror regardless of how well it served your career?
[01:01:16] Michael Eisenberg:
Yeah. I think a lot of people tell themselves a lot of stories.
Yoav Shoham:
For a while.
Michael Eisenberg:
Maybe. Yeah, I agree with that. I totally agree with that. So, to wrap up the conversation–Dan Ariely has said about you that you're a “combination computer science philosopher.”
Yoav Shoham:
I'll take that. I like that.
Michael Eisenberg:
What does it mean though?
[01:01:36] Yoav Shoham:
Ask him! Why are you asking me? I like to ask the big questions, which doesn't always serve me well.
I mean, for example, to become very successful academically, the standard practice is just to write a lot of papers, each of which may not be much–the minimal publishable unit sort of thing. And I've never liked that. I've always liked to ask the big questions, which sometimes don't admit very satisfying or good answers. But maybe that's what makes me a philosophical computer scientist.
Michael Eisenberg:
What's the biggest question you're asking right now?
Yoav Shoham:
The biggest question I've asked and partially answered is, what does it mean to understand?
Michael Eisenberg:
Tell us.
Yoav Shoham:
It goes back a little bit to what we discussed before. So, a couple of things. First of all, if you think about people–and it's true also for machines–you never just say, “Oh, the person or the system understands.”
They understand something. They understand the subject matter; they understand, you know, mathematics; they understand philosophy. So number one is: what is it that you do or don't understand? And second is: how would you exhibit your understanding? And again, I'm sorry if it's a little redundant with what I said before, but you've got to be somewhat proficient.
If I can't answer any question in mathematics, I don't understand mathematics. I don't need to be able to answer every question–it's okay even if I explicitly say, “I don't know the answer”–but there are certain things I can't be ridiculous about. I can be ridiculous by giving a wrong answer, or sometimes by not knowing.
Sometimes it's ridiculous not to know how much 1 plus 1 is. If I don't know that, then no matter how good I am at other additions, I don't understand addition. So, those are some telltale signs of understanding. You might ask, is there a deeper sense of understanding? That is, this is just the way you exhibit it, but what about internally? There's a lot of discussion about this in AI and philosophy, and I actually believe that the answer is no. This is very much a Turing kind of view–it's not just a test, it's actually the essence of understanding, being able to answer questions.
[01:04:09] But then we have an intuition that somehow, when we understand things, there's some deeper principle being displayed. It's not just–you know, there's a famous story about one of the mathematicians, I think it was Gauss, but I'm not sure, who as a young kid was given some arithmetic series and asked what the sum was, and he immediately gave the answer.
And they said, “Oh, you know the formula?” He said, “No, I just summed it.” But of course, the way most of us mortals do it is we know something more general–the general principle. And that somehow intimates a route to understanding, and then, how does that fit into the picture? And I think this is where explanations come in–why do teachers at school ask you to, you know, explain your answer?
Why do they say that? It's because if you're able to explain the answer, the teacher can infer that you could have answered many other similar questions that the teacher didn't ask. It's a way of probing you for the deeper thing. So essentially, an explanation is worth a thousand answers. And so I think those are sort of the main elements of understanding: general proficiency, avoiding ridiculousness, and explanations.
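(A brief aside, not from the conversation itself: the "general principle" behind the young Gauss's famously instant answer is usually told as the pairing trick for an arithmetic series, which is exactly the kind of thing a student who really understands it can also explain.)

\[
1 + 2 + \cdots + n = \frac{n(n+1)}{2}, \qquad \text{e.g. } 1 + 2 + \cdots + 100 = \frac{100 \cdot 101}{2} = 5050.
\]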
Michael Eisenberg:
[01:05:26] And what makes you human? Or what makes us human?
Yoav Shoham:
I wish I knew. So we all have this intuition, that we’re - vayechasrenu me’at mehamechona -
That we’re somehow different from machines.
Michael Eisenberg:
“A little less than the machine.” That's a paraphrase of a verse in Psalms, which says that You made man slightly less than God.
Yoav Shoham:
Right.
Michael Eisenberg:
So Yoav reinterpreted it, or rephrased it, as “You made us a little less than the machine.”
[01:05:55] Yoav Shoham:
So I don't know who's less than what, but at least we're different. We all have this intuition that we're more than a glorified toaster. And I used to teach this freshman seminar at Stanford, for young kids before they got corrupted–the title was “Can Computers Think? Can They Feel?”
[01:06:17] And I'd start the course–it's a long answer to your question, but bear with me–I'd start the course with these six questions. Let's see if I can reconstruct them. Can machines think? Can they understand? Can they be creative? Can they feel? Can they have free will? And can they be conscious?
And I'd ask them to think about machines in general, you know, not just the current laptops. And first, I'd force a vote at the beginning of the course–you've got to vote, you can't hedge depending on what you mean, yes or no. And then we'd speak about AI and machine learning, all of that, and at the end they would vote again.
And I noticed two things. First of all, clearly at the end, people were both less sure of their answers and more charitable toward the computers. All of the “Oh, of course they can't”–that went away. There also seemed to be a difference between men and women here, but it wasn't statistically significant, so I'm not willing to–but by and large, it seemed like the women thought there was a difference and the guys couldn't tell the difference. But that requires more research.
Michael Eisenberg:
[01:07:20] It speaks volumes. Yeah.
Yoav Shoham:
But I gave a TED Talk about this, and I spoke specifically about free will. And I looked a little into it, and it's fascinating, because obviously we have this feeling that it's somehow innate, one of the archaic characteristics of humans–being responsible for our actions and so on, being moral people. And then you look–what have people written about it?
So Isaac Bashevis Singer, in his Nobel Prize speech, said, “We have to believe in free will, we have no choice.” And this philosopher Raymond Smullyan, who passed away–he was kind of a funny guy. I remember hearing him talk. He looked like Dumbledore, and he was actually also a magician. He was an amazing guy, and he has this parable, in a short story he wrote, about a mortal asking God to be absolved of this burden called free will. And through a series of Socratic questions, God kind of gets across the message that it's not that it's a bad request–it's an incoherent request. And then you have Rabbi Akiva: “Hakol tzafui v'harshut netunah”–everything is foreseen and free will is granted.
What does that mean? I find it fascinating. This is an example of what makes us human. I think we don't know the answer, and asking whether machines can have these qualities is just a new lens–not only to understand machines, but to understand ourselves.
[01:09:08] Michael Eisenberg:
And so, to finish up, what is your core human value?
Yoav Shoham:
I’m not sure if it’s my core, but I'm really curious. Just curious.
Michael Eisenberg:
Just curious. What makes you most curious? What are you most curious about?
Yoav Shoham:
What does it mean to be human?
Michael Eisenberg:
Well done. Thank you, Yoav, for joining us on Invested.
Yoav Shoham:
Thanks for having me.
Michael Eisenberg:
And if you enjoyed this podcast, please rate us five stars on Spotify, Apple Podcasts, or wherever else you listen to podcasts. If you want to learn more about Yoav Shoham, you can follow him on X, @yshoham–that's Y-S-H-O-H-A-M. Thanks, Yoav. Appreciate it.
Yoav Shoham:
Thank you.
Executive Producer: Erica Marom
Producer: Sofi Levak & Yoni Mayer
Video and Editing: Ron Baranov
Music and Art: Uri Ar
Design: Rony Karadi