Thinking Fast and Slow, Deep Learning, and AI [video](lexfridman.com)
Thank you Lex. I think your podcast is the most interesting on the internet. But...
Reading books is awesome because you get the condensed knowledge that someone may have spent decades of their life compiling. When I listen to a podcast the same applies, to a lesser extent. The problem is that Lex does not ask interesting questions. He dwells on philosophical and commonplace questions, e.g. "What is the meaning of life?" or "Is math invented or discovered?". These questions are interesting, but they are not in the guest's field of expertise, which in turn causes them to give generic responses. This devalues the quality of the podcast.
I disagree with this. One of the reasons I like this podcast so much is that the philosophical questions often lead to guests revealing profound insights into their outlook on life (or at least you get to see more of the interviewee's personality!).
The other thing I like is how Lex is willing to disagree with the interviewee. It leads to much more interesting discussions when someone has to explain their reasoning rather than just stating it outright. I think it helps that Lex seems to always ask the same questions I was thinking of!
I agree with this. I’ve been in the AI field(s) for over 25 years and I really wanted to like this podcast. The people Lex gets on are amazing, but I haven’t been able to get through any of the episodes as they get too philosophical.
I understand it would be difficult to get one's head around the material produced by each of these giants in order to ask more specific questions. Would it be worth asking others in the field about the key ideas that require explaining and which topics are controversial?
While I definitely understand where you are coming from, I think Lex's style can grow on you.
My advice would be to ask shorter questions. Sometimes, Lex, you try to explain what you mean; I would just ask the question and let the other person start talking instead of trying to fill the silence.
Also, I do really enjoy the questions about the meaning of life (maybe more discussions on free will would be great too).
In any case, hats off to you Lex for interviewing some of the most interesting people in the field (and those who are not exactly in the field too).
Also, I love how you show up in a suit.
Lex here. The amount of positive and thoughtful comments here is humbling. From the bottom of my heart, thank you.
Here are things I didn't realize were part of the interviewer's role (my role), but now know they are:
1. Push towards depth, because not all people go there naturally themselves.
2. Ask for clarifications if I don't understand something. This can make me sound stupid, but it's a worthy sacrifice. I will always sacrifice ego for the chance to understand something basic or hopefully fundamental. In fact, I play dumb sometimes just to force explanation of basics on which the technically deep ideas are built.
3. Disagree respectfully (at times playing devil's advocate) to give a chance to the interviewee to argue their point.
4. State whatever question or point I have clearly, concisely, quickly, and then shut up and listen. My role is to give the other person a break and to throw out ideas that spark their passion. I really struggle with this (especially the concise, clear part).
Thank you again for the kind words. I'll keep improving!
Another AI person here (Stanford PhD student). I concur that the philosophical stuff is mostly annoying for me -- it comes off as random self-satisfied pseudo-intellectual self-indulgence that makes for nice clickbaity YouTube clips but does not make for actually good conversations. The list of guests is great, but I'd prefer the conversations to be more humble and less wannabe insightful.
Sure, keep improving as much as you can. Just take care not to lose what distinguishes your podcast/style in the process!
Interesting philosophical thoughts are not limited to philosophers. A philosophical question asked to a top-achieving person in any field often yields a thought-provoking answer. I have heard some of those in Lex’s podcast. A diversity of perspectives is helpful for such questions.
One can tell that Lex does his research and also asks specific questions pertaining to the interviewee’s expertise. I think he strikes a good balance in many/most episodes.
Because it looks like Lex is reading comments here, I would suggest that an adequate compromise for going into more technical detail might be to link to external resources and cover them at a higher level on the podcast itself. It's my general belief (admittedly experience-based) that rigor is best developed in isolation; the podcast format, however, is great for developing high-level intuition.
Keep up the good work, Lex. The quantity of content you put out at the level of quality you do is very impressive.
The few times I've tried to read Kahneman's book I must admit that I stopped pretty early. I found his writing style too authoritative, without presenting much evidence for the theories he lays out.
I can see why you might feel this way. I recommend giving it another try, coming at it from two considerations. The first is observing their experimental designs, which are very clever and well described. The second is the extent to which these experiments have been replicated with consistent results across populations with a range of socioeconomic and cultural differences.
The consistency of these experimental outcomes may help explain his authoritative tone. After all, he won a Nobel prize in economics, a field completely outside his domain expertise.
If that is not enough to entice you to give it another go, consider the book The Undoing Project by Michael Lewis. In his highly researched account, full of detailed anecdotes, Lewis describes how Kahneman and Tversky's partnership initially began around understanding the decision-making process in circumstances where there is a high degree of uncertainty and acquiring additional information is not feasible. The Kahneman/Tversky collaboration is described as a deep one in which neither man claims credit for his respective contributions; only both minds together could have produced their research findings.
Lewis also describes how many reviewers of his book Moneyball noted the many parallels between Kahneman and Tversky's research and themes covered in Moneyball. Lewis was not familiar with this research when he wrote Moneyball; it was not until others pointed out these parallels that Lewis decided he should engage with Kahneman to write The Undoing Project.
Given that this is what he had to say on priming, the authoritative tone starts sounding a little less authoritative: "When I describe priming studies to audiences, the reaction is often disbelief . . . The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true."
I agree completely. If you take everything in the book as 100% gospel, it will let you down. However, there are lots of perspective changing things to learn. I particularly learned a lot about anchoring and reacting vs thinking.
Well, after all, he is an authority, and too much evidential detail would likely detract from his goal of helping the general public improve their ability to identify and understand errors of judgement and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them.
You might prefer his scholarly writings over those intended for a general audience:
And you're well served by this response, as the actual work he did is basically not reproducible. No matter how many best-selling books get made into movies, it's still bullshit.
I read it on the advice of a good pal, and it sure seemed like bullshit to me.
I believe this is overstated and makes it sound like all of his work is bs, when the majority of the book is just fine.
(I could be wrong and would be happy to be proven so. Just not a fan of the all-or-nothing attitude people apply toward this book when that doesn't seem warranted.)
If some data is wrong and you can't tell which is which, how can you trust just some parts of this book? Are we to fact-check bit by bit until everything gets sorted out and we know for sure what parts of the book to trust?
Besides, why go through all that trouble when there are heaps of good psychology books out there that aren't plagued with errors like this one? Since our time is so limited, I think it is a good heuristic to avoid any non-fiction book that is known to contain errors.
Your complaints aren't unreasonable, and I wouldn't disagree with skipping the book to read others instead. I agree with whoever said the book needs a version 2.
However, I don't think this justifies dismissing the entire book. "I'm not sure which studies weren't reproducible and I don't feel like looking them up," is a very different statement than, "This whole book is bullshit." There's really no reason to make that latter overstatement.
If some data is wrong and you can't tell which is which, how can you trust just some parts of this book?
That's kind of the same problem that a psychology researcher faces; some of their data is going to be wrong.
The question winds up being how "robust" your claim is: can it survive having some points be wrong? For Thinking, Fast and Slow, the robustness of the claims is kind of a mixed bag IMO.
> why go through all that trouble when there are heaps of good psychology books out there that aren't plagued with errors like this one?
All of psychology has suffered in the replication crisis, but my understanding is that Kahneman and Tversky's work held up better than most. Their work was mostly solid and from a different era. The real bullshit began in the era of celebrities doing TED talks.
Edit: it would be better for me to distinguish Kahneman and Tversky's own work from the work of others described in the book. E.g., there is material in the book on priming which is definitely TED-era and doesn't replicate.
I feel like there are two ways you can take a book like Thinking, Fast and Slow. One is that there is a variety of mental processes, some faster than others, some more conscious than others, etc. The other is that there's a hard distinction between "the" slow process and "the" fast process.
If you take the first reading, it's plausible and useful as one more datum. But it seems like the replication problems make the hard-distinction reading more problematic.
> The other is that there's a hard distinction between "the" slow process and "the" fast process.
I've yet to read the book, but I've listened to him on several podcasts, and I've never gotten the sense that he wouldn't see it as a continuum. Didn't he even say early in the interview that the "1 & 2" framing is more of a metaphor? (His answer to that question starts at 9:00.) System 1 is trainable, for example, and I can't imagine he'd suggest that it isn't highly dimensional.
As far as I understand it, only a small part of his book is based on distrusted research (and was that even his work or did he just write it up? Cannot remember).
Most of the book is still considered correct.
Plus, he readily admitted to the faulty parts and made a very strong request to the affected research teams to clean up their act and pretty much re-do all experiments multiple times, by multiple labs, with external oversight.
The things Kahneman worked on himself all did pretty well out of the replication crisis as far as I can tell. The fact that his findings got so much pushback when introduced probably helped a lot with making them more rigorous. And Kahneman didn't dig in his heels when the crisis hit. But there is an awful lot of stuff that needs to be expunged from his book because it was based on what turned out to be bad science.
> But there is an awful lot of stuff that needs to be expunged from his book because it was based on what turned out to be bad science.
Can you give an example of this? I haven't seen anything he said that fundamentally needs to rest on a theory that could be the outcome of some study, but maybe I'm interpreting it somehow differently.
I'm not sure if this is what you're looking for, but here: https://replicationindex.com/2017/02/02/reconstruction-of-a-...
You are making a sweeping generalization of the entire book.
Bear in mind: As has been repeatedly pointed out, it is only the priming-related chapter (called 'The Associative Machine' in the book) that put "too much faith in under-powered studies". Not the entire book!
The book is a synthesis of forty years of Kahneman's research and his collaboration with his late colleague, Tversky. A wide range of topics are covered; and it still absolutely merits reading. Patiently dive into the book and make up your mind.
(And yes, a v2 of this book definitely is worth it, given the "authority" of the Nobel Memorial Prize.)
I've heard something similar here on HN before (I still haven't read the book myself), and what baffles me is: wasn't he awarded the Nobel Prize for that work? Are those just political bullshit nowadays as well, like the Oscars?
Sveriges Riksbank Prize in Economic Sciences (in memory of Alfred Nobel), not the Nobel Prize. Economics is mostly a set of shaggy-dog stories in service of ideology rather than a branch of legitimate scientific inquiry. I mean, Paul Krugman won the same prize; he appears to have a Brier score of 1 on his predictions.
Etymology and history aside, the important quote from Wikipedia is:
> it is administered and referred to along with the Nobel Prizes by the Nobel Foundation
So, for all intents and purposes it kinda is "the Nobel Prize in Economics". If they don't give a fuck about legitimacy of it, I don't see why they would care for the rest of "Nobel Prizes".
No, it really isn't "the Nobel Prize in Economics" which is why it has a different name, and the Nobel family want to get rid of it. It should be emphasized that it is a nonsense Nobel and economics a nonsense subject every time it is brought up.
The Nobel family has, understandably, been trying to stop the "Nobel" prize in economics for quite a while.
He was awarded the Nobel before anyone was aware of the replication crisis in his field.
Add to that list "grit" and "fixed vs. growth mindset" -- has any solid, interesting work in sociology or behavioral science been done recently?
What's BS about grit and a growth mindset?
I'm not the author of the original comment, but here are a few things I've found questioning the replicability of the "growth mindset" work:
* Summary of some criticism by researchers trying to replicate the original studies: https://www.buzzfeed.com/tomchivers/what-is-your-mindset?utm....
However, this BuzzFeed article has itself been critiqued for strawmanning "mindset theory" and not highlighting some of the meta-analyses that have provided evidence for the theory of "growth mindset". Source: https://www.thecut.com/2017/01/mindset-theory-a-popular-idea...
* It's worth noting that Carol Dweck herself has commented on how she believes her research is being inaccurately applied in schools: https://www.edweek.org/ew/articles/2015/09/23/carol-dweck-re...
* Here are two (pay-walled) meta-analyses done on "growth mindset" research: (1) https://www.scribd.com/document/326191990/InPress-BurnetteOB... (2) https://commons.lib.jmu.edu/diss201019/17/
As a footnote, I will say two quick things. First, I've found the theory of "growth vs. fixed mindset" helpful in challenging myself in areas I otherwise may not have. Second, on the other hand, I have worked full-time in a high school as a CS teacher, and we discussed growth mindset quite often at this school. Many students became "immune" to the idea and rolled their eyes at it, to the point of making it a meme around the school.
The effects of both were greatly overhyped by books and news articles when they first came out. Replications found much smaller effects, e.g., https://marginalrevolution.com/marginalrevolution/2018/03/gr...
Just to piggyback on this comment: from anecdotal experience as a high school teacher, I found the portion of the replication quoted on Marginal Revolution to be quite true:
"… as expected, average effects were small because many students are already doing well, do not have motivational issues, or are not in environments that encourage or support growth-mindset behaviors. When we take account of such factors, more noteworthy effects emerge. The improvements in the gateway outcome of 9th grade GPA were concentrated among adolescents who are at significant risk for compromised well-being and economic welfare: those with lower levels of prior achievement attending relatively lower achieving schools. The finding that an intervention can redirect this adolescent outcome in this sub-group, in under an hour, without training of teachers, and at scale (i.e. in a random sample of nation’s schools), represents a significant advance."
Personal experience lines up with the finding that lower-achieving students may benefit more from the "growth mindset" idea than others. For instance, I noticed that messaging I gave to students with a "fixed mindset" towards studying CS/math seemed to improve motivation, work ethic, and interest over the course of a semester.
This is a book that really, really needs a second edition.
I had to put it down at around 30%, though not for the same reason: I just found it badly written (too boring for me to keep going). However, I believe the information in it is quite compelling and useful; it might just need another style.
If you want an easier introduction, I suggest Michael Lewis's book The Undoing Project.
Just wanted to say thanks for doing the podcast! It has become my favourite podcast to listen to (and I listen to a lot of them!). The style of it is relaxing and yet mentally stimulating at the same time. I enjoy all the philosophical questions as well!
Thank you! To me, asking philosophical questions of world-class mathematicians, physicists, computer scientists, and engineers is eye-opening and inspiring. It's like taking a break to look up at the stars. I love the messy details of engineering, but every once in a while it feels great to pause and reflect on the big picture of our incredible, unlikely existence here on Earth.
As Oscar Wilde said: "We are all in the gutter, but some of us are looking at the stars."
I love the gutter and I love the stars!
Lex, you're doing an amazing job. The level of commitment and hard work you put in to make this podcast is incredible.
On the technical-depth criticism you're getting, I would say the episode with Ian Goodfellow had the optimal balance.
And don't take to heart the destructive criticism about bad attitude or tone etc. from some commenters. For someone introverted, it's already draining to put yourself in front of a camera...
Thank you sincerely for the kind words and for understanding the toughness of getting in front of the camera as an introvert. The criticism on top of that is TOUGH to take, but in moderation it helps long-term. With this HN thread, I told myself I'll meditate on critical advice, consider action items to take in order to improve, and then move forward letting the more personal destructive (as you say) criticism fade from memory. That last part is easier said than done, but that's life.
Lex’s podcast is my new favorite. I’m not a Computer Scientist, so I thoroughly enjoy hearing him ask Philosophical questions to these great minds and having them answer with the humble nature of mere mortals... which of course, they are. Keep up the good work Lex, I first heard you on Rogan’s podcast, you are bringing something really unique to the world and I am sure we will all be better for it.
Great podcast Lex!
I am intrigued about the discussion on explainable AI. How do you feel about the quality of the current XAI research? What do you think are the most important directions for that field? And what do you see it looking like in 3-5 years?
Lex, amazing work with your podcast.
If someone wants to dig deeper into the technical work of any of your guests, the internet is full of resources; but there's no way to find the personal insights one can get from the podcast.
I do really hope to see Robert Sapolsky and Douglas Hofstadter sooner or later.
I speculatively emailed Kahneman once about a blog I was working on. He emailed me back. Cool guy.
I am not sure if this is covered in the podcast, but from the title, I wonder whether in deep neural net research there are papers and implementations of a system 1 and 2 style model. One part of the neural net could be system 1, for fast decision making, and the other could be system 2, for more deliberate analysis. They could communicate over some shared medium.
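For what it's worth, the idea can be caricatured in a few lines of plain Python. This is purely an illustrative sketch of the commenter's suggestion, not any published architecture: the class name, the cache-as-shared-medium design, and the toy "deliberation" step (summing digits as a stand-in for expensive reasoning) are all invented here. A fast path answers instantly from learned associations; a slow path computes deliberately and writes its results into the shared medium, which the fast path reads from next time.

```python
# Toy sketch (all names hypothetical): a fast associative "system 1"
# and a slow deliberate "system 2" sharing a common memory.

class DualProcessAgent:
    def __init__(self):
        # Shared medium: system 2's worked-out answers, which system 1
        # gradually absorbs as fast stimulus-response associations.
        self.shared_memory = {}

    def system1(self, stimulus):
        # Fast path: a cheap lookup, loosely analogous to an amortized policy.
        return self.shared_memory.get(stimulus)

    def system2(self, stimulus):
        # Slow path: explicit computation. Summing the digits stands in
        # for any expensive, deliberate reasoning procedure.
        return sum(int(ch) for ch in stimulus if ch.isdigit())

    def respond(self, stimulus):
        fast_answer = self.system1(stimulus)
        if fast_answer is not None:
            return fast_answer, "system 1"
        # System 1 has no answer: engage system 2, then cache the result
        # so the fast path handles this stimulus next time.
        slow_answer = self.system2(stimulus)
        self.shared_memory[stimulus] = slow_answer
        return slow_answer, "system 2"

agent = DualProcessAgent()
print(agent.respond("3 + 4"))  # first encounter: deliberate, via system 2
print(agent.respond("3 + 4"))  # second encounter: instant recall, via system 1
```

In a real learned system both pathways would be neural networks and the "shared medium" would be something like a training signal or a memory module, but the control flow (try fast, fall back to slow, distill slow into fast) is the essence of the proposal.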
I've got your answer! He explicitly says in the interview that deep learning is really system 1 only. That was surprising, but it makes total sense: it's an unconscious, automatic, fast, effort-free response, which is exactly what system 1 is.
Note that both system 1 and 2 are trainable and able to perform complex tasks (he takes the example of a chess player for whom only strong moves come to mind, which is system 1 playing chess in that case; and system 2 is more of a "validation" for system 1's output in such a case).
He doesn't go into it but I think you could make the reverse argument, that system 2 is "checked" by system 1, when we "feel" that something, even though "correct", is just "not right" for instance. That kind of judgement over a thought or idea is nearly instant, it's very system-1 like, and keeps popping up in our thinking as we "judge" said thoughts and do some triage as we go along.
As for AI and system 2, the problem is that system 2 is conscious, deliberate, and aware of causality and meaning, and the last two are really hard problems for now. He mentions earlier ML models pre-DL (when they tried to do it the hard, symbolic way, IIRC), and indeed the question of whether current architectures can or cannot generalize up to system 2 is open. Yann LeCun apparently thinks they can (we just don't know if it's right around the corner or very, very far away); Lex (and most AI experts I've heard) think not, and that a fundamentally 'other' kind of architecture is required.
Would you consider using neural nets to solve integrals and differential equations as doing system 2 reasoning?
Or what tasks are in the domain of system 2?
Sorry, late reply, hope you get this!
I believe it's totally "system 1", and actually by design.
First of all, it's not a new kind of NN; it's more about applying another technique to a given problem, namely considering math syntax as just another kind of language:
> represent complex mathematical expressions as a kind of language and then treating solutions as a translation problem
(which might seem obvious, but I guess it took that much refinement to yield actual results)
Now, take these quotes, emphasis mine:
> Humans who are particularly good at symbolic math often rely on a kind of intuition. They have a sense of what the solution to a given problem should look like
> By training a model to detect patterns in symbolic equations, we believed that a neural network could piece together the clues that led to their solutions, roughly similar to a human’s intuition-based approach to complex problems.
Intuition, intuition-based approach: this is exactly what system 1 represents.
Also note the results:
> Our model demonstrated 99.7 percent accuracy when solving integration problems, and 94 percent and 81.2 percent accuracy, respectively, for first- and second-order differential equations.
One major difference between systems 1 and 2 is that system 1 is fuzzy, intuitive, not always exact, very analog; whereas system 2 is able to be correct, exact, and precise. Like the researchers themselves validating the 5,000 answers, you'd expect a "well trained" math intelligence to solve 100% (or close enough) of these problems; it may take time, but give yourself 20 years and you'll get there, no doubt. Whereas this narrow language-AI, with one hundred million training examples, still makes mistakes.
Very system 1 indeed.
Typically the two systems are called 'model-free' and 'model-based' learning.
Model-free learning (system 1) produces fast stimulus-response mappings. In ML, these mappings are called a policy. Most of reinforcement learning, including deep-learning-based RL, is model-free.
Model-based (system 2) is less widely used. In this case, an agent or system is trying to learn the dynamics of a system and use those to project or forecast into the future. This is really helpful for e.g. learning a control system. Being able to use a model of the world to make accurate predictions lets you plan.
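A minimal sketch of that contrast, under loud assumptions: the 5-state chain environment, the hyperparameters, and the hand-written dynamics are all invented for illustration (a real model-based agent would learn the dynamics from data rather than reading them off the environment). The model-free agent distills experience into a direct state-action value table; the model-based agent keeps a transition model and plans by simulated lookahead.

```python
import random

# Toy 5-state chain: moving right from state 3 reaches the goal state 4,
# and being at the goal yields reward 1.
N_STATES, GOAL = 5, 4
GAMMA = 0.9

def step(state, action):                      # action: +1 (right) or -1 (left)
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# --- Model-free ("system 1"): tabular Q-learning, a learned policy -------
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
alpha = 0.5
random.seed(0)
for _ in range(500):
    s = random.randrange(N_STATES)
    a = random.choice((-1, 1))
    s2, r = step(s, a)
    best_next = max(Q[(s2, -1)], Q[(s2, 1)])
    Q[(s, a)] += alpha * (r + GAMMA * best_next - Q[(s, a)])  # TD update

# The policy is a direct stimulus-response mapping: state -> action.
policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N_STATES)}

# --- Model-based ("system 2"): learned dynamics + explicit planning ------
# Here the "learned" model is just the true dynamics, for brevity.
model = {(s, a): step(s, a) for s in range(N_STATES) for a in (-1, 1)}

def plan(state, depth=3):
    """Pick the action whose simulated rollout earns the most reward."""
    if depth == 0:
        return 0.0, None
    best = (float("-inf"), None)
    for a in (-1, 1):
        s2, r = model[(state, a)]
        future, _ = plan(s2, depth - 1)
        best = max(best, (r + GAMMA * future, a))
    return best

print(policy[2])    # model-free: cached preference to move right (+1)
print(plan(2)[1])   # model-based: lookahead also chooses right (+1)
```

Both agents end up choosing the same action here, but they get there differently: the Q-table answers instantly from accumulated experience, while the planner forecasts futures at decision time, which is what makes model-based control useful for things like the control systems mentioned above.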
I believe the paper SlowFast Networks for Video Recognition goes somewhat in that direction. The architecture is split into a fast pathway, to capture motion information, and a slow pathway that captures spatial semantics. https://arxiv.org/pdf/1812.03982.pdf
Can't get enough of Lex interviews, absolutely brilliant!
Fridman's podcast has to be among the highest concentrations of high-calibre guests ever assembled in this area. Week after week it's another huge name.
Lex here. Thank you. I realize I sometimes miss opportunities for depth or insight with these brilliant folks, but I work hard to improve. The comments on this post, positive and critical, help. So again, thank you.
Thank you for what you do man! btw I'm curious, how do you convince (if at all) to come to your podcast to have a chat? And what's the process like from guest selection to actually recording the podcast to finally releasing it? Thanks once again!
I'll echo the comments here and just say thank you Lex for your calm, incisive, and enlightening interview style.
In an era where the quality of journalism and media has moved from objective analysis toward editorialized hysteria on all topics, the approach you take is especially refreshing.
Thank you. Truly. I hope to live up to this.
Listen to a few of Lex's podcasts (his own and his appearances on Joe Rogan) before you dismiss his interviewing style and/or tone. You may very well learn to appreciate it and his breadth of technical knowledge as much as the quality of his guests, which is probably the highest of any technical podcast so far.
Lex here. Thank you for this. Words of criticism can hurt pretty bad these days, so this kind of support really helps. Thank you.
If it helps, remember that paradoxically the better you're doing the blunter the criticism will be. If this were some fledgling little podcast run by a nobody, it would feel cruel to criticise too harshly. But this is the freakin' Lex Fridman AI podcast, the untouchable juggernaut at the top of its game. What could I say that could possibly hurt something (and someone) so successful?
Of course, I'm sure you don't feel untouchable, but maybe there is a certain cold comfort in knowing that inconsiderate criticism is a hallmark of becoming an institution. Keep up the good work, man.
For me the podcast is one of the few things that I really like. The philosophical discussions allow to get a glimpse of the mind of the interviewed person. Sometimes I'm quite shocked that some people seem so superficial - from a philosophical point of view - even when they belong to the best in their field.
Thank you for the kind words. I think it's important to think both about the mathematical/technical foundations of an idea and its place in the big picture of human civilization.
Lex, quit academia for now. You can always go back later. Carpe diem. You have a huge opportunity now, and if you don't go full time you'll never really know if you could have 'made it' as an AAA interviewer of household international fame.
Give yourself a time limit. You can always go back to academia. It's time you accept you've hit an inflection point, and now is when you decide if you will take the risk of pursuing geometric growth. Zeev
What a coincidence, I am currently reading that book and just watched Fridman's latest video.
Thank you Lex! After taking classes in college, I sort of drifted away from the world of AI. Thank god I stumbled upon your videos. You re-sparked my curiosity in AI with your podcasts. Your questions are so insightful, and the guests that you bring in are amazing.
The quality of the guests is amazing but the host is extremely off-putting and difficult to listen to.
I find that Lex often side-steps technical depth and wanders into the philosophical side of things. He has the opportunity to ask deep-diving questions, but he simply scratches the surface with nearly every guest. It's not entirely his fault; he's under time constraints, but I'd hope for more intellectual stimulation. That said, I do appreciate his efforts, and he seems like a genuinely nice fella.
Yeah I have an issue with that as well. He gets some great guests on the show and spends most of the time pushing a philosophical debate about AGI and futurism instead of letting the guests talk about their areas of expertise. I'd even argue that it's dangerous because it might give the casual listener an impression that ML/AI is much further ahead than it really is if all of these top minds are discussing AGI.
I'd much rather hear LeCun, Goodfellow, Schmidhuber and Bengio talk about what they're currently working on and where they think the field will go in the next year or two instead of their wild guesses about AGI. I guess the futurism crowd is a much larger audience, though.
Well, consider that this podcast series spun out of the AGI class that Lex ran. As such, it's hardly surprising that he puts a touch more focus on the AGI side of things. And for me, I like that. I can find videos of LeCun, Goodfellow, Schmidhuber, and Bengio talking about the low-level technical details of their work. I actually appreciate hearing them talk about AGI, as my personal interests heavily involve thinking about the connection(s) between the currently trendy AI stuff (DL, DRL, GANs, etc.) and what might eventually become AGI.
Lex here. This is a point I think about a lot. I hear conflicting advice from brilliant folks I really respect. Some say "go deep on the philosophy" and others "go deep on the technical details of the person's expertise." The latter is something that surprised me, and something I'll definitely do more of in the coming months. In general, one of the things the internet pleasantly surprised me with is that people like depth (even those outside the field). I obviously love the details, especially in ML, CS, math, physics, and psych.
Thanks for the kind words. I work hard on this thing, and hopefully will improve with time.
Podcasts aren't great for deep technical detail IMO, given the lack of graphics, much less equations. But also IMO, hearing some of the technical details, as opposed to what I might hear on NPR, is useful.
It's a difficult tradeoff, I know, whether for a podcast or even for a lecture topic. I went to a couple of your sessions this IAP, and I'll admit I found one fantastic (in part because it was directly relevant to some things I'm working on around AI privacy) and one not so much, because I'm more focused on practical applications than the mathematical underpinnings.
I don’t know. He could ask Jeremy Howard about the architecture of fast ai and we’d get some interesting answer that becomes dated and irrelevant very quickly. But when he asks what Jeremy’s favorite programming language is and Jeremy comes back with Microsoft flippin Access, you get a view into Jeremy’s commitment to making software accessible to the masses. Or when Elon pauses like Elon does and asks the AGI what’s outside the simulation, which simulation is he talking about? Is he asking the AI to describe its God or ours?
I don't know, I really like these forays into the philosophical.
Thank you. Exactly! I ask these questions in the hope of inspiring the rare gems of brilliance. Sometimes those come from deep technical questions and sometimes from silly philosophical questions. Both have potential. I often fail to strike the right balance, but hopefully less and less over time.
This is too harsh in my opinion, but I agree that he isn't really a good host. I don't find him off-putting at all (actually the opposite, he seems to be a nice guy), but I really dislike the questions he asks.
This clearly is a podcast for quite technical people. His guests, most of the time (like, almost always, except when he invites somebody because they were on JRE), are people notable for their technical contributions. They discuss some very technical stuff the guest is known for and is literally the best possible person to teach us about. Instead he skips technical questions almost completely and opts for "what's the meaning of life"?! Come on!
When the discussion happens to be pretty technical (mostly because the guest himself is more of a no-nonsense type of guy), it sometimes feels like some basic background necessary to understand the further explanation was skipped. I assume that it's my fault, since this is supposed to be common knowledge and the host doesn't want to interrupt the guests to clarify such nonsense. And later on I realize that Lex didn't understand that part either, but didn't make any attempt to clarify. Isn't that the point of an interview?..
I wouldn't want to offend him, but I often think "oh, such a waste!" listening to his podcasts. So much stuff is left unanswered.
So, yeah. Nice guy, but not so good an interviewer. And way too romantic.
Thanks for the comment and the kind words about me being nice. I'll try to live up to that.
On the technical depth point, I agree. A lot of folks tell me they love the "meaning of life" questions. I love both the technical and the philosophical. My hope is to more and more try to go deep technically with the ML, CS, math, physics folks on the topic of their expertise, and find productive points of passionate disagreement or insight. This isn't easy, and I fail often, but I'm working hard to improve.
Thank you for your work, I hope I didn't offend you and I really wish you luck at improving (for obvious reasons).
The problem with being way too philosophical is that it restricts discussion to sharing opinions, which is totally OK, but the problem with opinions is that every single person on the planet has an opinion, while it's much rarer to have actual knowledge. So it may be really interesting to hear somebody's opinion about something, but as far as learning goes, I don't really gain anything from it: the more abstract and complicated the question, the less difference from asking a random person on the street. But when you have somebody in front of you who has some knowledge that your audience (or you) doesn't have (be it technical, or the experience of making a wildly successful infotainment YouTube channel, or anything else), you can learn so much more from every single conversation.
If you want to get into philosophical questions, you should get some actual philosophers on the show, because right now it seems like most of your guests are just making up answers to your questions on the spot and it's not very insightful. At the very least, keep the questions more open-ended: ask them what the biggest roadblocks to AGI are and how we get over them, or whether they have some contrarian opinions about AGI.
Joe Rogan got so popular because he lets his guests drive the conversation and allows them to talk about themselves and whatever it is that they're interested in. He seems to really follow Dale Carnegie's advice from How to Win Friends and Influence People (https://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influen...).
Joe Rogan is not a good example, since he is unable to discuss complicated stuff (did you see the one with Bostrom? it was painful), and here guests are valuable for their technical knowledge. Do you want Lex to sit there half-stoned and ask every guest if they've tried DMT?
People watch JRE because it's fun and he has a lot of very influential people as guests nowadays. It doesn't mean that more interviewers should be like him, God forbid...
I'm not sure why it is so, but I love listening to Lex. Unlike many podcast hosts, the way he lets his guests talk is amazing. His questions are well-crafted. His voice is definitely soothing to listen to over a long commute!
Thank you. I try my best to ask good questions and then shut up ;-)
It's obvious how much thought you've put into your first powerful question with each guest, and at least this listener greatly appreciates your tone and the philosophical elements of your podcast. Please keep it up. You've got one of the most interesting podcasts going right now; right up there with Pivot, Rogan and The Daily.
Interesting, I find Lex to be one of the best podcast hosts I've listened to. I like his mix of philosophical and technical conversation. Most of his guests seem to genuinely enjoy and engage in the conversation.
Thank you for the kind words. Some people love the philosophy, some the technical details. I try to mix it up, and have fun with it. But it's good to know that people have more patience for technical depth than I realized. So I'll try to do more of it in the future.
Those are very strong words. Why?
The deadpan delivery and lack of personality, the arrogant attitude, the poor quality of the questions that he asks, the fact that he reads questions from a sheet in front of him instead of listening to his guests and having a real conversation with them, the way in which he comes across as disingenuous in most interactions... I could go on (and I know that a lot of this is subjective).
Lex here. I hear you. I'm working hard to improve. The parts about arrogance and disingenuousness I hope are not true, but I appreciate comments like this because they help me reflect on it, and see that there might be truth to it. Your words hurt but in the end are a gift, so thank you. I'll do my best to improve.
I wondered who'd call themselves AlanTuring on HN :-)
I think you come across as very genuine, not arrogant at all. And it shows that you do your best to reflect, so don't worry about that. However, while the philosopher (and sensationalist) in me wants to disagree: yes, be more technical! You have a unique audience and guests to cater to; I can't imagine the pressure.
That being said, it's your podcast, and it wouldn't be where it's at if it wasn't for your character and honest curiosity, and it's appreciated for exactly what it is. Cheers!
You should probably add that you're Lex in your HN profile.
I just did. I'm more of a lurker on HN, but I might as well be open. I get criticism sometimes, but it's probably good to just take it, accept it, and grow from it.
The deadpan delivery and lack of personality, the arrogant attitude,
Interesting. I don't see much in the way of arrogance when I watch these. What you call "deadpan delivery and lack of personality" I attribute to his being Russian, and just having a somewhat stereotypically Russian style of delivery.
Lex here. Yep. Russian. Underneath the cold stare and the suit is possibly some personality and a bit of humor. Then again it's like past or present life on Mars. Many astrobiologists believe it was at least once there, but no proof has been found yet.
Underneath the cold stare and the suit is possibly some personality and a bit of humor.
I have a close friend who is Russian, and I've found that, at least in his case, there definitely is a sense of humor there, despite the stereotypes. I just find his Russian humor to be very subtle and even after all these years I don't always pick up on when he's joking and when he isn't. From watching your interviews, I get a very similar vibe. I suspect you have a fine sense of humor, but that not all Americans will appreciate or recognize it easily.
How can you say he has both a lack of personality and an arrogant attitude?
I don't know how you could possibly listen to his interview with George Hotz (one of my favourites) and say that he has no personality.
I think you have assumed the worst, that his entire personality is an act, and this has coloured your view of everything else. Once you realize that he's being sincere, I think you will find it much more enjoyable.
For me it's the questions. He gets these incredible guests with deep understanding of their respective fields, and yet they keep being asked nonsensical questions they can't possibly give a meaningful answer to.
For example, almost every guest gets some variation of "Do you think one day we'll have superhuman AI?" Then the guest struggles to come up with some platitude, because there is nothing else to say. It's a waste of time.
He is self-aware too; he sometimes apologizes for asking, yet he keeps doing it anyway.
Yeah, in the interview with Kahneman he doesn't even really get into any of his work in cognitive psychology and behavioral economics, instead he wastes half the show on hypotheticals about AGI.
We're as close to AGI today as we were 10-30 years ago, as in really far away. There's nothing that any of his guests can add to that debate that hasn't been covered in countless scifi novels. I don't care what Vsauce or Bjarne Stroustrup have to say about it.
To be fair though, it's quite clear that AGI is the intended overarching theme of the podcast. I'd argue that regardless of how any individual interview goes, Lex is creating incredible value by getting all of these top-notch thinkers to join a shared long-term conversation on what this technology means to us all.
To each their own. I enjoy observing brilliant people a bit out of their element.
There's plenty of material available from/about Lex's guests' work; why not ask them to speculate about superhuman AI or whatever?
It's not a problem in and of itself. I think the main issue is that these questions are frequently asked by other podcast hosts already, and people here are hoping for something more in-depth on the topics the guests are most qualified to discuss.
What's wrong with using strong words?