Transcript
Claims
  • Unknown A
    Great job, Naval.
    (0:00:00)
  • Unknown B
    You rocked it. Maybe I should have said this on air, but that was literally the most fun podcast I've ever recorded.
    (0:00:01)
  • Unknown A
    Oh, that's on air. Cut that in.
    (0:00:06)
  • Unknown C
    Yeah, put it in the show. Put it in the show.
    (0:00:08)
  • Unknown B
    I had my theory on why you were number one, but now I have the realization, what's the actual reason?
    (0:00:09)
  • Unknown A
    You know us for a long time.
    (0:00:14)
  • Unknown D
    Yeah. What was your theory? What's the reality?
    (0:00:14)
  • Unknown B
My theory was that my problem with going on podcasts is usually the person I'm talking to is not that interesting. They're just asking the same questions and they're dialing it in and they're not that interested. It's not like we're having a peer level, actual conversation. So that's why I wanted to do Airchat and Clubhouse and things like that, because you can actually have a conversation.
    (0:00:16)
  • Unknown A
    I see.
    (0:00:33)
  • Unknown B
    Right. And what you guys have very uniquely, is four people, you know, of whom at least three are intelligent. I'm kidding. How could you say that?
    (0:00:33)
  • Unknown A
Sacks isn't here. Sacks isn't even hearing you say that.
    (0:00:43)
  • Unknown E
    Kapal.
    (0:00:48)
  • Unknown A
    That is so cold.
    (0:00:49)
  • Unknown B
    Right. These three are intelligent and all of you get along, and you can have an ongoing conversation. That's a very high hit rate. Normally in a podcast you only get one interesting person, and now you've got three, maybe four. Right. Okay. So that to me was, why is this guy.
    (0:00:51)
  • Unknown A
    Who are you talking to? He's number.
    (0:01:07)
  • Unknown B
We don't know. She'll remain mysterious forever. Of the four. Right. The problem is, if you get people together to talk, two is a good conversation. Three, possibly four is the max. That's why at a dinner table in a restaurant, you do a four-top, right? You don't do five or six, because then it splits into multiple conversations. So you had four people who were capable of talking. Right. That I thought was a secret. But there's another secret. The other secret is you guys are having fun. You're talking over each other. You're making fun of each other. You're actually having fun. So that's why I'm saying this is the most fun podcast I've ever been on. That's why you'll be successful.
    (0:01:09)
  • Unknown D
    Welcome back anytime.
    (0:01:43)
  • Unknown C
    Thank you.
    (0:01:44)
  • Unknown B
    Thank you.
    (0:01:45)
  • Unknown A
    Welcome back. Yes, absolutely.
    (0:01:45)
  • Unknown B
    Keep it fun, guys. Thanks for having me.
    (0:01:47)
  • Unknown A
188 and three smart guys. I can't even believe you'd say that about Sacks. He's not even here to defend himself.
    (0:01:48)
  • Unknown B
    Sorry, David.
    (0:01:57)
  • Unknown A
Let your winners ride. Rain Man, David Sacks.
    (0:02:00)
  • Unknown E
And instead, we open sourced it to the fans, and they've just gone crazy.
    (0:02:07)
  • Unknown A
    With it, Queen of Quinoa. All right, everybody, welcome back to the number one podcast in the world. We're really excited today. Back again, your sultan of science, David Friedberg. What do you got going on there, Friedberg? What's in the background? Everybody wants to know.
    (0:02:10)
  • Unknown D
    I used to play a lot of a game called SimEarth on my Macintosh LC way back in the day.
    (0:02:32)
  • Unknown A
    That tracks. Yeah, that tracks. And of course, with us again, your chairman.
    (0:02:38)
  • Unknown D
    What games did you play growing up? Jcal? Actually, I'm kind of curious. Did you ever play video games?
    (0:02:44)
  • Unknown A
Andrea, Allison, Susan. I mean, it was like, a lot of cute girls. I was out dating girls, Friedberg. I was not on my Apple II playing Civilization.
    (0:02:48)
  • Unknown D
    Let me find one of those pictures.
    (0:03:02)
  • Unknown A
    Whoa, whoa. Don't get me in trouble, man. 80s were good to me in Brooklyn.
    (0:03:04)
  • Unknown C
    Rejection, the video game.
    (0:03:08)
  • Unknown B
    Yes.
    (0:03:10)
  • Unknown A
You have three lives. Rejected. Rejected. It's a numbers game, Chamath. As you know. As you well know, it is a numbers game.
    (0:03:11)
  • Unknown D
    Yeah, Nick, go ahead, pull up, pull up. Rico Suave here.
    (0:03:19)
  • Unknown A
    Oh, no. What is this one?
    (0:03:22)
  • Unknown D
    Instead of playing video, here I am from the 80s.
    (0:03:23)
  • Unknown A
That's J. Cow. That's fat Jcal. Help out your uncle.
    (0:03:26)
  • Unknown D
    Yeah, here he is out slaying.
    (0:03:32)
  • Unknown A
    Help out your uncle with the Vin Jhouse.
    (0:03:33)
  • Unknown C
    You know what he was slaying in there. A snack.
    (0:03:36)
  • Unknown B
Yeah. You want pre-Ozempic and post-Ozempic, right?
    (0:03:38)
  • Unknown C
    Correct.
    (0:03:42)
  • Unknown A
And weightlifting beef jerky. Go find my Leonardo DiCaprio picture, please, and replace my fat Jcal picture with that. Thank you. Oh, God, I was fat, man. Plus 40 pounds is a lot heavier than I am.
    (0:03:43)
  • Unknown C
    It's no joke. It's no joke.
    (0:03:57)
  • Unknown A
40 pounds is a lot. There's so many great emo photos of me.
    (0:03:59)
  • Unknown C
    I'm proud of you.
    (0:04:03)
  • Unknown A
    Thank you, my man. Thank you. If you want a good photo, can.
    (0:04:04)
  • Unknown C
    You get through the intros, please, so we can start? Come on, quick.
    (0:04:08)
  • Unknown A
How you doing, brother? How you doing, Chairman Dictator? You're good? You get all right? Really excited today? Today, for the first time on the All-In podcast, the iron fist of AngelList. The Zen-like mage of the early stage. He has such a way with words. He's the Socrates of nerds. Please welcome my guy. Namaste. Naval. How you doing? The intros are back.
    (0:04:11)
  • Unknown B
    That is the best intro I've ever gotten. I didn't think. I don't. I didn't think you could do that. That was amazing. That's your superpower right there. Lock it in quick. Venture capital. Just do that.
    (0:04:39)
  • Unknown A
    Absolutely. That's actually, you know, What?
    (0:04:48)
  • Unknown B
    Interestingly, number one podcast in the world, like someone said.
    (0:04:50)
  • Unknown A
I mean, that's what I'm manifesting. It's getting close. We've been in the top 10. So, I mean, the weekends are good for All-In.
    (0:04:53)
  • Unknown B
    This one will hit number one. This one will go by.
    (0:05:00)
  • Unknown A
    I think it could. If you have some really great pithy insights, we might go right to the top.
    (0:05:02)
  • Unknown B
I just got to do a Sieg Heil and it'll go viral.
    (0:05:08)
  • Unknown A
    Are you going to send us your heart?
    (0:05:12)
  • Unknown B
    My heart goes out to you. My heart.
    (0:05:15)
  • Unknown A
I end it here, at the heart. I don't send it out, I keep it right here. I put both hands on the heart and I hold it nice and steady. I hold it in. It's sending out to you, but just not explicitly. All right, for those of you who don't know, Naval was an entrepreneur. He kicked a bit of ass, he got his ass kicked, and then he started Venture Hacks and he started emailing folks and saying, you know, 15, 20 years ago, maybe 15, here are some deals in Silicon Valley. He went around, he started writing 50k, 100k checks, he hit a bunch of home runs, and he turned Venture Hacks into AngelList. And then he has invested in a ton of great startups. Maybe give us some of the greatest hits there, Naval.
    (0:05:18)
  • Unknown B
Yeah. Twitter, Uber, Notion, a bunch of others. Postmates, Udemy. A lot of unicorns, a bunch upcoming. It's actually a lot of deals at this point. But honestly, I'm not necessarily proud of being an investor. Investor to me is a side job, it's a hobby. So I do startups.
    (0:06:03)
  • Unknown C
    How do you define yourself?
    (0:06:21)
  • Unknown B
I don't. I mean, I guess these days I would say more like building things. You know, every so-called career is an evolution, right? And all of you guys are independent and you kind of do what you're most interested in, right? That's the point of making money, so you can just do what you want. So these days I'm really into building and crafting products. So I built one recently called Airchat. It kind of didn't work. I'm still proud of what I built and got to work with an incredible team. And now I'm building a new product, and this time I'm going into hardware and I'm just building something that I really want. I'm not.
    (0:06:23)
  • Unknown D
    And you fund it all yourself, Naval?
    (0:06:57)
  • Unknown B
Partially. I bring investors along. Last time they got their money back. Previous times they've made money. Next time, hopefully they'll make a lot of money. It's good to bring your friends along.
    (0:06:59)
  • Unknown C
    I'll be honest, I love that you said, I love the product, but it didn't work. Not enough people say that.
    (0:07:09)
  • Unknown B
    Yeah, no, I built a product that I loved, that I was proud of, but it didn't catch fire and it was a social product, so it had to catch fire for it to work. So I found the team great homes, they all got paid. The investors that I brought in got their money back and I learned a ton, which I'm leveraging into the new thing. But the new thing is much harder. The new thing is hardware and software.
    (0:07:14)
  • Unknown C
    And what did you learn building in 2024 and 2025 that you didn't know? Maybe before then?
    (0:07:34)
  • Unknown B
    The main thing was actually just the craft, the craft of pixel by pixel, designing a software product and launching it. I guess the main thing I took away that was a learning was that I really enjoyed building products and that I wanted to build something even harder and something even more real. And I think, like a lot of us, I'm inspired by Elon and all the incredible work he's done. So I don't want to build things that are easy. I want to build things that are hard and interesting and I want to take on more technical risk and less market risk. This is the classic VC learning, right? Which is you want to build something that if people get it, if you can deliver it, you know, people will want it and it's just hard to build as opposed to you build it and you don't know if they want it. So that's a learning.
    (0:07:40)
  • Unknown A
    Airchat was a lot of fun. For those of you who don't know, it was kind of like a social media network where you could ask a question and then people could respond and it was like an audio based Twitter. Would you say that was the best way to describe it?
    (0:08:27)
  • Unknown B
Audio Twitter, asynchronous, AI transcripts and all kinds of AI to make it easier for you. Translation. Really good way of kind of trying to make podcasting-type conversations more accessible to everybody. Because honestly, one of the reasons I don't go on podcasts is I don't like being intermediated, so to speak, where you sit there and someone interviews you and then you go back and forth and you go through the same old things. I just want to talk to people. I want peer relationships, kind of like you guys have running here.
    (0:08:40)
  • Unknown C
Naval, what happened when you went through that phase? There was a period where it just seemed like something had gone on in your life and you just knew the answers. You were just so grounded. It's not to say that you're not grounded now, but you're less active posting and writing. But there was this period where I think all of us were like, all right, what does Naval think?
    (0:09:07)
  • Unknown B
    Oh, really? Oh, okay, that's news to me.
    (0:09:27)
  • Unknown C
I would say it would be like the late 2010s, the early 2020s. Jason, you can correct me if I'm getting the dates wrong, but it's in that moment where, like, these Naval-isms and this sort of philosophy really started to. I think people had a tremendous respect for how you were thinking about things. I'm just curious, like, what? Were you going through something in that moment? Or like, oh, yeah, yeah, yeah. That's.
    (0:09:30)
  • Unknown B
    No. Very insightful. Yeah.
    (0:09:49)
  • Unknown D
    20.
    (0:09:51)
  • Unknown B
So I've been on Twitter since 2007 because I was an early investor, but I never really tweeted. I didn't get featured. I had no audience. I was just doing the usual techie guy thing, talking to each other. And then I started AngelList in 2010. The original thing about matching investors to startups didn't scale. It was just an email list that exploded early on, but then just didn't scale. So we didn't have a business. And I was trying to figure out the business, and at the same time, I got a letter from the Securities and Exchange Commission saying, oh, you're acting as an unlicensed broker dealer. And I'm like, what? I'm not making any money. I'm just making intros. I'm not taking anything. It's just a public service. But even then, they were coming after me, and I'd raised a bunch of money from investors, so I was in a very high stress period of my life.
    (0:09:51)
  • Unknown B
Now, looking back, it's almost comical that I was stressed over it, but at the time, it all felt very real. The weight of everything was on my shoulders. Expectations, people, money, regulators. And I eventually went to D.C. and got the law changed to legalize what we do, which ironically enabled a whole bunch of other things like ICOs and incubators and demo days. But in that process, I was in a very high stress period of my life. And I just started tweeting whatever I was going through, whatever realizations I was having. It's only in stress that you sort of are forced to grow. And so whatever internal growth I was going through, I just started tweeting it, not thinking much of it. And it was a mix of. There are three things that are kind of always running through my head. One is, I love science.
    (0:10:34)
  • Unknown B
I'm an amateur. Love physics. Let's just leave it at that. I love reading a lot of philosophy and thinking deeply about it. And I like making money. Right. Truth, love, money. That's my joke on my Twitter bio. Those are the three things that I keep coming back to. And so I just started tweeting about all of them. And I think before that, the expectation was that someone like me should just be talking about money. Stay in your lane. And people have been playing it very safe. And so I think the combination of the three sort of caught people's attention, because every person thinks about everything. We don't just stay in our lane. In real life, we're dealing with our relationships. We're dealing with our relationship with the universe. We're dealing with what we know to be true and, you know, with science and how we make decisions and how we figure things out.
    (0:11:17)
  • Unknown B
    And we're also dealing with the practical, everyday, material things of how to deal with our spouses or girlfriends or wives or husbands and how to make money and how to deal with our children. So I'm just tweeting about everything. I just got interested in everything. I'm tweeting about it. And a lot of it, my best stuff was just notes to self. It's like, hey, don't forget this.
    (0:12:04)
  • Unknown C
How to Get Rich. Remember that one? How to Get Rich. That was, like, one of the first threads.
    (0:12:23)
  • Unknown A
    And that was a super banger. Viral super banger.
    (0:12:27)
  • Unknown B
Yeah, yeah, yeah. I think that is still the most viral thread ever on Twitter. I like timeless things. I like philosophy. I like things that still apply in the future. I like compound interest, if you will, in ideas. Obviously, recently X has become so addictive that we're all checking it every day. And Elon's built the perfect For You feed. He's built TikTok for nerds. And we're all on it. But normally I try to ignore the news. Obviously last year things got real. We all had to pay a lot of attention to the news. But I just like to tweet timeless things. I don't know. I mean, people pay attention. Sometimes they like what I write, sometimes they go nonlinear on me. But yeah, the How to Get Rich tweetstorm was a big one.
    (0:12:30)
  • Unknown C
Is it problematic when people now meet you, because of the hype versus the reality? It's discordant now, because if people absorb this content, they expect to see some quasi-deity, yeah, floating in the air. You know what I mean?
    (0:13:10)
  • Unknown A
    Yes.
    (0:13:25)
  • Unknown B
Yeah. Like many of you, I have stopped drinking, but I used to, like, have the occasional glass of wine. And there was a moment there where I went and met with a reporter from The Information, back when I used to meet with reporters. And she said, where are we going to meet? So I said, oh, let's meet at the Wine Merchant and we'll get a glass of wine. She's like, what, you drink? It was, like, a big deal for her.
    (0:13:25)
  • Unknown A
    I'm so disappointed.
    (0:13:45)
  • Unknown B
    I was like, I'm an entrepreneur. Most of them are alcoholics or psychedelics or.
    (0:13:47)
  • Unknown A
    Yeah, for sure.
    (0:13:51)
  • Unknown B
    Doing whatever it takes to manage hot tub.
    (0:13:52)
  • Unknown D
    Yeah. Right.
    (0:13:55)
  • Unknown B
Yeah. When they say I'm in therapy, you know what that's code for?
    (0:13:56)
  • Unknown A
    Yeah.
    (0:14:00)
  • Unknown B
    So, yes, it is highly distorting. Yeah. I'm almost reminded of that line in the Matrix where that agent is about to, like, shoot one of the Matrix characters and say, it's only human. Right. So that's kind of what I say to everybody. Like, only human.
    (0:14:02)
  • Unknown C
    Yeah, yeah, yeah.
    (0:14:15)
  • Unknown A
You recently did a podcast with Tim Ferriss on parenting. This was out there. I love this. And I bought the book from this guy.
    (0:14:16)
  • Unknown B
    Yeah.
    (0:14:25)
  • Unknown A
    Just give a brief overview of this philosophy of parenting.
    (0:14:26)
  • Unknown C
    Oh, I didn't listen to this. I have to write this down. Tell us.
    (0:14:30)
  • Unknown A
    You're going to love this. This spoke to me, but it was a little crazy.
    (0:14:33)
  • Unknown B
    Yeah. So I'm a big fan of David Deutsch. David Deutsch, I think, is basically the smartest living human. He's a scientist who.
    (0:14:37)
  • Unknown C
    He's very brutal.
    (0:14:42)
  • Unknown B
    Yeah. Quantum Computation. And he's written a couple of great books, but it's about the intersection of the greatest theories that we have today, the theories with the most reach, and those are epistemology, the theory of knowledge, evolution, quantum physics, and computation.
    (0:14:44)
  • Unknown A
This is The Beginning of Infinity guy. That's the book that you always reference. Yeah.
    (0:14:58)
  • Unknown B
Correct. Yes. The Fabric of Reality is another book. I've spent a fair bit of time with him, done some podcasts with him, hired and worked with people around him, and I'm just really impressed, because it's like the framework that's made me smarter. I feel like, because we're all fighting aging, our brains are getting slower, and we're always trying to have better ideas. So as you age, you should have wisdom. That's your substitute for the raw horsepower of intelligence going down. And so scientific wisdom I take from David. Not take, but I learn from David. And one of the things that he pioneered is called Taking Children Seriously. And it's this idea that you should take your children seriously, like adults. You should always give them the same freedom that you would give an adult. If you wouldn't speak that way with your spouse, if you wouldn't force your spouse to do something,
    (0:15:03)
  • Unknown B
don't force a child to do something. And it's only through the latent threat of physical violence, hey, I can control you, I can make you go to your room, I can take your dinner away or whatever, that you intimidate children. And it resonated with me because I grew up very, very free. My father wasn't around when I was young. My mother didn't have the bandwidth to watch us all the time. She had other things to do. And so I kind of was making my own decisions from an extremely young age. From the age of five, nobody was telling me what to do. And from the age of nine, I was telling everybody what to do. So I'm used to that. And I've been homeschooling my own kids. So the philosophy resonated. And I found this guy, Aaron Stupple, on Airchat, and he was an incredible expositor of the philosophy.
    (0:15:45)
  • Unknown B
He lives his life with it, 99% as extreme as one can go. So his kids can eat all the ice cream they want and all the Snickers bars they want. They can play on the iPad all they want. They don't have to go to school if they don't feel like it. They dress how they want. They don't have to do anything they don't want to do. Everything is a discussion, negotiation, explanation, just like you would with a roommate or an adult living in your house. And it's kind of insane and extreme, but I live my own home life in that arc, in that direction. And I'm a very free person. I don't have an office to go to. I try really not to maintain a calendar. If I can't remember it, I don't want to do it. I don't send my kids to school. I really try not to coerce them.
    (0:16:29)
  • Unknown B
    And so obviously, that's an extreme model. But I was.
    (0:17:09)
  • Unknown C
Sorry, sorry, sorry, hold on a second. So, yeah, your kids, if they were like, I want Haagen-Dazs, and it's 9:00pm, you're like, okay?
    (0:17:13)
  • Unknown B
Two nights ago, I did this. I ordered the Haagen-Dazs. It wasn't Haagen-Dazs. It was a different brand. I ordered it.
    (0:17:24)
  • Unknown C
    I'm just gonna go through a couple of examples.
    (0:17:29)
  • Unknown B
We do it, actually. Ice cream at 9pm and we all eat.
    (0:17:31)
  • Unknown D
    Yeah.
    (0:17:33)
  • Unknown C
So they're like, Dad, I want. And they're happy?
    (0:17:33)
  • Unknown B
    They're happy.
    (0:17:35)
  • Unknown C
    I want to be on my iPad, I'm playing Fortnite. Leave me alone. I'll go to sleep when I want.
    (0:17:36)
  • Unknown B
    You're like, okay, my oldest probably plays iPad nine hours a day.
    (0:17:40)
  • Unknown C
    Okay, so then your other kid pees in their pants because they're too lazy to walk to the bathroom.
    (0:17:45)
  • Unknown B
    They don't do that because they don't like peeing their pants.
    (0:17:51)
  • Unknown C
    No, I understand, but I'm just saying, like, there's a spectrum of all of these things, right?
    (0:17:52)
  • Unknown B
    Yeah.
    (0:17:56)
  • Unknown C
    And your point of view is 100% of it is allowed. And you have no judgments.
    (0:17:56)
  • Unknown B
No, that's not where I am. That's where Aaron is. My rules are a little different. My rules are they've got to do one hour of math or programming, plus two hours of reading, every single day. And the moment they've done that, they're free creatures. And everything else is a negotiation. We have to persuade them. It's a persuasion, I should say, not even a negotiation. And even the hour of math and two hours of reading, really, you get 15 to 30 minutes of math, maybe an hour if you're lucky, and you get half an hour to two hours of reading.
    (0:18:01)
  • Unknown C
And what do you think the long-term consequences of that are? And then also, what are the long-term consequences, let's say on health, if they're making decisions you know are just not good, like the ice cream thing at 9pm? How do you manage that in your mind?
    (0:18:30)
  • Unknown B
I think whatever age you're at, whatever part you're at in life, you're still always struggling with your own habits. I think all of us, for example, still eat food and feel guilty, or want to eat something that we shouldn't be eating. And we're still always evolving our diets, and kids are the same. So my oldest has already. He passed on the ice cream last time, and he said, I want to eat healthier. Because finally I managed to get through to him and persuade him that he should be healthier. My younger kids will eat it, but they'll eat a limited amount. My middle kid will sometimes eat some of the others'.
    (0:18:45)
  • Unknown C
So if they say something, you'll enable it, but then you'll guide. You'll be like, hey, listen, this is not the choice I would make, I don't think. But if you want it, I'll do it.
    (0:19:13)
  • Unknown B
Yeah, I'll try it. But you also have to be careful. You don't want to intimidate them, and you don't want to be so overbearing that then they just view Dad as, like, controlling.
    (0:19:21)
  • Unknown C
    I find this so fascinating. And so what do you think happens to these kids? Like, I'm sure you have a vision of what they'll be like when they're fully formed adults. Like, what is that vision?
    (0:19:29)
  • Unknown B
    I try not to. They're going to be who they're going to be. This is kind of how I grew up. I kind of did what I wanted. I would rather they have agency than turn out exactly the way I want. Because agency is the hardest thing, right? Having control over your own life, making your own decisions. I want them to be happy. I have a very happy household.
    (0:19:38)
  • Unknown C
    What is the Plato. What's Plato's goal? Eudaimonia, Right?
    (0:20:01)
  • Unknown B
    Like eudaimonia. Yeah, the happy Aristotle.
    (0:20:04)
  • Unknown C
    Or like the, the fulfillment. This concept. Is that what you want for them?
    (0:20:06)
  • Unknown B
    I don't really want anything for them. I just want them to be free and their best selves. God, I want them.
    (0:20:11)
  • Unknown A
Chamath was worrying about details. He's got, like, 17 kids now. I don't know if you know, but Chamath has got, like, a whole punch list of things. But I love this interview because the guy made a really interesting point, which was: they're going to have to make these decisions at some point. They're going to have to learn the pros and cons, the upside, the downside to all these things, eating, iPad. And the quicker you get them to have agency to make these decisions for themselves, with knowledge, to ask questions, the more secure they'll be. I found it a fascinating discussion. I like cause and effect, especially in teenagers. Now that I have a teenager, it's really good for them to learn. Hey, if you don't do your homework, you have a problem, and then you got to solve that problem. How are we going to solve that problem?
    (0:20:20)
  • Unknown A
    So I like to present it as what's your plan? Anytime they have a problem, 8 year old kids, 15 year old kids, I just say, what's your plan to solve this? And then I like to hear their plan and let me know if you want to brainstorm it. But I thought it was a very interesting, super interesting discussion.
    (0:21:05)
  • Unknown B
I would say overall, my kids are very happy, the household is very happy. Everybody gets along, everybody loves each other. Yeah, some of them are way ahead of their peers. Nobody's behind in anything that matters. Nobody seems unhealthy in any obvious way. No one has aberrant eating habits. I haven't even really found an aberrant behavior that's out of line. So it's all good. Self-correcting.
    (0:21:20)
  • Unknown D
    It's like a. I worry a lot about this, like, iPad situation. I see my kids on an iPad and it's almost like unless they're doing an interactive project, if they end up watching.
    (0:21:44)
  • Unknown B
    Says the guy who has a video game theme in the background that was interactive. Right. And who probably grew up playing video games nonstop and probably spends nine hours a day on his screen, just called a phone. So, yeah, yeah, it's the same thing, man.
    (0:21:57)
  • Unknown D
Well, I mean, I feel like watching is it. But do they watch shows?
    (0:22:10)
  • Unknown B
    There's a hypocrisy to picking up your phone and then saying to your kid, no, you can't use your iPad. I grew up playing video games nonstop and video games when I was older and I was an avid gamer until just a few years ago.
    (0:22:15)
  • Unknown D
    Well, no, I mean, I'm not criticizing the iPad. I was obviously on a computer since I was 4 years old, so I totally get it. And I think the question for me is like, but I didn't have the ability to play a 30 minute show and then play the next 30 minute show and the next 30 minute show and then sit there for two hours and just have a show playing the whole time. I was, you know, interacting on the computer and doing stuff and building stuff.
    (0:22:26)
  • Unknown B
    Yeah. Which was a little different for me.
    (0:22:51)
  • Unknown D
    From a use case perspective.
    (0:22:52)
  • Unknown B
We did use to control their YouTube access, although now we don't do that. The only thing I ask them is that they put on captions when they're watching YouTube. So it helps their reading. They learn to read.
    (0:22:54)
  • Unknown A
    That's a good tip. Yeah, I like that one.
    (0:23:06)
  • Unknown B
    I will say that one of my kids is really into YouTube. The other two are not. Like they just got over it. And to the extent that they use YouTube, it's mostly cause they're looking up videos on their favorite games. They want to know how to be better at a game.
    (0:23:07)
  • Unknown A
All right, let's keep moving through this docket. We have David Sacks with us here. So, David, give us your philosophy of parenting. Okay, next item on the docket. Let's go.
    (0:23:19)
  • Unknown E
    Let's talk about some real issues.
    (0:23:30)
  • Unknown D
    This is not a parenting show.
    (0:23:33)
  • Unknown E
    A parenting show?
    (0:23:35)
  • Unknown B
    Yeah.
    (0:23:36)
  • Unknown A
I asked David, what's your parenting philosophy? He said, oh, well, I set up their trust four years ago. So he's good. Trust is set up, everything's good.
    (0:23:36)
  • Unknown C
    Fox G R A T. Check.
    (0:23:45)
  • Unknown A
You're all set, guys. Let me know how it works out. All right. Speaking of working out, we've got a vice president who isn't cuckoo for Cocoa Puffs and who actually understands what AI is. JD Vance gave a great speech. I watched it myself. He talked about AI in Paris. This was on Tuesday at the AI Action Summit, whatever that is. And he gave a 15-minute banger of a speech. He talked about overregulating AI and America's intention to dominate this. And we happen to have with us, Naval, the czar, the czar of AI. So before I go into all the details about the speech, I don't want to steal your thunder, Sacks. This speech had a lot of verbiage, a lot of ideas that I've heard before that maybe we've all talked about. Maybe tell us a little bit about how this all came together and how proud you are.
    (0:23:50)
  • Unknown A
I mean, gosh, having a vice president who understands AI is just. It's mind-blowing. He could speak on a topic that's topical, credibly. This was an awesome moment for America, I think.
    (0:24:42)
  • Unknown E
What are you implying there, Jcal?
    (0:24:55)
  • Unknown A
    I'm implying you might have workshopped it with him.
    (0:24:57)
  • Unknown E
    No.
    (0:24:59)
  • Unknown A
    Or that he's smart. Both of those things.
    (0:25:00)
  • Unknown E
The Vice President wrote the speech, or at least directed all of it. So the ideas came from him. I'm not going to take any credit whatsoever for this.
    (0:25:02)
  • Unknown A
    Okay, well, it was on point. Maybe you could talk about that.
    (0:25:10)
  • Unknown E
    Yes, I agree. It was on point. I think it was a very well crafted and well delivered speech.
    (0:25:12)
  • Unknown A
He made four main points about the Trump administration's approach to AI. He's going to ensure, this is point one, that American AI continues to be the gold standard. Fantastic. Check. Two, he says that the administration understands that excessive regulation could kill AI just as it's taking off. And he did this in front of all the EU elites who love regulation, did it on their home court. And then he said, number three, AI must remain free from ideological bias, as we've talked about here on this program. And then number four, the White House, he said, will maintain a pro-worker growth path for AI so that it can be a potent tool for job creation in the US. So what are your thoughts on the four major bullet points in his speech here in Paris?
    (0:25:16)
  • Unknown E
Well, I think that the vice president, you knew he was going to deliver an important speech as soon as he got up there and said, I'm here to talk about not AI safety, but AI opportunity. To understand what a bracing statement that was, and really almost like a shot across the bow, you have to understand the history and context of these events. For the last couple of years, the last couple of these events have been exclusively focused on AI safety. The last in-person event was in the UK at Bletchley Park, and the whole conference was devoted to AI safety. Similarly, the European AI regulation obviously is completely preoccupied with safety and trying to regulate away safety risks before they happen. Similarly, you had the Biden EO, which was based around safety, and then you have just the whole media coverage around AI, which is preoccupied with all the risks from AI.
    (0:26:03)
  • Unknown E
So to have the Vice President get up there and say right off the bat that there are other things to talk about with respect to AI besides safety risks, that actually there are huge opportunities. That was a breath of fresh air and, like I said, kind of a shot across the bow. And yeah, you could almost see some of the Eurocrats, they needed their fainting couches after that. Trudeau looks like his dog just died. So I think that was just a really important statement right off the bat to set the context for the speech, which is, AI is a huge opportunity for all of us, because really that point just has not been made enough. And it's true there are risks, but when you look at the media coverage and when you look at the dialogue that the regulators have had around this, they never talk about the opportunities.
    (0:26:57)
  • Unknown E
    It's always just around the risk. So I think that was a very important corrective. And then, like you said, he went on to say that the United States has to win this AI race. We want to be the gold standard, we want to dominate.
    (0:27:45)
  • Unknown A
    That was my favorite part.
    (0:27:57)
  • Unknown E
Yeah, and by the way, that language about dominating AI and winning the global race, that is in President Trump's executive order from week one. So this is very much elaborating on the official policy of this administration. And the Vice President then went on to specify how we would do that. Right. We have to win some of these key building block technologies. We want to win in chips, we want to win in AI models, we want to win in applications. He said we need to build, we need to unlock energy for these companies, and then most of all, we just need to be supportive towards them as opposed to regulating them to death. And he had a lot to say about the risk of overregulation, how often it's big companies that want regulation. He warned about regulatory capture, which our friend Bill Gurley would like.
    (0:27:58)
  • Unknown E
And he said that. So basically, having less regulation can actually be more fair, can create a more level playing field for small companies as well as big companies. And then he said to the Europeans that we want you to be partners with us, we want to lead the world, but we want you to be our partners and benefit from this technology that we're going to take the lead in creating. But you also have to be a good partner to us. And then he specifically called out the overregulation that the Europeans have been engaged in. He mentioned the Digital Services Act, which has acted as, like, a speed trap for American companies. It's American companies who've been overregulated and fined by these European regulations, because the truth of the matter is that it's American technology companies that are winning the race. And so when Europe passes these onerous regulations, they fall most of all on American companies.
    (0:28:43)
  • Unknown E
And he's basically saying we need you to rebalance and correct this, because it's not fair and it's not smart policy and it's not going to help us collectively win this AI race. And that kind of brings me to the last point, which is, I don't think he mentioned China by name, but clearly he talked about adversarial countries who are using AI to control their populations, to engage in censorship and thought control. And he basically painted a picture where it's like, yeah, you could go work with them, or you could work with us. And we have hundreds of years of shared history together. We believe in things like free speech, hopefully, and we want you to work with us. But if you are going to work with us, then you have to cooperate and we have to create a reasonable regulatory regime.
    (0:29:35)
  • Unknown A
Naval, did you see the speech? And your thoughts just generally on J.D. Vance and having somebody like this representing us and wanting to win?
    (0:30:19)
  • Unknown B
American exceptionalism. Very surprising, very impressive. I thought he was polite, optimistic and just very forward looking. It's what you would expect an entrepreneur or a smart investor to say. So I was very impressed. I think the idea that America should win, great. The idea that we should not regulate, I also agree with. I'm not an AI doomer. I don't think AI is going to end the world. That's a separate conversation. But there's this religion that comes along in many faces, which is that, oh, climate change is going to end the world. AI is going to end the world. Asteroids are going to end the world. COVID-19 is going to end the world. And it just has a way of fixating your attention. It captures everybody's attention at once. That's a very seductive thing. And I think in the case of AI, it's really been overplayed by incentive bias, motivated reasoning by the companies who are ahead, and they want to pull up the ladder behind them.
    (0:30:29)
  • Unknown B
I think they genuinely believe it. I think they genuinely believe that there are safety risks, but I think they're motivated to believe in those safety risks. And then they pass that along. But it's kind of a weird position, because they have to say, oh, it's so dangerous that you shouldn't just let open source go at it and you should let just a few of us work with you on it. But it's not so dangerous that a private company can't own the whole thing. Because if it was truly the Manhattan Project, if they were building nuclear weapons, you wouldn't want one company to own that. Sam Altman famously said that AI will capture the light cone of all future value. In other words, like, all value ever created at the speed of light from here will be captured by AI. So if that's true, then I think open source AI really matters, and little tech AI really matters.
    (0:31:17)
  • Unknown B
The problem is that the nature of training these models is highly centralized. They benefit from supercomputers, clustered compute. So it's not clear how any decentralized model can compete. So to me, the real issue boils down to: how do you push AI forward while not having just a very small number of players control the entire thing? And we thought we had that solution with the original OpenAI, which was a nonprofit and was supposed to do it for humanity. But now, because they want to incentivize the team and they want to raise money, they have to privatize at least a part of it, although it's not clear to me why they need to privatize the whole thing. Like, why do you need to buy out the nonprofit portion? You could leave a nonprofit portion and you could have the private portion for the incentives. But I think that the real challenge is how do you keep AI from naturally centralizing, because all the economics and technology underneath are centralizing in nature.
    (0:31:59)
  • Unknown B
If you really think you're going to create God, do you want to put God on a leash, with one entity controlling God? That, to me, is the real fear. I'm not scared of AI. I'm scared of what a very small number of people who control AI do to the rest of us for our own good. Because that's how it always works.
    (0:32:50)
  • Unknown A
So well said. Probably should go with the Greek model, having many gods and heroes as well. Friedberg, you heard the J.D. Vance speech, I assume. What are your thoughts on overregulation and, maybe to Naval's point, one person owning this versus open source?
    (0:33:07)
  • Unknown D
I think that there's this kind of big division in society right now between what I would call techno-optimism and techno-pessimism. People sort of fall into one of those two camps, generally speaking. Techno-optimists, I would say, are folks that believe that accelerating outcomes with AI, with automation, with bioengineering, manufacturing, semiconductors, quantum computing, nuclear energy, et cetera, will usher in this era of abundance by creating leverage, which is what technology gives us. Technology will make things cheaper, and it will be deflationary, and it will give everyone more, so it creates abundance. The challenge is that people who already have a lot worry more about the exposure to the downside than they desire the upside. And so the techno-pessimists, generally, like the EU and large parts, frankly, of the United States, are worried about the loss of X, the loss of jobs, the loss of this, the loss of that.
    (0:33:23)
  • Unknown D
Whereas countries like China and India are more excited about the opportunity to create wealth, the opportunity to create leverage, the opportunity to create abundance for their people. You know, GDP per capita in the EU, $60,000 a year. GDP per capita in the United States, like 82,000. But GDP per capita in India is 2,500 and China is 12,600. There's a greater incentive in those countries to manifest upside than there is for the United States and the EU, who are more worried about manifesting downside. And so it is a very difficult kind of social battle that's underway. I do think, like, over time, those governments and those countries and those social systems that embrace these technologies are going to become more capitalistic, and they're going to require less government control and intervention in job creation, the economy, payments to people and so on. And the countries that are more techno-pessimistic are unfortunately going to find themselves asking for greater government control.
    (0:34:27)
  • Unknown D
Government intervention in markets, governments creating jobs, government making payments to people, governments effectively running the economy. My personal view, obviously, is that I'm a very strong advocate for technology acceleration, because I think in nearly every case in human history, when a new technology has emerged, we've largely found ourselves assuming that the technology works in the framework of today or of yesteryear. The automobile came along, and no one envisioned that everyone in the United States would own an automobile, and therefore you would need to create all of these new industries like mechanics and car dealerships, roads, all the people servicing and building roads, and all the other industry that emerged, motels. And it's very hard for us to sit here today and say, okay, AI is going to destroy jobs, what's it going to create, and be right? I think we're very likely going to be wrong.
    (0:35:27)
  • Unknown D
Whatever estimations we give. The area that I think is most underestimated is large technical projects that seem technically infeasible today that AI can unlock, for example, habitation in the oceans. It's very difficult for us to envision creating cities underwater and creating cities in the oceans, or creating cities on the moon or creating cities on Mars or finding new places to live. Those are, like. Technically, people might argue, oh, that sounds stupid. I don't want to go do that. But at the end of the day, like, human civilization will drive us to want to do that. But those technically are very hard to pull off today. But AI can unlock a new set of industries to enable those transitions. So I think we really get it wrong when we try and assume the technology as a transplant for last year or last century, and then we kind of become techno-pessimists because we're worried about losing what we have.
    (0:36:17)
  • Unknown A
Are you a techno-pessimist? Are you an optimist? Because you bring up the downside an awful lot here on the program. But you are working every day in a very optimistic way to breed, you know, better strawberries and potatoes for folks. So you're a little bit of a.
    (0:37:03)
  • Unknown D
    No, I have no techno pessimism whatsoever. I try and point out why the other side is acting the way they are.
    (0:37:17)
  • Unknown A
    Okay. Putting it in full context.
    (0:37:22)
  • Unknown D
And what I'm trying to highlight is, I think that that framework is wrong. I think that that framework of trying to transplant new technology to the old way of things operating is the wrong way to think about it. And because of this manifestation of worrying about downside, it creates this fear that creates regulation like we see in the EU. And as a result, China's GDP will scale while the EU's will stagnate. If that's where they go, that's my assessment or my opinion on what will happen.
    (0:37:24)
  • Unknown A
Chamath, you want to wrap this up for us? What are your thoughts on JD?
    (0:37:49)
  • Unknown C
    I'll give you two.
    (0:37:52)
  • Unknown A
    Okay.
    (0:37:54)
  • Unknown C
The first is, I would say this is a really interesting moment where I would call this the tale of two vice presidents. Very early in the Biden administration, Kamala was dispatched on an equally important topic at that time, which was illegal immigration. And she went to Mexico and Guatemala. And so you actually have a really interesting A/B test here. You have both vice presidents dealing with what were, in that moment, incredibly important issues. And I think that JD was focused, he was precise, he was ambitious. And even the part of the press that was very supportive of Kamala couldn't find a lot of very positive things to say about her. And the feedback was, it was meandering, she was ducking questions, she didn't answer the questions that she was asked very well. And it's so interesting because it's a bit of a microcosm then to what happened over these next four years and her campaign, quite honestly, which you could have taken that window of that feedback.
    (0:37:54)
  • Unknown C
And unfortunately for her, it just continued to be very consistent. So that was one observation I had, because I heard him give the speech, I heard her, and I had this kind of moment where I was like, wow, two totally different people. The second is on the substance of what JD said. I said this on Tucker, and I'll just simplify all of this into a very basic framework, which is: if you want a country to thrive, it needs to have economic supremacy and it needs to have military supremacy. In the absence of those two things, societies crumble. And the only thing that underpins those two things is technological supremacy. And we see this today. So on Thursday, what happened with Microsoft? They had a $24 billion contract with the United States Army to deliver some whiz-bang thing, and they realized that they couldn't deliver it.
    (0:38:58)
  • Unknown C
And so what did they do? They went to Anduril. Now, why did they go to Anduril? Because Anduril has the technological supremacy to actually execute. A few weeks ago, we saw some attempts at technological supremacy from the Chinese with respect to DeepSeek. So I think that this is a very simple existential battle. Those who can harness and govern the things that are technologically superior will win, and it will drive economic vibrancy and military supremacy, which then creates safe, strong societies. That's it. So from that perspective, JD nailed it. He saw the forest from the trees. He said exactly what I think needed to be said and put folks on notice that you're either on the ship or you're off the ship. And I think that that was really good.
    (0:39:54)
  • Unknown A
Yeah. And there was, like, a little secondary conversation that emerged, Sacks, that I would love to engage you with, if you're willing, which is this civil war, quote unquote, between maybe MAGA 1.0 and MAGA 2.0, techies in the MAGA party like ourselves, and maybe the core MAGA folks. We can pull up the tweet here, in JD's own words. And he's been engaging people in his own words. It's very clear that he's writing these tweets. A distinct difference between other politicians and this administration. And they just tell you what they think. Here it is: I'll try and write something to address this in detail, says JD Vance's tweet. But I think this civil war is overstated though. Yes, there are some real divergences between the populists, who I would describe as MAGA, and the techies. But briefly, in general I dislike substituting American labor for cheap labor.
    (0:40:42)
  • Unknown A
My views on immigration and offshoring flow from that. I like growth and productivity gains, and this informs my view on tech and regulation. When it comes to AI specifically, the risks are, number one, overstated, to your point, Naval, or, two, difficult to avoid. One of my many real concerns, for instance, is about consumer fraud. That's a valid reason to worry about safety. But the other problem is much worse: if a peer nation is six months ahead of the US on AI. Again, I'll try and say more. And this is JD going right at, I think, one of the more controversial topics, Sacks, that the administration is dealing with and has dealt with when it comes to immigration and tech, because these two things are dovetailing with each other. If we lose millions of driver jobs, which we will in the next 10 years, just like we lost millions of cashier jobs, well, that's going to impact how our nation and many of the voters look at the border and immigration.
    (0:41:40)
  • Unknown A
We might not be able to let as many people immigrate here if we're losing millions of jobs to AI and self-driving cars. What are your thoughts on him engaging this directly, Sacks?
    (0:42:39)
  • Unknown E
    Well, the first point he's making there is about wage pressure, which is when you throw open our borders or you throw open American markets to products that can be made in foreign countries by much cheaper labor that's not held to the same standards, the same minimum wage or the same union rules or the same safety standards that American labor is and has a huge cost advantage, then you're creating wage pressure for American workers. And he's opposed to that. And I think that is an important point because I think the way that the media or neoliberals like to portray this argument is that somehow MAGA's resistance to unlimited immigration is somehow based on xenophobia or something like that. No, it's based on bread and butter kitchen table issues, which is if you have this ridiculous open border policy, it's inevitably going to create a lot of wage pressure for people at the bottom of the pyramid.
    (0:42:49)
  • Unknown E
So I think JD is making that argument. But, and this is point two, he's saying, I'm not against productivity growth. So technology is good, because it enables all of our workers to improve their productivity, and that should result in better wages, because workers can produce more. The value of their labor goes up if they have more tools to be productive. So there's no contradiction there. And I think he's explaining why there isn't a contradiction. A point I would add, he doesn't make this point in that tweet, but I would add, is that one of the problems that we've had over the last, I don't know, 30 years is that we have had tremendous productivity growth in the U.S. but labor has not been able to capture it. All that benefit has basically gone to capital or to companies. And I think a big part of the reason why is because we've had this largely unrestricted immigration policy.
    (0:43:40)
  • Unknown E
    So I think if you were to tamp down on immigration, if you were to stop the illegal immigration, then labor might be able to capture more of the benefits of productivity growth. And that would be a good thing. It'd be a more equitable distribution of the gains from productivity and from technology. And that I think would help tamp down this growing conflict that you see between technologists and the rest of the country, or certainly the heartland of the country.
    (0:44:28)
  • Unknown A
    Naval this is. Okay. You want to add anything else?
    (0:44:57)
  • Unknown C
    David?
    (0:44:59)
  • Unknown A
    Sorry.
    (0:45:00)
  • Unknown E
Well, I think just the final point he makes in that tweet is that he talks about how we live in a world in which there are other countries that are competitive. And specifically, he doesn't mention China, but he says we have a peer competitor, and it's going to be a much worse world if they end up being six months ahead of us on AI rather than six months behind. That is a really important point to keep in mind. I think that the whole Paris AI summit took place against the backdrop of this recognition, because just a few weeks ago we had DeepSeek, and it's really clear that China is not a year behind us. They're hot on our heels, only maybe months behind us. And so if we hobble ourselves with unnecessary regulations, if we make it more difficult for our AI companies to compete, that doesn't mean that China is going to follow suit and copy us.
    (0:45:00)
  • Unknown E
    They're going to take advantage of that fact and they're going to win.
    (0:45:45)
  • Unknown A
All right, Naval, this seems to be one of the main issues of our time. Four of the five people on this podcast right now are immigrants. So we have this amazing tradition in America. This is a country built by immigrants, for immigrants. Do you think that should change now, in the face of job destruction? Which I know you've been tracking self-driving pretty acutely, we both have an interest there, I think, over the years. You know, what's the solution here? If we're going to see a bunch of job displacement, which will happen for certain jobs, we all kind of know that, should we shut the border and not let the next Naval, Chamath, Sacks, and Friedberg into the country?
    (0:45:47)
  • Unknown B
Well, let me declare my biases up front. I'm a first generation immigrant. I moved here when I was nine years old, rather my parents did. And I'm a naturalized citizen. So obviously I'm in favor of some level of immigration. That said, I'm assimilated. I consider myself an American first and foremost. I bleed red, white and blue. I believe in the Bill of Rights and the Constitution, First and Second and Fourth and all the proper amendments. I get up there every July 4th and I deliberately defend the Second Amendment on Twitter, at which point half my followers go bananas, because I'm not supposed to. I'm supposed to be a good immigrant, right, and carry the usual set of coherent leftist policies, globalist policies. So I think that legal, high skill immigration with room and time for assimilation makes sense. You want to have a brain drain of the best and brightest coming to the freest country in the world to build technology and to help civilization move forward.
    (0:46:28)
  • Unknown B
And as Chamath was saying, economic power and military power is downstream of technology. In fact, even culture is downstream of technology. Look at what the birth control pill did, for example, to culture, or what the automobile did to culture, or what radio and television did to culture, and then the Internet. So technology drives everything. And if you look at wealth, wealth is a set of physical transformations that you can effect. And that's a combination of capital and knowledge. And the bigger input to that is knowledge. And so the US has become the home of knowledge creation thanks to bringing in the best and brightest. You could even argue DeepSeek. Part of the reason why we lost that is because a bunch of those kids, they studied in the US, but then we sent them back home.
    (0:47:29)
  • Unknown C
    Right.
    (0:48:08)
  • Unknown B
    So I think you absolutely.
    (0:48:08)
  • Unknown A
    Is that actually accurate that they were.
    (0:48:10)
  • Unknown C
    Yeah, yeah, yeah. Some.
    (0:48:12)
  • Unknown B
    A few of them.
    (0:48:13)
  • Unknown A
    Really? Oh my God, that's like exhibit A. Wow, I didn't know that.
    (0:48:13)
  • Unknown B
    So I think you absolutely have to split skilled, assimilated immigration, which is a small set, and it has to be both. They have to both be skilled and they have to become Americans. That oath is not meaningless. Right. It has to mean something. So skilled, assimilated immigration, you have to separate that from just open borders. Whoever can wander in, just come on in. That latter part makes no sense.
    (0:48:17)
  • Unknown E
    If the Biden administration had only been letting in people with 150 IQs, we wouldn't have this debate right now.
    (0:48:38)
  • Unknown B
    Absolutely.
    (0:48:44)
  • Unknown E
If they had done that, Kamala would have won. The reason why we're having this debate is because they just opened the border and let millions and millions of people in.
    (0:48:45)
  • Unknown B
It was to their advantage to conflate legal and illegal immigration. So every time you'd be like, well, we can't just open the borders, they'd say, well, what about Elon? What about this? And they would just parade out examples.
    (0:48:50)
  • Unknown E
If they were just letting in the Elons and the Jensens and the Freebergs, we wouldn't be having the same conversation today.
    (0:49:00)
  • Unknown C
The correlation between open borders and wage suppression is irrefutable. We know that data. And I think that the Democrats, for whatever logic, committed an incredible error in basically undermining their core cohort. I want to go back to what you said, because I think it's super important. There is a new political calculus on the field, and I agree with you. I think that the three cohorts of the future are the asset-light working and middle class, that's cohort number one, there are probably 100 to 150 million of those folks; then there are patriotic business owners; and then there are leaders in innovation. Those are the three. And I think that what MAGA gets right is they found the middle ground that intersects those three cohorts of people. And so every time you see this sort of left-versus-right dichotomy, it's totally miscast and it sounds discordant to so many of us, because that's not how any of us identify.
    (0:49:07)
  • Unknown C
And I think that that's a very important observation, because the policies that we adopt will need to reflect those three cohorts. What is the common ground amongst those three? And on that point, Naval is right. There's not a lot that those three would say is wrong with a very targeted form of extremely useful legal immigration of very, very, very smart people who agree to assimilate and be a part of America. I mean, I'm so glad you said it the way you said it. I remember growing up where my parents would try to pretend that they were in Sri Lanka. And sometimes I would get so frustrated. I'm like, if you want to be in Sri Lanka, go back to Sri Lanka. I want to be Canadian, because it was easier for me to make friends. It was easier for me to have a life. I was trying my best.
    (0:50:15)
  • Unknown C
I wanted to be Canadian. And then when I moved to the United States 25 years ago, I wanted to be American. And I feel that I'm American now, and I'm proud to be an American. And I think that's what you want. You want people that embrace it. It doesn't mean that we can't dress up in a shalwar kameez every now and then. But the point is, what do you believe and where is your loyalty?
    (0:51:00)
  • Unknown A
Freeberg, we used to have this concept of a melting pot, of assimilation, and that was a good thing. Then it became cultural appropriation. We kind of made a right turn here. Where do you stand on this? Recruiting the best and brightest and forcing them to assimilate, making sure that they're down.
    (0:51:21)
  • Unknown C
Jason, find the people that care to be here.
    (0:51:40)
  • Unknown A
    Yeah. Let me restate that.
    (0:51:43)
  • Unknown E
    I reject the premise of this whole conversation.
    (0:51:45)
  • Unknown A
    Wait, wait, hold on.
    (0:51:48)
  • Unknown E
    Look, I'm trying to. Look, I'm a first generation American who moved here when I was five and became a citizen when I was 10. And yes, I'm fully American and that's the only country I have any loyalty to. But the premise that I reject here is that somehow an AI conversation leads to an immigration conversation because millions of jobs are going to be lost. We don't know that that's also true.
    (0:51:49)
  • Unknown C
    I agree.
    (0:52:11)
  • Unknown E
    You're making a huge assumption.
    (0:52:11)
  • Unknown B
    I completely agree.
    (0:52:13)
  • Unknown E
Buying into the doomerism that AI is going to wipe out millions of jobs, that is not in evidence. I think it's going to create.
    (0:52:14)
  • Unknown D
    I think it's going to create more.
    (0:52:19)
  • Unknown E
jobs than any of us are expecting. Have any jobs been lost by AI? Let's be real. We've had AI for two and a half years and I think it's great. But so far it's a better search engine and it helps high school kids cheat on their essays.
    (0:52:20)
  • Unknown A
I mean, you don't believe that self-driving is coming? Hold on a second. Sacks, you don't believe that millions.
    (0:52:32)
  • Unknown B
But hold on, those driver jobs weren't even there 10 years ago. Uber came along and created all these driver jobs. DoorDash created all these driver jobs.
    (0:52:38)
  • Unknown D
    So what?
    (0:52:45)
  • Unknown B
That's what technology does. Yes, technology destroys jobs, but it replaces them with opportunities that are even better. And then either you can go capture that opportunity yourself, or an entrepreneur will come along and create something that allows you to capture those opportunities. AI is a productivity tool. It increases the productivity of a worker. It allows them to do more creative work and less repetitive work. As such, it makes them more valuable. Yes, there is some retraining involved, but not a lot. These are natural language computers. You can talk to them in plain English and they talk back to you in plain English. But I think David is absolutely right. I think we will see job creation by AI that will be as fast or faster than job destruction. You saw this even with the Internet. Like, YouTube came along; look at all these YouTube streamers and influencers. That didn't used to be a job.
    (0:52:46)
  • Unknown B
New jobs, really opportunities. Because "job" is the wrong word. "Job" implies someone else has to give it to me, and it's sort of like they're handed out, a zero-sum game. Forget all that. It's opportunities. After Covid, look at how many people are making money by working from home in mysterious little ways on the Internet that you can't even quite grasp.
    (0:53:29)
  • Unknown E
    Here's the way I categorize it, okay? Is that whenever you have a new technology, you get productivity gains, you get some job disruption, meaning that part of your job may go away, but then you get other parts that are new and hopefully more elevated, more interesting, and then there is some job loss. I just think that the third category will follow the historical trend, which is that the first two categories are always bigger and you end up with more net productivity and more net wealth creation. And we've seen no evidence to date that that's not going to be the case. Now it's true that AI is about to get more powerful. You're going to see a whole new wave of what are called agents this year. Agentic products are able to do more for you, but there's no evidence yet that those things are going to be completely unsupervised and replace people's jobs.
    (0:53:49)
  • Unknown E
So, you know, I think that we have to see how this technology evolves. And I think one of the mistakes of, let's call it, the European approach is assuming that you can predict the future with perfect accuracy, or with such good accuracy that you can create regulations today that are going to avoid all these risks in the future. And we just don't know enough yet to be able to do that. That's a false level of certainty.
    (0:54:34)
  • Unknown C
I agree with you. And the companies that are promulgating that view are the ones Naval described: those that have an economic vested interest in at least convincing the next incremental investor that this could be true, because they want to make the claim that all the money should go to them so they can hoover up all the economic gains. And that is the part of the cycle we're in. So if you actually stratify these reactions, there are the small startup companies in AI that believe there's a productivity leap to be had and that there's going to be prosperity; everybody on the sidelines watching; and then a few companies that have an extremely vested interest in being the gatekeeper, because they need to raise the next 30 or 40 billion dollars, trying to convince people that that's true. And if you view it through that lens, you're right, Sacks.
    (0:54:57)
  • Unknown C
We have not accomplished anything yet that proves that this is going to be cataclysmically bad. And if anything, right now, history would tell you it's probably going to be like the past, which is generally productive and accretive to society.
    (0:55:42)
  • Unknown E
Yeah. And just to bring it back to JD's speech, which is where we started, I think it was a quintessentially American speech in the sense that he said we should be optimistic about the opportunities here, which I think is basically right. And we want to lead, we want to take advantage of this. We don't want to hobble it. We don't even fully know what it's going to be yet. We are going to center workers; we want to be pro-worker. And I think that if there are downsides for workers, then we can mitigate those things in the future. But it's too early to say that we know what the program should be. It's more about a statement of values at this point.
    (0:55:55)
  • Unknown A
Do you think it's too early, Freeberg, given Optimus and all these robots being created, what we're seeing in self-driving, and the ramp-up with Waymo that you've talked about, to actually say we will not see millions of jobs disappear and millions of people get displaced from those jobs? What do you think, Freeberg? I'm curious about your thoughts, because that is the counterargument.
    (0:56:33)
  • Unknown D
My experience in the workplace is that AI tools are doing things that an analyst or knowledge worker used to spend many hours on and letting them do those things in minutes. That doesn't mean that they spend the rest of the day doing nothing. What's great for our business, and for other businesses like ours that can leverage AI tools, is that those individuals can now do more. And so our throughput, our productivity as an organization, has gone up, and we can now create more things faster. So whatever the product is that my company makes, we can now make more things more quickly. We can do more development.
    (0:56:55)
  • Unknown A
    You're seeing that on the ground, correct?
    (0:57:34)
  • Unknown D
At Ohalo, I'm seeing it on the ground, and I don't think that this narrative of how bad AI will be for jobs is the right framing, as much as it is about an acceleration of productivity. And this is why I go back to the point about GDP per capita and GDP growth. Countries, societies, areas, or industries that are interested in accelerating output and accelerating productivity, the ability to make stuff and sell stuff, are going to rapidly embrace these tools, because it allows them to do more with less. And I think that's what I really see on the ground. And then the second point I'll make is the one that I mentioned earlier. And I'll wrap up with a third point, which is I think we're drastically, dramatically underestimating the new industries that will emerge. There is going to be so much new that we are not really thinking deeply about right now, that we could do a whole other two-hour brainstorming session on what AI unlocks in terms of large-scale projects that are traditionally, or typically, or today held back because of the constraints on the
    (0:57:35)
  • Unknown D
technical feasibility of these projects. And that ranges from accelerating new semiconductor technology, to quantum computing, to energy systems, to transportation, to habitation, et cetera, et cetera. There are all sorts of transformations possible in every industry as these tools come online, and that will spur insane new industries. The most important point is the third one, which is we don't know the overlap of job loss and job creation, if there is one, and so the rate at which these new technologies impact and create new markets. But I think Naval is right. I think that what happens in capitalism and in free societies is that capital and people rush to fill the hole of new opportunities that emerge because of AI, and that those grow more quickly than the old bubbles deflate. So if there's a deflationary effect in terms of job need in other industries, I think that the loss will happen slower than the rush to take advantage of creating new things on the other side.
    (0:58:37)
  • Unknown D
So my bet is probably that new things will be created faster than old things will be lost.
    (0:59:31)
  • Unknown A
    I think.
    (0:59:37)
  • Unknown B
    And actually as a quick side note to that, the fastest way to help somebody get a job right now, if you know somebody in the market who's looking for a job, the best thing you can do is say, hey, go download the AI tools and just start talking to them, just start using them in any way. And then you can walk into any employer in almost any field and say, hey, I understand AI and they'll hire you. Exactly.
    (0:59:37)
  • Unknown A
Naval, you and I watched this happen. We had a front-row seat to it, back in the day when you were doing Venture Hacks and I was doing Open Angel Forum. We had to, like, fight to find five or ten companies a month. Then the cost of running these companies went down. It went down massively, from $5 million to start a company, to $2 million, then to $250K, then to $100K. I think what we're seeing is, like, three things concurrently. You're going to see all these jobs go away to automation: self-driving cars, cashiers, et cetera. But we're also going to see static team size at places like Google. They're just not hiring, because they're having the existing bloated employee base learn the tools. But I don't know if you're seeing this: the number of startups able to get a product to market with two or three people and get to a million in revenue is booming.
    (0:59:58)
  • Unknown A
    What are you seeing in the startup landscape?
    (1:00:45)
  • Unknown B
Definitely what you're saying, in that there's leverage. But at the same time, I think the more interesting part is that new startups are enabled that could not exist otherwise. My last startup, Airchat, could not have existed without AI, because we needed the transcription and translation. Even the current thing I'm working on, it's not an AI company, but it cannot exist without AI. It is relying on AI. Even at AngelList we're significantly adopting AI. Like, everywhere you turn it's more opportunity, more opportunity, more opportunity. And people like to go on Twitter, or the Artist Formerly Known as Twitter, and they like to exaggerate: oh my God, we've hit AGI. Oh my God, I just replaced all my mid-level engineers. Oh my God, I've stopped hiring. To me, that's moronic. The two valid ones are the one-man entrepreneur shows, where there's, like, one guy or one gal and they're scaling up like crazy thanks to AI.
    (1:00:48)
  • Unknown B
Or there are people who are embracing AI and being like, I need to hire, and I need to hire anyone who can even spell AI, anyone who's even used AI. Just come on in, come on in. Again, I would say the easiest way to see that AI is creating opportunities rather than taking jobs is: go brush up on your AI, learn a little bit, watch a few videos, use the AI, tinker with it, and then go reapply for that job that rejected you, and watch how they pull you in.
    (1:01:39)
  • Unknown E
    In 2023, an economist named Richard Baldwin said AI won't take your job. It's someone using AI that will because they know how to use it better than you. And that's kind of become a meme and you see it floating around X. But I think there's a lot of truth in that. As long as you remain adaptive and you keep learning and you learn how to take advantage of these tools, you should do better. And if you wall yourself off from the technology and don't take advantage of it, that's when you put yourself at risk.
    (1:02:04)
  • Unknown B
    Another way to think about it is these are natural language computers. So everyone who was intimidated by computers before should no longer be intimidated. You don't need to program anymore in some esoteric language or learn some obscure mathematics to be able to use these. You can just talk to them and they talk back to you. That's magic.
    (1:02:29)
  • Unknown A
The new programming language is English. Chamath, do you want to wrap us up here on this opportunity-slash-displacement chaos?
    (1:02:46)
  • Unknown C
I was going to say this before, but I'm pretty unconvinced anymore that you should bother even learning many of the hard sciences and maths that we used to learn as underpinnings. I used to believe that the right thing to do was for everybody to go into engineering. I'm not necessarily as convinced as I used to be, because I used to say, well, that's great first-principles thinking, et cetera, et cetera, and you're going to get trained in a toolkit that will scale. And I'm not sure that that's true. I think you can use these agents and you can use deep research, and all of a sudden they replace a lot of that skill. So what's left over? It's creativity, it's judgment, it's history, it's psychology, it's all of these other softer skills, leadership, communication, that allow you to manipulate these models in constructive ways.
    (1:02:54)
  • Unknown C
Because when you think of the prompt engineering that gets you to great answers, it's actually just thinking in totally different, orthogonal ways, and non-linearly. So that's my last thought, which is it does open up the aperture, meaning for every smart mathematical genius, there are many, many, many other people who have high EQ. And all of a sudden this tool takes the advantage away from the person with just the high IQ and says, if you have these other skills, now you can compete with them equally. And I think that that's liberating for a lot of people.
    (1:03:43)
  • Unknown A
I'm in the camp of more opportunity. I got to watch the movie industry a whole bunch when digital cameras came out and more people started making documentaries, more people started making independent film shorts. And then, of course, the YouTube revolution: people started making videos on YouTube, or podcasts like this. And if you look at what happened with the special effects industry as well, we need far fewer people to make a Star Wars movie, to make a Star Wars series, to make a Marvel series. As we've seen now, we can get The Mandalorian, Ahsoka, and all these other series with smaller numbers of people, and they look better than, obviously, the original Star Wars series or even the prequels. So there's going to be so many more opportunities. We're now making more TV shows, more series, everything we wanted to see of every little character.
    (1:04:14)
  • Unknown A
That's the same thing that's happening in startups. I can't believe that there is an app now, Naval, called Slopes just for skiing. And there are 20 really good apps just for meditation and 10 really good ones just for fasting. Like, we're going down this long tail of opportunity, and there will be plenty of $1 million to $10 million businesses out there if people learn to use these tools.
    (1:05:02)
  • Unknown E
    I love how that's the thing that.
    (1:05:27)
  • Unknown B
Tips you over? Which one?
    (1:05:29)
  • Unknown E
You can get an extra Marvel movie or an extra Star Wars show. So that tips you over.
    (1:05:33)
  • Unknown A
    I think for a lot of people it feels great that AI may take.
    (1:05:38)
  • Unknown E
Over the world, but I'm going to get the next Star Wars movie, so I'm cool with that.
    (1:05:41)
  • Unknown A
    Are you not entertained?
    (1:05:46)
  • Unknown E
One final point on this is, look, given the choice between the two categories of techno-optimists and techno-pessimists, I'm definitely in the optimist camp, and I think we should be. But I think there's actually a third category that I would submit, which is techno-realist, which is: technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will; China's going to do it, or somebody else will do it. And it's better for us to be in control of the technology, to be the leader, rather than passively waiting for it to happen to us. And I just think that's always true. It's better for businesses to be proactive and take the lead, disrupt themselves, instead of waiting for someone else to do it. And I think it's better for countries.
    (1:05:47)
  • Unknown E
And I think you did see this theme a little bit. I mean, these are my own views; I don't want to ascribe them to the Vice President. But you did see, I think, a hint of the techno-realism idea in his speech and in his tweet, which is: look, AI is going to happen. We might as well be the leader. If we don't, we could lose in a key category that has implications for national security, for our economy, for many things. So that's just not a world we want to live in. So I think a lot of this debate is sort of academic, because whether you're an optimist or a pessimist is just sort of glass half empty, half full. The question is just: is it going to happen or not? And I think the answer is yes. So then we want to control it. Let's just boil it down.
    (1:06:32)
  • Unknown E
    There's not a tremendous amount of choice in this.
    (1:07:17)
  • Unknown B
I think I would agree heavily with one point and I would just tweak another. The point I would agree with is that it's going to happen anyway. And that's what DeepSeek proved. You can turn off the flow of chips to them and you can turn off the flow of talent. What do they do? They just get more efficient. And they exported it back to us. They sent us back the best open-source model when our guys were staying closed-source for safety reasons.
    (1:07:18)
  • Unknown E
    Exactly.
    (1:07:40)
  • Unknown B
    It's going to come right back to us.
    (1:07:41)
  • Unknown A
    Safety of their equity.
    (1:07:42)
  • Unknown E
DeepSeek exploded the fallacy that the US has a monopoly in this category and that somehow, therefore, we can slow down the train and that we have total control over the train. And I think what DeepSeek showed is: no, if we slow down the train, they're just going to win.
    (1:07:44)
  • Unknown B
Yeah. The part where I'd tweak it a little bit is the idea that "we" are going to win, where by "we" you mean America. The problem is that the best way to win is to be as open, as distributed, as innovative as possible. If this all ends up in the control of one company, they're actually going to be slower to innovate than if there's a dynamic system. And that dynamic system, by its nature, will be open. It will leak to China, it will leak to India. But these things have powerful network effects. We know this about technology. Almost all technology has network effects underneath. And so even if you are open, you're still going to win and you're.
    (1:07:59)
  • Unknown E
Still going to control it. Look at the Internet. That was all true for the Internet.
    (1:08:33)
  • Unknown A
    Right.
    (1:08:36)
  • Unknown E
The Internet's an open technology, it's based on open standards. But who are the dominant companies? All the dominant companies are U.S. companies, because they were in the lead.
    (1:08:36)
  • Unknown B
    Exactly right. Exactly right.
    (1:08:44)
  • Unknown D
    We embraced the open Internet. We embraced the open Internet. Right. That was different.
    (1:08:45)
  • Unknown E
So there will be benefits for all of humanity. And I think the Vice President's speech was really clear that, look, we want you guys to be on board, we want to be good partners. However, there are definitely going to be winners, economically, militarily, and in order to be one of those winners, you have to be a leader.
    (1:08:48)
  • Unknown A
Who's going to get to AGI first, Naval? Who's going to win? Is it going to be open source or closed source? Who's going to win the day? If we're sitting here five, ten years from now and we're looking at the top three language models, which is going to win?
    (1:09:04)
  • Unknown B
I'm going to get in trouble for this, but I don't think we know how to build AGI. But that's a much longer conversation.
    (1:09:17)
  • Unknown A
Okay, put AGI aside. Who's going to have the best model in five years?
    (1:09:20)
  • Unknown C
    Hold on. I 100% agree with you.
    (1:09:23)
  • Unknown B
I just think it's a different thing. But what we're building are these incredible natural language computers. And actually, David, in a very pithy way, summarized the two big use cases: it's search and it's homework. It's paperwork, really, it's paperwork. And a lot of the research jobs that we're talking about disappearing are actually paperwork jobs. They're paperwork shuffling. These are made-up jobs, like in the federal government, as we're finding out through DOGE. A third of it is people digging holes with spoons and another third are filling them back up.
    (1:09:25)
  • Unknown E
They're filling up paperwork and then burying it in a mineshaft.
    (1:09:51)
  • Unknown B
They're burying it in a mineshaft at Iron Mountain? Yeah. So I think a lot of these are made-up jobs, and then they got.
    (1:09:54)
  • Unknown E
    To go down the mine shaft to get the paperwork when someone retires and bring.
    (1:09:58)
  • Unknown A
    You know what? I'm going to get them some thumb drives. We can increase the throughput of the elevator with some thumb drives. It would be incredible.
    (1:10:01)
  • Unknown B
    What we found out is that the DMV has been running the government for the last 70 years. It's been a compounding.
    (1:10:07)
  • Unknown C
    Compounding.
    (1:10:12)
  • Unknown B
    That's, that's really what's going on. DMV's in charge.
    (1:10:13)
  • Unknown E
    I mean, if the world ends in nuclear war, God forbid, the only thing that's left will be the cockroaches and then a bunch of like, government account documents, TPS reports. TPS reports down in a mineshaft.
    (1:10:16)
  • Unknown A
    Basically, yeah. Let's take a moment, everybody, to thank our Czar. We miss him. We wish he could be here for the whole show.
    (1:10:28)
  • Unknown C
    And thank you, Czar.
    (1:10:38)
  • Unknown A
Thank you to the Czar. We miss you a little, buddy. I wish we could talk about Ukraine, but we're not allowed. Get back to work. We'll talk about it another time over coffee. I'll see you in the commissary. Thanks for the invite. Bye. Man, I'm so excited. Naval, Sacks invited me to go to the military mess. I'm going to be in the commissary.
    (1:10:40)
  • Unknown D
No, he didn't. J-Cal, you invited yourself. Be honest.
    (1:11:00)
  • Unknown A
    I did. Yes, I did. I put it on his calendar to.
    (1:11:02)
  • Unknown B
Keep the conversation moving. Let me segue a point that came up that was really important into tariffs. And the point is, even though the Internet was open, the US won a lot of the Internet. A lot of US companies won the Internet. And they won that because we got there the firstest with the mostest, as they say in the military. And that matters, because a lot of technology businesses have scale economies and network effects underneath, even basic brand-based network effects. If you go back to the late 90s, early 2000s, very few people would have predicted that we would have ended up with Amazon basically owning all of e-commerce. You would have thought it would have been perfect competition and very spread out. And that applies to how we ended up with Uber as basically one taxi service. Or we end up with Airbnb.
    (1:11:05)
  • Unknown B
Meta, Airbnb, it's just network effects. Network effects. Network effects rule the world around me. But when it comes to tariffs and when it comes to trade, we act like network effects don't exist. The classic Ricardian comparative advantage dogma says that you should produce what you're best at, I produce what I'm best at, and we trade. And then even if you want to charge me more for it, if you want to impose tariffs for me to ship to you, I should still keep my tariffs down, because I'm better off; you're just selling me stuff cheaply. Great. Or if you want to subsidize your guys, great, you're selling me stuff cheaply. The problem is that is not how most modern businesses work. Most modern businesses have network effects. As a simple thought experiment, suppose that we have two countries. I'm China, you're the U.S. I start out by subsidizing all of my companies and industries that have network effects.
    (1:11:50)
  • Unknown B
So I'll subsidize TikTok, I'll ban your social media, but I'll push mine. I will subsidize my semiconductors, which do tend to have winner-take-all dynamics in certain categories. Or I'll subsidize my drones, and then BYD, exactly, BYD, self-driving, whatever. And then when I win, I own the whole market and I can raise prices. And if you try to start up a competitor, then it's too late: I've got network effects, or if I've got scale economies, I can lower my price to zero, crash you out of business, no one in their right mind will invest, and I'll raise prices right back up. So you have to understand that certain industries have hysteresis, or they have network effects, or they have economies of scale. And these are all the interesting ones. These are all the high-margin businesses. So in those, if somebody is subsidizing, or they're raising tariffs against you to protect their industries and let them develop, you do have to do something.
    (1:12:36)
  • Unknown B
    You can't just completely back down.
    (1:13:25)
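A minimal sketch, not from the conversation, of the lock-in dynamic Naval is describing: two firms compete in a market where a product's appeal grows with its share (a network effect), one firm is subsidized early so it can underprice the other, and once its share locks in, the subsidy ends and it can raise prices without losing the market. The numbers and update rule below are illustrative assumptions, not a real model of any industry.

```python
# Toy illustration (hypothetical numbers) of subsidy-driven winner-take-all
# lock-in in a market with network effects.

def simulate(periods=12, subsidy_until=5, network_strength=2.0):
    share_a = 0.5  # the subsidized firm starts with half the market
    for t in range(periods):
        # While subsidized, firm A prices below cost; afterwards it prices high.
        price_a = 0.5 if t < subsidy_until else 1.5
        price_b = 1.0
        # Perceived value = network effect (proportional to share) minus price.
        value_a = network_strength * share_a - price_a
        value_b = network_strength * (1.0 - share_a) - price_b
        # Customers drift toward whichever offer currently looks better.
        share_a += 0.1 if value_a > value_b else -0.1
        share_a = min(max(share_a, 0.0), 1.0)
        print(f"t={t:2d}  price_a={price_a:.1f}  share_a={share_a:.2f}")

simulate()
```

In this toy run, firm A takes the whole market during the subsidized periods; once the subsidy stops and its price triples, the network effect keeps customers from switching back, which is the hysteresis Naval points to.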
  • Unknown A
What are your thoughts, Chamath, about tariffs and network effects? It does seem like we do want to have redundancy in the supply chain, so there are some exceptions here. Any thoughts on how this might play out? Because, yeah, Trump brings up tariffs every 48 hours and then it doesn't seem like any of them land. So I don't know. I'm still on my 72-hour Trump rule, which is: whatever he says, wait 72 hours and then see if it actually comes to pass. Where do you stand on all these tariffs and tariff talk?
    (1:13:28)
  • Unknown C
Well, I think the tariffs will be a plug. Are they coming? Absolutely. The quantum of them, I don't know. And I think that the way you can figure out how extreme it will be is based on what the legislative plan is for the budget. So there are two paths right now. Path one, which I think is a little bit more likely, is that they're going to pass a slimmed-down plan in the Senate just on border security and military spending, and then they'll kick the can down the road for probably another three or four months on the budget. Path two is this one big, beautiful bill that's working its way through the House, and there they're proposing trillions of dollars of cuts. In that mode, you're going to need to raise revenue somehow, especially if you're giving away tax breaks. And the only way to do that is probably through tariffs, or one way to do it is through tariffs.
    (1:13:58)
  • Unknown C
My honest opinion, Jason, is that I think we're in a very complicated moment. I think the Senate plan is actually, on the margins, more likely and better. And the reason is because I think that Trump is better off getting the next 60 to 90 days of data. I mean, we're in a real pickle here. We have persistent inflation, we have a broken Fed; they are totally asleep at the switch. And the thing that Yellen and Biden did, which in hindsight was extremely dangerous, is they issued so much short-term paper that in totality we have $10 trillion we need to finance in the next six to nine months. So it could be the case that we have rates that are, like, five, five and a quarter, 5.5%. And I think that that's extremely bad at the same time as inflation, at the same time as delinquencies are ticking up.
    (1:14:52)
  • Unknown C
So I think tariffs are probably going to happen, but I think that Trump will have the most flexibility if he has time to see what the actual economic conditions will be, which will be more clear in three, four, five months. And so I almost think this big, beautiful bill is actually counterproductive, because I'm not sure we're going to have all the data we need to get it right.
    (1:15:52)
  • Unknown A
Freeberg, any thoughts on these tariffs? You've been involved in the global marketplace, especially when it comes to produce and wheat and corn and everything. What do you think the dynamic here is going to be? Or is it saber-rattling and a tool for Trump?
    (1:16:18)
  • Unknown D
The biggest buyer of US ag exports is China. Ag exports are a major revenue source, major income source, and a major part of the economy for a large number of states. And so there will very likely be, as there was in the first Trump presidency, very large transfer payments made to farmers, because China is very likely going to put tariffs on imports or stop making import purchases altogether, which is what happened during the first presidency. When they did that, the federal government, I believe, made transfer payments of north of $20 billion to farmers. This is a not negligible sum and it's a not negligible economic effect, because there's then a rippling effect throughout the ag economy. So I think that's one key thing that I've heard folks talk about: the activity that's going to be needed to support the farm economy as the US's biggest ag customer disappears.
    (1:16:33)
  • Unknown D
In the early 20th century, we didn't have an income tax, and federal revenue was almost entirely dependent on tariffs. When tariffs were cut, there was an expectation that there would be a decline in federal government revenue. But what actually happened is volume went up. So lower tariffs actually increased trade and increased the size of the economy. This is where a lot of economists take their basis: hey guys, if we do these tariffs, it's actually going to shrink the economy, it's going to cause a reduction in trade. The counterbalancing effect is one that has not been tested in economics, which is what's going to happen if, simultaneously, we reduce the income tax and reduce the corporate income tax, and basically increase capital flows through reduced taxation, while doing the tariff implementation at the same time. So it's a grand economic experiment, and I think we'll learn a lot about what's going to happen here as this all moves forward.
    (1:17:31)
  • Unknown D
I do think ultimately many of these countries are going to capitulate to some degree, and we're going to end up with some negotiated settlement that hopefully won't be too impactful in the short term on the economies and the people and the jobs that are dependent on trade.
    (1:18:23)
  • Unknown C
The economy feels like it's in a very precarious place.
    (1:18:35)
  • Unknown B
    It does to asset holders.
    (1:18:39)
  • Unknown C
    Yeah, to assets.
    (1:18:41)
  • Unknown B
And obviously they left it in a bad place in the last administration, and we shut down the entire country for a year over Covid, and the bill for that has come due, and that's reflected in inflation. I think there are a couple of other points on tariffs. First is it's not just about money, it's also about making sure we have a functional middle class with good jobs. Because if you have a non-tariff world, maybe all the gains go to the upper class and you get an underclass, and then you can't have a functioning democracy when the average person is on one of those two extremes. So I think that's one issue. Another is strategic industries. If you look at it today, probably the largest defense contractor in the world is DJI. They've got all the drones. Even in Ukraine, both sides are getting all their drone parts from DJI. Now, they're getting it through different supply chains and so on.
    (1:18:42)
  • Unknown B
But Ukrainian drones and Russian drones, the vast majority of them are coming through China, through DJI, and we don't have that industry. If we have a kinetic conflict right now and we don't have a good drone supply chain internally in the US, we're probably going to lose, because those things are autonomous bullets. That's the future of all warfare. We're buying F-35s and the Chinese are building swarms of nano-drones at scale. So we do have to re-onshore those critical supply chains. And what is a drone supply chain? There's not just a thing called a drone. It's motors and semiconductors, a lot of pieces, optics and lasers, just everything across the board. So I think there are other good arguments for at least reshoring some of these industries. We need them. And the United States is very lucky in that it's very autarkic.
    (1:19:25)
  • Unknown B
We have all the resources, we have all the supplies. We can be upstream of everybody with all the energy. To the extent we're importing any energy, that is a choice we made; it's not because we fundamentally lack the energy. Because between all the oil resources and the natural gas and fracking, combined with all the work we've done in nuclear fission and small reactors, we should absolutely be energy independent.
    (1:20:10)
  • Unknown A
    We should be running the table on it. We should. We should.
    (1:20:34)
  • Unknown B
    Absolutely.
    (1:20:36)
  • Unknown A
A massive surplus. And hey, you know, if you're worried about a couple of million DoorDash and Uber drivers losing their jobs to automation, like, hey, there are going to be factories to build the parts for these drones that we're going to need. So there's a lot of opportunity, I guess, for people.
    (1:20:37)
  • Unknown B
    And there is. And there is a difference between different kinds of jobs. Those kinds of jobs are better jobs, building difficult things at scale physically that we need for both national security and for innovation. Those are better jobs than, you know, paperwork, writing essays for other people to read.
    (1:20:54)
  • Unknown A
    Yeah.
    (1:21:11)
  • Unknown B
    Or even driving cars.
    (1:21:11)
  • Unknown A
All right, listen, I want to get to two more stories here. We have a really interesting copyright story that I wanted to touch on. Thomson Reuters just won the first major US AI copyright case, and fair use played a major role in this decision. This has huge implications for AI companies here in the United States, obviously: OpenAI and the New York Times, Getty Images versus Stability AI. We've talked about these, but it's been a little while, because the legal system takes a little bit of time and these are very complicated cases. As we've talked about, Thomson Reuters owns Westlaw. If you don't know it, it's kind of like LexisNexis, one of the legal databases out there that lawyers use to find cases, et cetera, and they have a paid product with summaries and analysis of legal decisions. Back in 2020, two years before ChatGPT, Reuters sued a legal research competitor called Ross for copyright infringement.
    (1:21:12)
  • Unknown A
Ross had created an AI-powered legal search engine. Sounds great. But Ross had asked Westlaw if it could pay to license its content for training, and Westlaw said no. This all went back and forth, and then Ross signed a similar deal with a company called LegalEase. The problem is LegalEase's database was just copied and pasted from a bunch of Westlaw answers. So Reuters and Westlaw sued Ross in 2020, accusing the company of being vicariously liable for LegalEase's direct infringement. Super important point. Anyway, the judge originally favored Ross on fair use. This week, the judge reversed this ruling and found Ross liable, noting that after further review, fair use does not apply in this case. This is the first major win, and we debated this. So here's a clip. You know, you heard it here first on the All-In pod. What I would say is, you know, when you look at that fair use doctrine, I've got a lot of experience with it.
    (1:22:07)
  • Unknown A
You know, the fourth factor test, I'm sure you're well aware of this, is the effect of the use on the potential market for and the value of the work. If you look at the lawsuits that are starting to emerge, it is Getty's right to then make derivative products based on their images. I think we would all agree, with Stable Diffusion, when they use these open web crawlers, that is no excuse to avoid getting a license from the original owner of that content. Just because you can technically do it doesn't mean you're allowed to do it. In fact, the open web projects that provide these say explicitly: we do not give you the right to use this. You have to then go read the copyright terms on each of those websites. And on top of that, if somebody were to steal the copyrights of other people and put it on the open web, which is happening all day long, if you're building a derivative work like this, you still need to go get a license.
    (1:23:01)
  • Unknown A
So it's no excuse that I took some site in Russia that did a bunch of copyright violations and then I indexed them for my training model. So I think this is going to result.
    (1:23:47)
  • Unknown C
Freeberg, can you shoot me in the face and let me know when this segment is over?
    (1:23:56)
  • Unknown B
    Okay.
    (1:23:59)
  • Unknown A
    Oh great.
    (1:24:01)
  • Unknown C
    I feel the same way.
    (1:24:02)
  • Unknown A
    Same way now. Exactly.
    (1:24:04)
  • Unknown D
    I know, me too. Okay, good segment. Let's move on.
    (1:24:06)
  • Unknown A
Well, since these guys don't care about copyright holders, Naval, what do you think? I'm so glad you're here, Naval, to actually talk about the topics these two other guys won't.
    (1:24:10)
  • Unknown B
I'm going to go out on an even thinner limb and say I largely agree with you. I think it's a bit rich to crawl the open web, hoover up all the data, and offer direct substitution for a lot of use cases, because now you start and end with the AI models. You don't even link out like Google did, and then you just close off the models for safety reasons. I think if you trained on the open web, your model should be open source.
    (1:24:20)
  • Unknown A
    Yeah, absolutely. That would be a fine thing. I have a prediction here. I think this is all going to wind up wind up like the Napster Spotify case. For people who don't know Spotify pays. I think 65 cents on the dollar to the original underwriters of that content, the music industry. And they figured out a way to make a business and Napster is roadkill. I think that there is a non zero chance like it might be 5 or 10% that OpenAI is going to lose the New York Times lawsuit and they're going to lose it hard and they're going to be injunctions. And I think it's the settlement might be that these language models, especially the closed ones, are going to have to pay some percentage in a negotiated settlement of their revenue. Have you 2/3 to the content holders. And this could make the content industry have a massive, massive uplift and a massive resurgence.
    (1:24:40)
  • Unknown C
I think that the problem, there's an example on the other side of this, which is that there's a third-party company that provides technical support for Oracle software, and Oracle has tried umpteen times to sue them into oblivion using copyright infringement as part of the justification. And it's been a pall over the stock for a long time. The company's name is Rimini Street. Don't ask me why it's on my radar, but I've been looking at it, and they lost this huge lawsuit, Oracle won, and then it went to the appellate court and it was all vacated. Why am I bringing this up? I think that the legal community has absolutely no idea how these models work, because you can find one case that goes one way and one case that goes the other. And here's what I would say should become standard reading for anybody bringing any of these lawsuits.
    (1:25:34)
  • Unknown C
    There's an incredible video that Karpathy just dropped, that Andrej just dropped, where he does like this deep dive into LLMs and he explains ChatGPT from the ground up. It's on YouTube, it's three hours. It's excellent. And it's very difficult to watch that and not get to the same conclusion that you guys did. I'll just leave it at that. I tend to agree with this.
    (1:26:25)
  • Unknown B
There's also a good old video by Ilya Sutskever, who was, I believe, the founding chief scientist or CTO of OpenAI, where he talks about how these large language models are basically extreme compressors. He models them entirely in terms of their ability to compress, and they're lossy compression.
    (1:26:49)
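A minimal sketch, not from the episode, of the standard prediction-compression link behind the "LLM as compressor" framing: a model that assigns probability p to the token that actually occurs can, with an entropy coder, store that token in roughly -log2(p) bits, so a model that predicts the text better compresses it smaller. The probabilities below are made-up illustrative numbers, not output from any real model.

```python
# Toy illustration of why better next-token prediction means better compression.
import math

def ideal_compressed_bits(token_probs):
    """Ideal code length, in bits, for a sequence, given the probability the
    model assigned to each token that actually occurred (Shannon code length)."""
    return sum(-math.log2(p) for p in token_probs)

# Hypothetical per-token probabilities for the same 5-token text.
weak_model   = [0.05, 0.10, 0.02, 0.20, 0.08]   # poor predictions
strong_model = [0.60, 0.70, 0.40, 0.90, 0.55]   # confident, mostly correct predictions

print(f"weak model:   {ideal_compressed_bits(weak_model):5.1f} bits")
print(f"strong model: {ideal_compressed_bits(strong_model):5.1f} bits")
```

The "lossy" part of the panel's point is that a model's weights are a compressed summary of the training data from which the original text generally cannot be reconstructed exactly, which is what the regurgitation-versus-learning question raised a few turns later hinges on.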
  • Unknown C
    Exactly.
    (1:27:06)
  • Unknown B
    Compression.
    (1:27:08)
  • Unknown C
    Exactly, exactly.
    (1:27:08)
  • Unknown B
    And Google got sued for fair use back in the day. But the way they managed to get past the argument was they were always linking back to you.
    (1:27:10)
  • Unknown A
    They showed you, they provided some traffic.
    (1:27:16)
  • Unknown B
    You sent you the traffic.
    (1:27:19)
  • Unknown C
This is lossy compression. It is, absolutely. I'm now on your page. I hate to say this, Jason, I agree with you. You were right.
    (1:27:20)
  • Unknown D
    That's all I wanted to hear all these years.
    (1:27:35)
  • Unknown B
    That's why I was shaking my head when I saw those videos, because I was like, oh, man. Jason was right.
    (1:27:39)
  • Unknown C
    Jason was right. Oh, my God.
    (1:27:44)
  • Unknown A
No, I just. I've been through this so many times. You know, Rupert Murdoch said we should hold the line with Google and not allow them to index our content without a license. And Google navigated it successfully, and he wasn't able to get them to stop. I think what's happened now is that the New York Times remembers that. They all remember losing their content, those snippets, and the OneBox to Google, and they couldn't get that genie back in the bottle. I think the New York Times realizes this is their payday. I think the New York Times will make more money from licenses to LLMs than they will make from advertising or subscriptions. Eventually, this will almost renew the model.
    (1:27:46)
  • Unknown B
    I think New York Times content is worthless to an LLM, but that's a different story. I think the actual valuable content, political.
    (1:28:33)
  • Unknown A
Reasons, whatever. But I can tell you as a user, I loved the Wirecutter. I think you knew Brian and everybody over at the Wirecutter. That was, like, such an invention.
    (1:28:39)
  • Unknown B
Fair enough. Yeah. Wirecutter.
    (1:28:47)
  • Unknown A
What a great product. I used to pay for the New York Times. I no longer pay for the New York Times. My main reason was I would go to the Wirecutter.
    (1:28:48)
  • Unknown B
    Yeah.
    (1:28:54)
  • Unknown A
And I would just buy whatever they told me to buy. Now I go to ChatGPT, which I pay for, and ChatGPT tells me what to buy based on the Wirecutter. So that's it. I'm already paying for ChatGPT, so I stopped paying for the Times.
    (1:28:54)
  • Unknown D
I philosophically disagree with all of your nonsense on this topic. All three of you are wrong, and I'll tell you why. Number one, if information is out on the open Internet, I believe it's accessible and it's viewable. And I view an LLM or a web crawler as basically being a human that's reading and can store information in its brain, if it's out there in the open. If it's behind a paywall, 100%. If it's behind some protected password.
    (1:29:08)
  • Unknown B
    Wait, wait, wait, wait, David. In that case, can a Google crawler just crawl an entire site and serve it on Google? Why can't they do that?
    (1:29:34)
  • Unknown D
So here's the fair use. The fair use is you cannot copy, you cannot repeat the content. You cannot take the content and repeat it.
    (1:29:43)
  • Unknown B
That is how the law is currently written. But now what I have is a tool that can remix it with 50 other pieces of similar content, and I can change the words slightly and maybe even translate it into a different language. So where does it stop?
    (1:29:49)
  • Unknown D
    Do you know the musical artist Girl Talk? We should have done a Girl Talk track here today.
    (1:30:01)
  • Unknown A
    But he's got weird musical tastes. Good. Here we go.
    (1:30:06)
  • Unknown D
He basically takes small samples of popular tracks, and he got sued for the same problem. There was another guy named White Panda who, I believe, had the same problem. Ed Sheeran got sued for this. Yeah.
    (1:30:10)
  • Unknown B
But there are entire sites like StackOverflow and WikiHow that have basically disappeared now, because you can just swallow them all up and spit it all back out in ChatGPT with slight changes. So I think that the fair use question is.
    (1:30:21)
  • Unknown D
    How much of a slight change is exactly the right question, which is how.
    (1:30:33)
  • Unknown B
much are you changing. Yeah, so that's the question. And it actually boils down to the AGI question: are these things actually intelligent, and are they learning, or are they compressing and regurgitating? That's the question.
    (1:30:36)
  • Unknown D
I wonder this about humans, and that's why I bring up White Panda and Girl Talk in audio. But also in visual art there were always artists doing this, and even in classical music. I don't know if you guys are classical music people, but there's a demonstration of how one composer learned from the next, and you can actually track the music as kind of standing on the shoulders of the prior. And the same is true in almost all art forms and almost all human knowledge and media communication.
    (1:30:45)
  • Unknown B
    It's very hard to figure that out.
    (1:31:11)
  • Unknown D
    Well, that's exactly right.
    (1:31:13)
  • Unknown B
It's very hard to figure that out, which is why I come back to: there are only two stable solutions to this, and it's going to happen anyway. If we don't crawl it, the Chinese will crawl it. Right? DeepSeek proved that. So there are only two stable solutions. Either you pay the copyright holders, which I actually think doesn't work, and the reason is because someone in China will crawl it and just dump the weights, right? They can just crawl and compress and dump the compressed weights. Or, if you crawl, make it open, at least contribute something back to open source. Right? You crawled open data, so contribute back to open source. And the people who don't want to be crawled are going to have to go to huge lengths to protect their data. Now everybody knows to protect the data.
    (1:31:14)
  • Unknown A
Yeah, well, not a clean solution. Well, the licensing is happening here. I have a book out from Harper Business on the shelf behind me, and I'm getting 2,500 smackaroos for the next three years for Microsoft indexing it. So they're going out and they're licensing this stuff, and they're paying $2,500.
    (1:31:52)
  • Unknown D
So your book.
    (1:32:12)
  • Unknown A
Literally, I'm getting $2,500 for three years, through Harper, to go into an LLM, to go into Microsoft specifically. And you know what? I'm going to sign it, I decided, because I just want to set the precedent. Maybe next time it's 10,000, maybe next time it's 250. I don't care. I just want to see people have their content respected. And I'm just hoping that Sam Altman loses this lawsuit and they get an injunction against it. Hey, well, just because he's just such a weasel in terms of, like, turning OpenAI into a closed thing. I mean, I like Sam personally, but I think what he did was, like, the super weasel move of all time, for his own personal benefit. And this whole lying, like, oh, I have no equity, I get health care.
    (1:32:13)
  • Unknown C
    He does.
    (1:32:53)
  • Unknown A
    And now I get 10%, bro.
    (1:32:54)
  • Unknown C
    He does it, but he does it for the love. But what was the statement? He does it for the. I do it for joy.
    (1:32:55)
  • Unknown A
    The benefit. The benefits. I think he got health care.
    (1:33:00)
  • Unknown B
    I think in OpenAI's defense, they do need to raise a lot of money and they got to incent their employees, but that doesn't mean they need to take over the whole thing. The nonprofit portion can still stay the nonprofit portion, get the lion's share of the benefits and be the board. And then he can have an incentive package and employees can have an incentive package.
    (1:33:03)
  • Unknown A
    Why don't they get a percentage of the revenue?
    (1:33:21)
  • Unknown B
    Just give them like, I don't know.
    (1:33:23)
  • Unknown A
    10% of the revenue goes to be.
    (1:33:24)
  • Unknown B
Bought out right now for 40 billion, and then the whole thing disappears into a closed system. That part makes no sense to me.
    (1:33:26)
  • Unknown A
    That's called a shell game and a scam.
    (1:33:32)
  • Unknown B
Yeah, I think Sam and his team would do better to leave the nonprofit part alone, leave an actual independent nonprofit board in charge, and then have a strong incentive plan and a strong fundraising plan for the investors and the employees. So I think this is workable. It's just that trying to grab it all seems way off, especially when it was built on open algorithms from Google, open data from the West, and on nonprofit funding from Elon and others.
    (1:33:34)
  • Unknown A
    I mean, what a great proposal like we just workshopped here. What if they just. What do they make? 6 billion a year? Just take 10% of it. 600 million every year. And that goes into a bonus.
    (1:33:58)
  • Unknown C
    They're losing money, Jason, so they have to.
    (1:34:09)
  • Unknown A
    Okay, eventually.
    (1:34:11)
  • Unknown B
No, but even equity, they could give equity to the people building it, but they could still leave it in the control of the nonprofit. I just don't understand this conversion. I mean, there was a board coup, right? The board tried to fire Sam and Sam took over the board. Now it's his hand-picked board. So it also looks like self-dealing, right? And yeah, they'll get an independent valuation, but we all know that game: you hire a valuation expert who is going to say what you want them to say, and they'll check the box. If you're going to capture the light cone of all future value, or build superintelligence, you know it's worth a lot more. That's why Elon just bid 100 billion.
    (1:34:12)
  • Unknown C
Exactly. You're saying the things that the regulators and the legal community have no insight into, because they'll see a fairness opinion and they think, oh, it says "fairness" and "opinion," two words side by side, it must be fair. And they don't know how all of this stuff is gamed.
    (1:34:42)
  • Unknown A
So yeah, yeah, man, I've got stories about 409As that would.
    (1:34:56)
  • Unknown B
    Exactly.
    (1:35:02)
  • Unknown C
Oh yeah, everything is gamed. 409As are gamed. These fairness opinions are gamed. But the reality is, I don't think the legal and judicial community has any idea.
    (1:35:02)
  • Unknown A
I mean, imagine if a founder you invested in, just as a totally imaginary situation, Naval, had a great term sheet at some incredible dollar amount, didn't take it, ran the valuation down to under a million, gave themselves a bunch of shares, and then took it three months later. I don't know, what would that be called? Securities fraud.
    (1:35:12)
  • Unknown C
    Can we wrap up?
    (1:35:34)
  • Unknown A
    Yeah, let's wrap on your story.
    (1:35:34)
  • Unknown C
I had an interesting... Nick will show you the photo. I had an interesting dinner on Monday with Bryan Johnson, the Don't Die guy. He came over to my house.
    (1:35:35)
  • Unknown A
    How's his erection doing overnight?
    (1:35:43)
  • Unknown C
    What we talked about is he's got three hours a night of nighttime erections. Wow, look at this by the way. First of all, I'll tell you, I think that he's Kuhn.
    (1:35:45)
  • Unknown B
    Wait, which one of those is giving him the erection?
    (1:35:57)
  • Unknown C
    No, no, no, he measures his nighttime erections.
    (1:35:59)
  • Unknown A
    I think Khun has given him the erection.
    (1:36:02)
  • Unknown C
He said he was 43 when he started this thing. He was basically clinically obese, and in the next four years he's become a specimen. He now has three hours a night of nighttime erections. But that's not the interesting thing from the end of this dinner. By the way, his skin is incredible. I wasn't sure, because you only see the pictures online, but his skin in real life is like a porcelain doll's. Both my wife and I were like, we've never seen skin like this. And it's incredibly soft.
    (1:36:04)
  • Unknown A
    Wait, wait, wait. Whoa, whoa, whoa. How do you know his skin is soft?
    (1:36:35)
  • Unknown C
You know, you brush your hand against his forearm, or whatever, he gives a hug at the end of the night. I'm telling you, the guy had supple skin, bro. It's the softest skin I've ever touched in my life. Anyways, that's not the point. It was a really fascinating dinner. He walked through his whole protocol, but at the end of it, I think it was Nikesh, the CEO of Palo Alto Networks, who was just like, give me the top three things.
    (1:36:38)
  • Unknown A
    Top three.
    (1:37:02)
  • Unknown C
And of the top three things, what I'll boil it down to is the top one thing, which is like 80% of the 80%: it's all about sleep. It was about how to get sleep, and he walked through his nighttime routine. And it's incredible. And it's straightforward, it's really simple. It's like how you do a wind-down. Anyways, I'll try to explain the wind-down briefly. Bryan goes to bed much earlier, so in our normal time, let's just say 10, 10:30. So for my time, I try to go to bed by 10:30. He's like, you need to be in bed. First of all, you need to stop eating three or four hours before, right? And I do that. I eat at 6:30, so I have about three hours. You're in bed by 9:30 or 10, and you deal with the self-talk, right? Like, okay, here's the active mind telling you all the things you have to fix in the morning.
    (1:37:03)
  • Unknown C
    Talk it out, put it in its place. Say I'm going to deal with this in the morning.
    (1:37:52)
  • Unknown A
    Write it down in a journal.
    (1:37:56)
  • Unknown C
    You're saying whatever you do so that you put it away, you cannot be on your phone.
    (1:37:57)
  • Unknown A
    That's got to be in a different room.
    (1:38:01)
  • Unknown C
You just got to be able to shut it down and then read a book, so that you're actually just engaged in something. And he said that he typically falls asleep within three to four minutes of getting into bed. And guess what? I tried it. I've been doing it since I had dinner with him on Monday. Last night I fell asleep within 15 minutes. The hardest part for me is to put the phone away. I can't do it.
    (1:38:04)
  • Unknown A
    Of course, of course. What about you, Naval? Tell us your one down and.
    (1:38:28)
  • Unknown B
Oh, yeah, I know. So I know Bryan pretty well, actually, and I joke that I'm married to the female Bryan Johnson, because my wife has some of his routines, but she's like the natural version, no supplements, and she's intense. And I think when Bryan saw my sleep score from my Eight Sleep, he was just like, you're gonna die. He's like, you're literally gonna die.
    (1:38:31)
  • Unknown A
    What are you guys, 70, 80?
    (1:38:54)
  • Unknown B
    No, it's. It's terrible. It's awful.
    (1:38:55)
  • Unknown A
    But it's like, what's your number? What's your number?
    (1:38:57)
  • Unknown B
30s, 40s, but, you know. Yeah, but it's also because I don't sleep much. I only sleep a few hours a night and I also move around a lot in the bed and so on. But it's fine. I never have trouble falling asleep. But I would say that, yes, Bryan's skincare routine is amazing. His diet is incredible. He's a genuine character. I do think a lot of what he's saying, minus the supplements (I'm not a big believer in supplements), does work. I don't know if it's necessarily going to slow down your aging, but you'll look good and you'll feel good. Yeah. Sleep is the number one thing. In terms of falling asleep, I don't think it's really about whether you look at your phone or not. Believe it or not, I think it's about what you're doing on your phone. If you're doing anything that is cognitively stressful or getting your mind to spin, then, yes. You think you can scroll
    (1:39:00)
  • Unknown C
TikTok and fall asleep? That's fine.
    (1:39:43)
  • Unknown B
Anything that's entertaining is fine. Like, you could read a book, right, on your Kindle or on your iPad, and I think it'd be fine for falling asleep. Or you can listen to, like, some meditation video or some spiritual teacher or something, and that'll actually help you fall asleep. But if you're on X or if you're checking your email, then, heck yeah, that's going to keep you up. So my hack for sleep is a little different. I normally fall asleep within minutes. And the way I do it is, you all have a meditation routine...
    (1:39:46)
  • Unknown C
    You have a set time. You have a set time.
    (1:40:13)
  • Unknown B
No, no, I sleep whenever I feel like it. Usually around one in the morning, two in the morning.
    (1:40:15)
  • Unknown C
    God damn, I'm in bed by 10. Yeah, I need to sleep.
    (1:40:19)
  • Unknown B
    I'm an owl. But if you want to fall asleep, the hack I found is everybody has tried some kind of a meditation routine. Just sit in bed and meditate. And your mind will hate meditation so much that if you force it to choose between the fork of meditation and.
    (1:40:21)
  • Unknown A
    Sleeping, you will fall asleep.
    (1:40:37)
  • Unknown C
    Well, okay, so if you don't fall.
    (1:40:39)
  • Unknown B
    Asleep, you'll end up meditating, which is great too.
    (1:40:41)
  • Unknown A
So you just do the body scan?
    (1:40:43)
  • Unknown C
The coda to this story: a friend of mine came to see me from the UAE, and he was here on Tuesday. And I was telling him about the dinner with Bryan. And he told me this story, because he's friends with Khabib, the UFC fighter. And he says, you know, when Khabib goes to his house, he eats anything and everything, fried food, pizzas, whatever. But he trains consistently. And my friend Abdallah says, how are you able to do that, and how does it not affect your physiology? He goes, I've learned since I was a kid, I sleep three hours after I train in the morning and I sleep 10 hours at night. And I've done it since I was, like, 12 or 13 years old.
    (1:40:47)
  • Unknown A
    That's a lot of sleep.
    (1:41:23)
  • Unknown C
    It's a lot of sleep.
    (1:41:24)
  • Unknown A
The direct correlation for me is, if I do something cognitively heavy, like, you know, big heavy-duty conversations or whatever, it keeps me up. So no heavy conversations at the end of the night, no existential conversations at night. And then if I go rucking, you know, on the ranch, I put on a 35-pound weight vest. I watch you do that at night.
    (1:41:26)
  • Unknown C
Before you go to bed?
    (1:41:46)
  • Unknown A
No, no, no, I do it anytime during the day; I typically do it in the morning or the afternoon. But the one-to-two-mile ruck with the 35 pounds, whatever it is, it just tires my whole body out, so that when I do lie down...
    (1:41:47)
  • Unknown C
    Is that why you don't prepare for the pod?
    (1:41:59)
  • Unknown A
You know, I mean, this pod is a top 10 pod in the world. Chamath, do you think it's an accident?
    (1:42:03)
  • Unknown C
Freeberg, what's your sleep routine? Can you just go to bed? Are you just like, well, I take a...
    (1:42:09)
  • Unknown A
Warm bath and I send J Cal a picture of my feet.
    (1:42:14)
  • Unknown D
I'll wait till J Cal's done. I do take a nice warm bath.
    (1:42:18)
  • Unknown A
    I nailed it.
    (1:42:22)
  • Unknown C
    But you do it every night. A warm bath?
    (1:42:23)
  • Unknown D
    Yeah, I do a warm bath every night.
    (1:42:26)
  • Unknown A
    With candles too.
    (1:42:28)
  • Unknown C
    And do you do it right before you go to bed?
    (1:42:29)
  • Unknown D
    Yeah, I usually do it after I put the kids down and I'll basically start to wind down for bed. I do watch TV sometimes, but I do have the problem and the mistake of looking at my phone probably for too long before I turn the lights off.
    (1:42:31)
  • Unknown C
    So do you have a consistent time where you go to bed or.
    (1:42:44)
  • Unknown D
    No, usually 11 to midnight and then up at 6:30.
    (1:42:45)
  • Unknown C
    Man, I need eight hours, otherwise I'm a mess.
    (1:42:53)
  • Unknown A
I'm trying to get eight. I hit between six and seven consistently. I try to go to bed in that 11 to 1 a.m. window and get up in the seven to eight window.
    (1:42:57)
  • Unknown D
My problem is if I have work to do, I'll get on the computer or my laptop, and then when I start that in my evening routine, I can't stop. And then all of a sudden it's, like, three in the morning and I'm like, oh no, what did I just do? And then I still have to get up at 6:30. So that does happen to me.
    (1:43:04)
  • Unknown B
So last night was unusual for me, but it was kind of funny anyway. I thought, oh, I should go to bed early because I'm on All-In.
    (1:43:19)
  • Unknown A
    Yeah.
    (1:43:25)
  • Unknown B
    But I ended up eating ice cream with the kids late.
    (1:43:26)
  • Unknown A
    Wait, what was the brand? You said you went for another brand. I want to know the brand.
    (1:43:30)
  • Unknown B
    I think it's Van Leeuwen or something like that.
    (1:43:34)
  • Unknown D
New York. They're in Brooklyn.
    (1:43:38)
  • Unknown B
    The holiday cookies and cream. Oh my God, so good.
    (1:43:40)
  • Unknown C
    Yeah, it's so good.
    (1:43:42)
  • Unknown B
Polished that off. Then I was like, I probably ate too much to go to bed, so I'd better work out. So I did a kettlebell workout.
    (1:43:44)
  • Unknown D
    You sound like Chamath.
    (1:43:52)
  • Unknown A
    What did you say?
    (1:43:53)
  • Unknown B
    I have eight kettlebells right here. Right, of course.
    (1:43:55)
  • Unknown D
Freeberg.
    (1:43:59)
  • Unknown A
This is called working out, Freeberg.
    (1:43:59)
  • Unknown B
And then while I'm doing my kettlebell suitcase carries, I was texting with an entrepreneur friend, so you can tell how intense my workout was. And he's in Singapore, so it was the middle of the night for me and early for him, and it was time to go to bed. I was like, okay, now I gotta get to bed. How do I get to bed? My body's all amped up. I've got food in my stomach. My brain is all amped up, and the All-In podcast is tomorrow, and what time is it? It's 1:30 in the morning. I better get to bed. So I put on, like, one of those little spiritual videos to calm me down, and then I got in bed and I was like, there's no way I'm falling asleep. And I started meditating, and five minutes later, I was asleep.
    (1:44:03)
  • Unknown A
You know, actually, the Dalai Lama, on his YouTube channel, he's got these great, like, two-hour discussions. You get about 20, 30 minutes into that, you will fall asleep.
    (1:44:45)
  • Unknown B
    Well, yeah, but my learning is.
    (1:44:54)
  • Unknown D
    Yeah. Watch any Dharma lecture from the SS lecture.
    (1:44:57)
  • Unknown B
Exactly. And my lesson, my learning, is that the mind will do anything to avoid meditation.
    (1:44:59)
  • Unknown C
    Yes.
    (1:45:06)
  • Unknown D
By the way, did you guys see, just before we wrap up, did you see all the confirmations? RFK Jr. confirmed. Brooke Rollins confirmed. By the way, if you look at Polymarket, Polymarket had it right. A couple of weeks ago, I was...
    (1:45:07)
  • Unknown B
Trying to, on Polymarket. There was a moment where it was free money, it fell to, like, 56%. There was a moment when RFK fell to 75%. But then they bounced back and it was done.
    (1:45:18)
  • Unknown A
You could have sniped that, man. You could have made money.
    (1:45:27)
  • Unknown D
Yeah, Polymarket had it. And the media was like, no way he's getting confirmed, this is not going to happen. But Polymarket knows. It's so interesting, huh?
    (1:45:29)
  • Unknown B
Well, I saw a very insightful tweet, and I forget who wrote it, so I'm sorry I can't give credit, but the guy basically said, look, Trump has a narrow majority in the House and the Senate, and he can get everything he wants as long as the Republicans stay in line. So all the pressure and all the anger that the MAGA movement is directing against the left is pointless. It's all about keeping the right wing in line. So it's all the people saying to the senators, hey, I'm going to primary you. It's Nicole Shanahan saying, I'm going to primary you. It's Scott Pressler saying, I'm moving to your district. That's the stuff that's moving the needle and causing the confirmations to go through. That's how you get Kash Patel. That's how you get Tulsi Gabbard at DNI. That's how you get RFK.
    (1:45:37)
  • Unknown A
    Do you worry about any of these? Do you think any of them are too spicy for your taste? Or you just like the whole burn it down, put in the crazy, like.
    (1:46:20)
  • Unknown D
Outsiders. And Jason, that's such a bad characterization. That's not a fair characterization.
    (1:46:28)
  • Unknown A
    I mean, the outsider, honestly, it's like.
    (1:46:33)
  • Unknown B
I never thought I'd see it, but I think between Elon and Sacks and people like that, we actually have builders and doers and financially intelligent and economically intelligent people in charge. And, you know, despite all the craziness, Elon's not doing this for the money. He's doing it because he thinks it's the right thing to do.
    (1:46:34)
  • Unknown C
And he's moved into the Roosevelt.
    (1:46:50)
  • Unknown B
I think for many months I had bought into the great-forces-of-history mindset, where it's just like, okay, it's inevitable, this is what's happening: government always gets bigger, always gets slower, and we just have to try and get stuff built before they shut everything down and we turn into Europe. But then Caesar crossed the Rubicon. The great man theory of history played out, and we're living in that time, and it's an inspiration to all of us, despite Sam Altman and Elon's current fighting. I know Sam was inspired by Elon at one point, and I think all of us are inspired by Elon. I mean, the guy can be a top Diablo player and do DOGE and run SpaceX and Tesla and Boring and Neuralink. I mean, it's incredibly impressive. That's why I'm doing a hardware company now. It makes me want to do something useful with my life.
    (1:46:53)
  • Unknown B
Elon always makes me question, am I doing something useful enough with my life? It's why I don't want to be an investor. Peter Thiel, ironically, is an investor, but he's inspirational in that way too, because he's like, yeah, the future doesn't just happen, you have to go make it. So we get to go make the future. And I'm just glad that Elon and DOGE and others are making the future.
    (1:47:38)
  • Unknown A
What do you got going on there?
    (1:47:57)
  • Unknown B
Maybe I'll reveal it on the podcast in a couple of months, but it's really hard. It's really difficult. I'm not sure I can pull it off. So let me try. Let me just make sure it's viable.
    (1:47:59)
  • Unknown C
    Is it drone related?
    (1:48:06)
  • Unknown A
    Is it self driving?
    (1:48:07)
  • Unknown B
    Drones are cool, but no, it's not.
    (1:48:09)
  • Unknown D
Maybe the All-In podcast should be an angel investor.
    (1:48:11)
  • Unknown A
    Oh yeah, let's do it.
    (1:48:13)
  • Unknown D
    No syndicate, Jason, just our money.
    (1:48:17)
  • Unknown A
Talk about... you know how I learned about syndicates? It was Naval. The first syndicate I ever did on AngelList is, I think, still the biggest, I don't know, 5%. And Naval's my partner on this, for Calm.com.
    (1:48:19)
  • Unknown B
    I think you'll love what I'm working on. If I pull it off, I think you guys will love it. I'd love to show you a demo.
    (1:48:31)
  • Unknown A
    Let us know where to send the check. Get the cherry Chip Van Leeuwen.
    (1:48:36)
  • Unknown C
    I love you guys.
    (1:48:39)
  • Unknown A
    What have we learned?
    (1:48:40)
  • Unknown C
    I gotta go. Okay, big shout out to Bobby and to Tulsi. That's a huge, huge, huge win for America.
    (1:48:41)
  • Unknown A
    I'm stoked about both of them. Yeah.
    (1:48:46)
  • Unknown B
    Congratulations.
    (1:48:48)
  • Unknown A
    I love me.
    (1:48:50)
  • Unknown D
    Thanks for coming, Bobby Kennedy. Let's get Bobby Kennedy back on the podcast.
    (1:48:51)
  • Unknown A
Bobby. Hey, Bobby, come back on the pod. For Rain Man David Sacks, your Sultan of Science David Friedberg, the chairman dictator Chamath Palihapitiya, and Namaste Naval, I am the world's greatest moderator, and I'll see you next time on the All-In Pod.
    (1:48:55)
  • Unknown C
    Namaste. Bye Bye.
    (1:49:17)
  • Unknown B
    Thanks, guys.
    (1:49:18)
  • Unknown A
    Let your winners ride Rain man David Sachs.
    (1:49:20)
  • Unknown E
And it said, we open sourced it to the fans, and they've just gone crazy with it.
    (1:49:27)
  • Unknown A
    Love you, Queen of Quinoa. Besties are gone.
    (1:49:31)
  • Unknown B
    That is my dog taking it out of your driveway.
    (1:49:43)
  • Unknown E
    Oh man.
    (1:49:48)
  • Unknown C
    My habit will meet me at. We should all just get a room and just have one big huge orgy. Cuz they're all just useless. It's like this like sexual tension that they just need to release somehow.
    (1:49:49)
  • Unknown A
    We need to get merch.
    (1:50:04)
  • Unknown B
I'm going all in.
    (1:50:13)