Support for this show comes from E-Trade from Morgan Stanley. When the markets are a mess, you need a team that can help you invest with confidence. With E-Trade's easy-to-use platform, plus access to Morgan Stanley's in-depth research and guidance, you'll have the information you need to make informed investment decisions, no matter the market conditions. Get started today at etrade.com/vox. E-Trade Securities LLC, member SIPC, a subsidiary of Morgan Stanley.
Support for On with Kara Swisher comes from Polestar. At Polestar, every inch of every vehicle they design is thoughtfully made. They're made to transform performance, accelerating from zero to 60 in less than 4.2 seconds with fully electric all-wheel drive. They're made to elevate the driving experience with LED lights and a panoramic glass roof. And they're made to uphold a greater responsibility to the planet, using sustainable materials and energy-saving systems. The result is a car that combines the best of today with the technology of tomorrow.
Pure performance, pure design. Polestar. Design yours and book a test drive today at polestar.com.
Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is nonprofit OpenAI — which is now very much for-profit and 100 percent scarier. Just kidding. Actually, I'm not kidding. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Nayeema Raza. It's amazing how an open-source nonprofit has moved to being a closed-source private company with a big deal with Microsoft. Are you shocked? No.
No, not even slightly. It's a huge opportunity. I'm in San Francisco now, and it's really jumping with AI. Crypto didn't quite work out, and all of those people moved to Miami, so it's very AI-oriented right now — everybody's thinking about a startup in AI. Are you more bullish on AI than Web3? Well, that's kind of a low bar. But yes, I've always been bullish on AI. I've talked about it a lot over the years, and this is just a version of it as it becomes more and more sophisticated and useful to people. So I've always thought it was important, and I think the key technologists in Silicon Valley have always thought it was important. Agreed. I was talking to a VC yesterday, though, about how so many things that are not AI are being billed as AI companies now, and they're really not — they might have, like, a large language model, but they're not quite AI. Yeah. Last episode, we had Reid Hoffman on talking about what was possible with AI, and now we have one of Reid's many mentees, Sam Altman. Sam is the CEO of OpenAI, and he leads the team that has given us ChatGPT and GPT-4.
He actually burst onto the scene as a young Stanford dropout, I think in 2005, with the startup Loopt, right? Is that when you met him? Yes. When he did Loopt, I visited him — it was a little startup, and it didn't do very well. It was a location-based kind of thing, a sort of geo-social network — not Facebook, let's just say. He was one of these many, many startup people who were all over the Valley, very smart, but the company didn't quite work out. Yeah, it kind of went bust, I think, not many years later. But he became super important in the Valley, especially in my generation — he's about my age — because of Y Combinator. He led the startup accelerator that has incubated and launched Stripe, Airbnb, Coinbase. He got there later — it was working before he got there — but he really took it to new heights. He was a very young man when he took over in 2014 or so, and he really invigorated it and was very involved in the startup scene. It was a great role for him: he was a great cheerleader, and he's good at eyeing good startups. Do you see him as kind of one of the Elon Musks, Peter Thiels, Reid Hoffmans of his generation? Kind of, yeah. There are a lot of really smart people, but he's definitely special, and he had a bigger mentality, more like Reid than the others — although they had it initially, not Peter Thiel — he was thinking of big things with his startups, then AI. And I really like him. I've gotten to know him pretty well over the years, and I've always enjoyed talking to him. He's very thoughtful, and he's got a lot of interesting takes on things. And this is a really big deal, now that he's sort of landed on taking OpenAI to these heights. Yeah, he has. He, like you, once entertained the notion of running for office in California — he thought about running for governor, something I think you've talked to him about. Yeah, we talked about it, but he went on to revolutionize AI instead. So do you think that's better or worse for humanity? I don't know, we'll see. California is probably easier to fix than what we're going to do about AI once it gets fully deployed. Although, you know, the whole issue is that there are lots of great things and lots of bad things, and we want to focus on both, because, as I say, it's like when the internet started
and we didn't know what it was going to be. I think a lot of people are being very creative about what this could be and the problems it could solve — and, at the same time, the problems it could create. Do you think the fear is overblown — our jobs are at risk, AI is going to, you know, those stories? Yes. Yes. It's like asking what the car has done for us, or electric lights, or something like that. Things will change, as they always do, and so I've always thought most of the fears are overblown. But as I say in the book I'm working on right now — which is why I'm in San Francisco — everything that can be digitized will be digitized. That's just inevitable, and that's where it's going. So this will soon be two bots talking to each other. No, no — but search is so antiquated when you think about it. Typing words into a box is really Neanderthal in many ways, and this is an upright Homo sapien. Well, it's been interesting, because critics kind of swarmed on ChatGPT early on, and Sam was pushing back on Twitter, saying, just wait for the next iteration. We now have GPT-4 — we couldn't do the interview with him until GPT-4 was out — but the model still has many issues, and he himself has noted this. He tweeted that it's still flawed, still limited, and that it still seems more impressive on first use than it does after you spend more time with it. That was about GPT-4. Yep. I would agree, and that's a very interesting thing, because the fact that it's more impressive at first blush than it is after you use it is part of the problem. I've been using my ChatGPT Plus, and it pulls up all kinds of interesting things — like, write me a research paper — and it will look really good, and it will have a bunch of false information in it. So does this compound the misinformation problem, when something looks slick but isn't informed? Right — data in, data out; crap in, crap out. That's a very simplistic way of saying it, but it's like how the early internet really sucked too, and now it kind of doesn't — it sort of does, and there are great things about it. But if you looked at early Yahoo and others — Google came much later — it was a lot of bubble gum and baling wire. All right, well, let's see what Sam Altman has to say, and whether he feels confident in the choice of having done OpenAI versus running for governor of California. We'll take a quick break and be back with the interview.
Vox Creative. This is advertiser content brought to you by Lenovo. Lenovo is sending people off to a desert island. Hi, I'm Mark Hearst, and I'm a solutions manager for hybrid cloud for Lenovo workstations. I get to work with lots of awesome companies, everywhere from Formula One to healthcare to media and entertainment. The goal for Mark and Lenovo workstations is to help their users connect with their work from anywhere. We have to be able to support the people in the cities, but also the people in remote locations. So the question then becomes: how do you work from a desert island? First, identify the challenges. One is power, another is cooling, and a big one is network connectivity. Right now you've got this workstation, and the one thing it lacks is flexibility. The alternative is, if you were able to connect to that system just using a satellite and connect back — with all your data somewhere else — it's going to be a lot easier. By using Lenovo's remote workstation solutions with TGX software, teams can connect from anywhere. The Lenovo P620 includes the AMD Ryzen Threadripper PRO processor — amazing power that can be accessed from anywhere and everywhere on the planet. To learn more, visit Lenovo.com and look for the ThinkStation P620.
Support for On with Kara Swisher comes from Polestar. Polestar is an electric vehicle company driven by sustainable design. Every inch of their vehicles is built to maximize form, function, and our future: designed to accelerate from 0 to 60 in less than 4.2 seconds with a fully electric all-wheel-drive system, designed with a sleek exterior using frameless mirrors and a panoramic glass roof, and designed with a carefully crafted cabin utilizing completely sustainable materials. This is an electric vehicle unlike any other.
Pure performance, pure design. Polestar. Design yours and book a test drive today at polestar.com.
Sam, it's great to be in San Francisco — rainy San Francisco — to talk to you in person. We need the rain. I know, this atmospheric river is not kidding; I got soaked on the way here. I miss San Francisco. I'm here for a comeback — I'm trying to convince myself of it every moment I'm here. I agree, it's time to come back. I love San Francisco; I've never really left it in my heart. So, you started Loopt — that's where I met you. Explain what it was. It was a location-based social app for mobile phones, right? What happened? The market wasn't there, I'd say, is the number one thing. Yeah? Because — well, I think you can't force a market. You can have an idea about what people are going to like, and as a startup, part of your job is to be ahead of it. Sometimes you're right about that and sometimes you're not. Sometimes you make Loopt, sometimes you make OpenAI. Yes, right, exactly. But you started OpenAI in 2015, after being at Y Combinator, and late last year you launched ChatGPT. Talk about that transition you had, when you had reinvigorated Y Combinator in a lot of ways.
I was handed such an easy task with Y Combinator. I mean, I don't know if I reinvigorated it — it was already a sort of super great thing by the time I took over. It seemed to me it got more prominence; you changed things around. I didn't mean to say it was failing. No, not at all. I think I scaled it more, and we took on longer-term, more ambitious projects. OpenAI actually got started while I was at YC, and we founded other companies, some of which I'm very closely involved with, like Helion, the nuclear fusion company. Those were going to take a long time, so I definitely had a thing I was passionate about, and we did more of it. But I mostly just tried to keep PG and Jessica's vision going there. That's Paul Graham and Jessica Livingston. So you shifted to OpenAI — why was that? When you're in this position, which is a high-profile position in Silicon Valley — sort of king of startups, essentially — why go off? Is it that you wanted to be an entrepreneur again? No, I didn't. I'm not a natural fit for being a CEO; being an investor, really, I think suits me very well. But I got convinced that AGI was going to happen and be the most important thing I could ever work on. I think it is going to transform our society in many ways. And, you know, I won't pretend that as soon as we started OpenAI I was sure it was going to work, but it became clear over the intervening years — and certainly by 2018, 2019 — that we had a real chance here.
What was it that made you think that? A number of things — it's hard to point to just a single one — but by the time we made GPT-2, which was still weak in a lot of ways, you could look at the scaling laws and see what was going to happen. I thought, this can go very, very far, and I got super excited about it. I've never stopped being super excited. Was there something specific you saw — that it just scaled, or what was it? Yeah, it was looking at the data of how predictably better we could make the system with more compute and more data.
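To make that scaling-law point concrete, here is a small illustrative sketch — not anything from OpenAI. Published scaling-law work fits power-law curves of roughly the form loss ≈ a · compute^(−b) to measured training runs; the numbers below are invented just to show the shape of that exercise.

```python
# Illustrative only: a toy power-law fit of the kind behind "scaling laws."
# The compute and loss values are made up; real studies fit curves like
# L(C) = a * C**(-b) to many measured training runs.
import numpy as np

compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])  # hypothetical compute budgets (arbitrary units)
loss    = np.array([4.1, 3.3, 2.7, 2.2, 1.8])  # hypothetical validation loss at each scale

# Fit log(loss) = log(a) - b * log(compute): a straight line in log-log space.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

print(f"fitted power law: loss ~ {a:.2f} * compute^(-{b:.3f})")
# The predictability Altman describes: extrapolate to 10x more compute.
print(f"predicted loss at 1e8 units of compute: {a * 1e8 ** (-b):.2f}")
```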
And there was already a lot of stuff going on at Google with DeepMind — they had bought that earlier, right around then. Yeah, there had been a bunch of stuff, but somehow it wasn't quite the trajectory that has turned out to be the one that really works. But in 2015 you wrote that superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. Explain — do you still think so? Okay, all right, we're going to get into that. Why did you write that then? And yet you've also called it the greatest technology ever. I still believe both of those things. I think at this point more of the world would agree with that; at the time it was considered an extremely crazy position. So explain — you wrote that it's probably the greatest threat to the continued existence of humanity, and also one of the greatest technologies that could improve humanity. Spell those two things out. Well, I think we're finally seeing little previews of this with ChatGPT, and especially now that we've put GPT-4 out, people can see the vision. Just to pick one example out of the thousands we could talk about: everyone in the world can have an amazing AI tutor on their phone with them all the time, for anything they want to learn. That's really — we need that. That's wonderful; that will make the world much better. The creative enhancement that people are able to get from using these tools to do whatever their creative work is — that's fantastic. The economic empowerment, all of these things. And again, we're seeing this only in the most limited, primitive, larval way. But at some point it's like, well, now we can use these things to cure disease.
So what is the threat? Because when I try to explain it to regular people — people who don't quite grasp it yet — No, you're not a regular person. I'm so offended. I'm not a regular person, but when the internet started, nobody knew what it was going to do. When you call superhuman machine intelligence the greatest threat, what do you mean by that? I think there are levels of threats. Today, we can look at these systems and say, all right, no imagination required: we can see how this can contribute to computer security exploits, or disinformation, or other things that can destabilize society. Certainly there's going to be economic transition. And those are not in the future — those are things we can look at now. In the medium term, I think we can imagine these systems getting much, much more powerful. Now, what happens if a really bad actor gets to use them and tries to figure out how much havoc they can wreak on the world, or how much harm they can inflict? And then we can go further to all of the traditional sci-fi scenarios — what happens with a kind of runaway AGI, or anything like that. Now, the reason we're doing this work is that we want to minimize those downsides while still letting society get the big upsides, and we think it's very possible to do that. But it requires, in our belief, this continual deployment in the world, where you let people gradually get used to the technology, where you give institutions, regulators, and policymakers time to react to it, where you let people feel it and find the exploits. The creative energy of the world will come up with use cases we and all the red-teamers we could hire would never imagine. And so we want to see all of the good and the bad, and figure out how to continually minimize the bad and improve the benefits. You can't do that in the lab. And this idea that we have — that we have an obligation, and society will be better off, if we build in public, even if it means making some mistakes along the way — I think that's really important.
When people critiqued ChatGPT, you essentially said, wait for GPT-4. Now that it's out, has it met expectations? A lot of people seem really happy with it. There are plenty of things it's still not good at. Yeah, I'm proud of it. Again, a very long way to go, but it's a step forward, and I'm proud of it. You tweeted that at first glance GPT-4 seems, quote, "more impressive than it actually is." Why is that? Well, I think that's been an issue with every version of these systems, not GPT-4 in particular. You find these flashes of brilliance before you find the problems. Something someone used to say about GPT-3 that has really stuck with me is that it is the world's greatest demo creator, because you can tolerate a lot of mistakes there. But if you need a lot of reliability for a production system, it wasn't as good at that. Now, GPT-4 makes fewer mistakes — it's more reliable, more robust — but there's still a long way to go.
One of the issues is hallucinations, as they're called — which is kind of a creepy word, I have to say. What would you call it instead? Mistakes, or something. "Hallucinations" feels like it's sentient. It's interesting — "hallucinations," that word doesn't trigger sentience for me, but I really try to make sure we're picking words that are in the tools camp, not the creatures camp, because I think it's tempting to anthropomorphize this in a really bad way. That's correct. And as you know, there was a series of reporters wanting to date GPT-3. But anyway, sometimes the bot just makes things up out of thin air — that's when hallucinations happen. It'll cite research papers or news articles that don't exist. You said GPT-4 does this less than GPT-3. We shouldn't give them actual names, but — That's how we'd anthropomorphize. I think it's good that it's letters plus a number, not a name like Barbara. Anyway, it still happens. Why is that?
So, these systems are trained to do something, which is predict the next word in a sequence. It's trying to complete a pattern, and given its training set, this is the most likely completion. That said, the decrease from 3 to 3.5 to 4 is, I think, very promising. We track this internally, and every week we're able to get the number lower and lower. I think it'll require a combination of model scale and new ideas. Is a lot of that model scale — more data? Not just more data, but more compute thrown at the problem; human feedback, people flagging the errors for us; and developing new techniques so the model can tell when it's about to go off the rails. Saying, this is a mistake. Yeah. One of the issues is that it obviously compounds a very serious misinformation problem. Yes. So we pay experts to flag things, to go through them for us — not just bounties, but we employ people, we have contractors, we work with external firms. We say, we need experts in this area to help us go through and improve things. You don't want to rely totally on random users doing whatever, trying to troll you, or anything like that. So: humans, more compute — what else reduces it? I think there is going to be a big new algorithmic idea — a different way that we train or use or tweak these models, a different architecture, perhaps. I think we'll find that at some point. Meaning what, for the non-techie — a different architecture? It could be a lot of things. You could say a different algorithm, but really just some different idea of the way we create or use these models, one that encourages them, during training or at inference time when you're using them, to really ground themselves in truth and be able to cite sources. Microsoft has done some good work there. We're working on some things.
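As a brief aside for readers, here is a toy sketch of the "predict the next word in a sequence" idea Altman describes. It is not how GPT-4 works internally — real models use learned neural networks over subword tokens — but it shows the basic mechanic with a hand-made probability table: pick or sample a continuation from a distribution over next words, which is also where the randomness in responses comes from.

```python
# A toy illustration of next-word prediction; the probability table is invented.
import random

# Hypothetical continuation probabilities for a single three-word context.
next_word_probs = {
    ("the", "cat", "sat"): {"on": 0.70, "down": 0.20, "quietly": 0.10},
}

def complete(context, greedy=True):
    """Pick the most likely next word (greedy) or sample one (adds randomness)."""
    probs = next_word_probs[tuple(context)]
    if greedy:
        return max(probs, key=probs.get)
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(complete(["the", "cat", "sat"]))                # always "on"
print(complete(["the", "cat", "sat"], greedy=False))  # usually "on", sometimes not
```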
So talk about the next steps. How does this move forward? I think we're on a very long-term exponential — and I don't mean that just for AI, although for AI too. I mean cumulative human technological progress. It's very hard to calibrate on that, and we keep adjusting our expectations. If we had told you five years ago that we'd have GPT-4 today, you'd maybe be impressed. But if we told you four months ago, after you used ChatGPT, that we'd have GPT-4 today, probably not that impressed. And yet it's the same continued exponential. So maybe where we get to a year from now, you're like, yeah, it's better — but sort of the way the new iPhone is always a little better. But if you look at where we'll be in ten years, I think you'd be pretty impressed. Right — actually, the old iPhones were not as impressive as the new ones. For sure, but it's been such a gradual process. Correct — unless you hold the original one and this one back to back. Right — I just saw an old one the other day, actually. Interestingly enough, that's a very good comparison. You're getting criticism for being secretive.
You've said competition and safety require that. Critics say that's a cop-out — that it's just about competition. What's your response? I mean, it's clearly not — we make no secret of the fact that we would like to be a successful effort, and I think that's fine and good, and we try to be clear about it. But we have also made many decisions over the years in the name of safety that were widely ridiculed at the time and that people later came to appreciate. Even with the early versions of GPT, when we talked about not releasing model weights, or releasing them gradually because we wanted people to have time to adapt — we got ridiculed for that, and I totally stand by that decision. Would you like us to push a button, open-source GPT-4, and drop those weights into the world? Probably not. Probably not. One of the excuses that always gets used is: you don't understand it, we need to keep it in the black box. It's often about competition. Well, for us it's the opposite. We've said all along — and this is different from what most other AGI efforts have thought — that everybody needs to know about this. AGI should not get built in a secret lab with only the people who are privileged and smart enough to understand it. Part of the reason we deploy this is that I think we need the input of the world, and the world needs familiarity with what is in the process of happening, the ability to weigh in, to shape this together. We want that, we need that input, and people deserve it. So I think we're not the secretive company — we're quite the opposite. We put the most advanced AI in the world in an API that anybody can use. I don't think that if we hadn't started doing that a few years ago, Google or anybody else would be doing it — they would just be using it secretly themselves.
So you think you're forcing it out into the open? But you are in competition. And let me go back to one of your original funders, Elon Musk. He's been openly critical of OpenAI, especially as it's gone to profits. He tweeted: "OpenAI was created as an open source (which is why I named it 'OpenAI'), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all." We're talking about open source versus closed, but what about his critique that you're too close to the big guys? I mean, most of that is not true. Let's go through it. We're not controlled by Microsoft — Microsoft doesn't even have a board seat. We are an independent company with an unusual structure, where we can make very different decisions than most companies do. A fair part of it is that we don't open-source everything anymore. We've been clear about why; we think we were wrong there originally. We still do open-source a lot of stuff. Open-sourcing CLIP was something that kicked off this whole generative-image world. We recently open-sourced Whisper. We open-source tools, and we'll open-source more stuff in the future. But I don't think it would be good, right now, for us to open-source GPT-4, for example. I think that would cause some degree of havoc in the world — or at least there's a chance of that; we can't be certain that it wouldn't. And by putting it out behind an API, we're able to get many — not all, but many — of the benefits we want of broad access to this, of society being able to understand it, update, and think about it. And when we find some of the scarier downsides, we're able to fix them.
How do you respond to that part — that you're a closed-source, maximum-profit company? I'll leave out "controlled by Microsoft," but you are in a strong partnership with Microsoft. What do you have to say against what he said? I remember years ago, when he talked about this — it was something he talked about a lot, and he was worried: oh, we don't want these big companies to run it; if they run it, we're doomed. He was much more dramatic than most people. So, we're a capped-profit company. We invented this new structure where we started as a nonprofit. Explain that — explain what a capped profit is. Our shareholders, which is our employees and our investors, can make a certain return — their shares have a certain price that they can get to. But if OpenAI goes and becomes a multi-trillion-dollar company, or whatever, almost all of that flows to the nonprofit that controls us. The cap isn't fixed forever — it has varied as we've had to raise more money — but it's much, much smaller than, and will remain much smaller than, what you'd see at any tech company. What is it, in terms of a number? I truly don't know. But the nonprofit gets a significant chunk of the revenue? Well, it gets everything over a certain amount. So if we're not very successful, the nonprofit might get a little bit along the way, but it won't get any appreciable amount. The goal of the capped profit is that, in the world where we do succeed at making AGI and we have a significant lead over everybody else, it could become much more valuable, I think, than maybe any company out there today. That's when you want almost all of it to flow to the nonprofit.
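For readers, here is a purely hypothetical illustration of how a capped-profit split works in principle. The 100x cap and the dollar amounts are invented for the example; Altman says above he doesn't know the exact number, and OpenAI's actual terms aren't spelled out here.

```python
# Hypothetical capped-profit arithmetic; the cap multiple and figures are invented.
def split_proceeds(invested, cap_multiple, total_return):
    """Return (amount to capped investors, amount to the controlling nonprofit)."""
    investor_cap = invested * cap_multiple          # the most investors can ever receive
    to_investors = min(total_return, investor_cap)
    to_nonprofit = max(total_return - investor_cap, 0.0)
    return to_investors, to_nonprofit

# Example: $1B invested at a hypothetical 100x cap, under three outcomes.
for total in (50e9, 100e9, 1e12):
    inv, npf = split_proceeds(1e9, 100, total)
    print(f"total ${total/1e9:,.0f}B -> investors ${inv/1e9:,.0f}B, nonprofit ${npf/1e9:,.0f}B")
```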
Right. I want to get back to it — he was very adamant at the time, and again overly dramatic, that Google and Microsoft and Amazon were going to kill you, I think he had those kinds of words, and that there needed to be an alternative. What changed, in your estimation, from that idea? It was very simple: when we realized the level of capital we were going to need. Scaling turned out to be far more important than we thought, and we already thought it was going to be important. We tried for a while to find a path to that level of capital as a nonprofit; there was no one willing to do it. We didn't want to become a fully for-profit company. We wanted to find something that would let us get access to the power of capitalism to finance what we needed to do, but still be able to fulfill, and be governed by, the nonprofit mission. So having this nonprofit that governs the capped-profit LLC, given the playing field that we saw at the time — and I think that we still see now — was the way to get to the best of all the worlds we could see. In a really well-functioning society, I think this would have been a government project. That's correct — I was just going to make that point. The government would have been your funder. We talked to them. And it wouldn't just have been that they would have been our funder — they would have started the project. We've done things like this before in this country. Right, sure. But the answer is not to just say, oh well, the government doesn't do stuff like this anymore, so we're just going to sit around and let other countries run right by us and get to AGI and do whatever they want to us. We're going to look at what's possible on this playing field. So, Elon used to be the co-chair, and you have a lot of respect for him.
I assume you've thought deeply about his critiques. Have you spoken to him directly? Was there a break? You two were very close. We've spoken directly recently, yeah. And what do you make of the critiques when you hear them from him? I mean, he can be quite in-your-face about this. He's got his style. Yeah, and he doesn't say positive things about you. Look, I believe he really does care about a good future with AGI. That is correct. And he's — I mean, he's a jerk, whatever else you want to say about him. He has a style that is not a style I'd want for myself. He has changed. But I think he does really care, and he is feeling very stressed about what the future is going to look like. For humanity. Yeah. He did apply that both — when we did an interview at Tesla, he said, if this doesn't work, we're all doomed, which was sort of centered on his car, but nonetheless he was correct. And it's the same thing with this. This was something he talked about almost incessantly: the idea of AI either taking over and killing us, or maybe it just doesn't really care. Then he decided we were like anthills — do you remember that? I don't — He said, you know how, when we're building a highway, anthills are there and we just pave over them without thinking about it? So it doesn't really care. And then he said we're like a house cat, and maybe they'll feed us and pet us, but they won't really care about us. It went on and on — it changed and iterated over time. But the critique of his that I would most agree with is that these big companies would control this and there couldn't be innovation in the space. Well, I wish there were evidence against that — Except Microsoft, and that's right there.
They're a big investor, but, again, not even a board member. So you think — We have truly full independence from them. So you think you're a startup in comparison, with a giant partner? Yeah, I think we're a startup with a giant partner. I mean, we're a big startup at this point. And there was no way to be a nonprofit that would work? I mean, if someone wants to give us tens of billions of dollars of nonprofit capital — Yeah, come on down. — or the government, which they're not. You tried. Now he and others are working on different things — he has an anti-woke AI play.
Greg Brockman also said you guys made a mistake by creating AI with a left-leaning political bias. What do you think of the substance of those critiques? Well, I think — That was your co-founder. Yeah. I think the reinforcement learning from human feedback on the first version of ChatGPT was pretty left-biased, but that is now no longer true; it's just become an internet meme. There are some people who are intellectually honest about this: if you go look at ChatGPT with GPT-4 and test it, it's relatively neutral. Not to say we don't have more work to do. The main thing, though, is that I don't think you ever get two people to agree that any one system is unbiased on every topic. So giving users more control, and also teaching people how these systems work — that there is some randomness in the responses, that the worst screenshot you see on Twitter is not representative of what these things do — I think that's important. So when you said it had a left-leaning bias, what did that mean to you? And of course they'll run with that — they'll run with it quite far. People would give it these tests that score you on the political spectrum in America, or whatever, where one would be all the way on the right and ten would be all the way on the left, and it would get, like, a 10 on all of those tests — the first version. Why? For a number of reasons, but largely because of the reinforcement-learning-from-human-feedback stuff.
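Since reinforcement learning from human feedback comes up a few times here, a brief aside: the core ingredient is a reward model trained on human preference comparisons between two candidate answers. The sketch below is a toy version of that preference-learning step only — a two-weight linear model on made-up features — not OpenAI's pipeline, which trains a neural reward model and then fine-tunes the language model against it.

```python
# Toy preference learning (the reward-model step of RLHF); features and data are invented.
import numpy as np

# Each response is described by hypothetical features: [helpfulness, rudeness].
# Each pair is (features of the response a rater chose, features of the one rejected).
pairs = [
    (np.array([0.9, 0.1]), np.array([0.4, 0.2])),
    (np.array([0.7, 0.0]), np.array([0.8, 0.9])),
    (np.array([0.6, 0.2]), np.array([0.3, 0.8])),
]

w = np.zeros(2)   # reward-model weights
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Bradley-Terry style updates: push reward(chosen) above reward(rejected).
for _ in range(200):
    for chosen, rejected in pairs:
        p = sigmoid(w @ chosen - w @ rejected)     # P(chosen is preferred)
        w += lr * (1.0 - p) * (chosen - rejected)  # gradient ascent on log-likelihood

print("learned weights (helpfulness, rudeness):", w.round(2))
# Helpfulness ends up positive, rudeness negative: the model absorbs the raters'
# preferences, which is also how raters' biases can end up in the system.
```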
We'll be back in a minute.
Support for On with Kara Swisher comes from NerdWallet. It feels like the moment you start thinking about signing up for a new credit card, your mail becomes about 95 percent junk offers. There are so many options, it can be hard to find the cash-back credit card that's right for you. NerdWallet can help you make smart decisions by comparing top financial products side by side to find a winner. NerdWallet's team of nerds use their expertise to help you make smart financial decisions. NerdWallet can help you turn that infinite array of offers into a few top options, with objective reviews and side-by-side comparisons. NerdWallet can help you find a cash-back card with bonus percentages in the categories you spend the most in, like gas or groceries. All that cash back would be perfect for anyone looking to plan a few more road trips next year. Ready to make a financial decision? Compare and find top credit cards, savings accounts, and more at NerdWallet.com. NerdWallet: the smartest decision for all your financial decisions.
Is it possible to be an optimist anymore, with so much difficult news? Some might say optimism requires a set of rose-colored glasses — going around pretending everything's just fine. Technology leader Barbara Humpton, CEO of Siemens USA, offers a different perspective. Optimism, she says, isn't about looking away from problems; it's about looking right at them while believing you can find solutions — which is exactly what Barbara does on the Optimistic Outlook podcast. Take the climate crisis: the podcast details technologies we can use today to decarbonize industry and infrastructure, addressing three-quarters of all global carbon emissions. That's the Optimistic Outlook. Subscribe wherever you listen to podcasts.
What do you think is the most viable threat to OpenAI? I hear you're watching Claude very carefully — that's the bot from Anthropic, a company founded by former OpenAI folks and backed by Alphabet. Is that it? And we're recording this on Tuesday: Bard launched today. I'm sure you've been discussing it internally. Talk about those two, to start. Honestly, I try to pay some attention to what's happening with all these other things. It's going to be an unbelievably competitive space — this is the first new technological platform in a long period of time. The thing I worry about the most is not any of those, because I think there's room for a lot of people, and also I think we'll just continue to offer the best product. The thing I worry about the most is that we're somehow missing a better approach. Everyone's chasing us right now on large language models trained in more or less the same way. I don't worry about them; I worry about the person who has some very different idea about how to make a more useful system. Like a Facebook, probably, would be your first worry, to be honest — like a Facebook in the 2000s. Not Facebook itself — no, Facebook's not going to come up with anything unless Snapchat does, and then they'll copy it. I'm teasing. Sort of. But you don't feel like these other efforts — they're sort of in your same lane, you're all competing — that there's one that is — That's not what I would worry about, really: the people who are trying to do exactly what we're doing. Scrambling and muscling in. But is there one that you're watching more carefully? Not especially, really. I kind of don't believe you. Really — I mean, no. The things I pay the most attention to are not, like, language-model startup number 217. It's when I hear about someone where it's, like, three smart people in a garage with some very different theory of how to build AGI. That's when I pay attention. Is there one that you're paying attention to now? There is one. I don't want to say. Okay, you really don't want to say. All right. What's the plan for making money?
So, we have a platform, which is this API that anyone can use to get to the model, and then we have a consumer product on top of it. The consumer product is 20 bucks a month for the sort of premium version, and for the API you just pay us per token — basically like a meter. And businesses would do that, depending on what they're using — if they decide to deploy it at a hotel or wherever, the more you use it, the more you pay. The more you use it, the more you pay. One of the things that someone said to me, which I thought was very smart, is that if the original internet had started on more of a paid-subscriber basis rather than an advertising basis, it wouldn't be quite so evil. I am excited to see if we can really do a mass-scale, subscription-funded, not ad-funded, business here.
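As a rough aside on what "pay per token, like a meter" means in practice, here is a back-of-the-envelope comparison with the flat $20-a-month tier mentioned above. The per-token price and the usage figure are hypothetical placeholders, not OpenAI's actual rates, which vary by model and change over time.

```python
# Hypothetical metered-API vs. flat-subscription comparison; prices and usage are invented.
def api_cost(tokens, price_per_1k=0.03):   # assume $0.03 per 1,000 tokens for illustration
    return tokens / 1000 * price_per_1k

subscription = 20.00        # the roughly $20/month consumer tier mentioned above
monthly_tokens = 500_000    # hypothetical monthly usage

metered = api_cost(monthly_tokens)
print(f"metered: ${metered:.2f}/month vs. subscription: ${subscription:.2f}/month")
print("metered is cheaper" if metered < subscription else "subscription is cheaper")
```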
Do you see ads funding this? That, to me, is the original sin of the internet. We've made the bet not to do that. I'm not opposed to it, maybe — I don't know. It's going great with our current model; we're happy about it. You've also been competing against Microsoft for clients — they're trying to sell your software through their Azure cloud business as an add-on. Actually, that I don't — that's fine, I don't care. You don't care? But you're also trying to sell directly, sometimes to the same clients. You don't care about that? I don't care. How does it work? Does it affect your bottom line? Again, we're an unusual company here. We don't need to squeeze out every dollar.
Former Googler Tristan Harris, who's become a critic of how tech is sloppily developed, presented to a group of regulators in D.C. — I was there. Among the points he made is that you've essentially kicked off an AI arms race. I think that's what struck me the most: Meta, Microsoft, Google, Baidu rushing to ship generative AI bots while the tech industry is shedding jobs. Microsoft recently laid off an ethics and society team within its AI org — that's not your issue, but are you worried about a profit-driven arms race? I do think we need regulation and we need industry norms on this. I am disappointed to see — I mean, we spent many, many months, and really the years it's taken us to get good at making these models, getting them ready before we put them out. It obviously became somewhat of an open secret in Silicon Valley that we had GPT-4 done for a long time, and there were a lot of people who were like, you've got to release this now, you're holding this back from the world — it exists, you're "closed AI," whatever. But we just wanted to take the time to get it right. There's a lot to learn here, and it's hard. In fact, we try to release things that help people get it right, even competitors. I am nervous about the shortcuts other companies now seem to want to take, such as rushing out these models without all the safety features built. Without safety features. So this is an arms race: they want to get in here and get ahead of you, because you've had the front seat. Maybe they do, maybe they don't. They're certainly making some noise like they're going to. So when you say you're worried — what can you do about it? Nothing? Well, we can and do try to talk to them and explain: hey, here are some pitfalls, here are some things we think you need to get right. We can continue to push for regulation. We can try to set industry norms. We can release things that we think help other people get toward safer systems faster.
Can you prevent that, though? Let me read you this passage from a story about Stanford doing it — they trained one of their own models. Six hundred dollars, I think it cost them to train a model. Yeah, they did. It's called Stanford Alpaca. Just so you know, it's a cute name — I'll send you the story. So what's to stop basically anyone from creating their own pet AI now, for a hundred bucks or so, and training it however they choose? OpenAI's terms of service say you may not use output from the services to develop models that compete with OpenAI, and Meta says it's only letting academic researchers use LLaMA under a non-commercial license at this stage — although that's a moot point, since the entire LLaMA model was leaked onto 4chan. And this is a $600 version of yours. One of the other reasons we want to talk to the world about these things now is that this is coming — this is totally unstoppable. There are going to be a lot of very good open-source versions of this in the coming years, and it's going to come with wonderful benefits and some problems. By getting people used to this now, by getting regulators to begin to take this seriously and think about it now, I think that's our best path forward.
All right. Two things I want to talk about: societal impact and regulation. You've said — well, you told me — that this will be the greatest technology humanity has ever developed. In almost every interview you're asked about the dangers of releasing AI products, and you say it's better to test it gradually, in the open, quote, "while the stakes are relatively low." Can you expand on that? Why are the stakes low now — why aren't they high right now? "Relatively" is the key word. Okay, what happens to the stakes if it's not controlled now? Well, these systems are now much more powerful than they were a few years ago, and we are much more cautious than we were a few years ago in terms of how we deploy them. We've tried to learn what we can learn. We've made some improvements. We've found ways that people want to use this. You know, in this interview — and I totally get why — on many of these topics I think we're mostly talking about the downsides. I'm going to ask you about the upside, okay? But we've also found ways to improve the upsides, by learning to mitigate downsides and maximize upsides. That sounds good. And it's not that the stakes are that low anymore — in fact, I think we're in a different world than we were two years ago. I still think they are relatively low compared to where we'll be a few years from now. These systems still have classes of problems, but there are things that are totally out of reach now that we know they'll be capable of, and the learnings we have now, the feedback we get now, seeing the ways people hack it, jailbreak it, whatever — that's super valuable. But I'm curious how you think we're doing. I think you're saying the right things. You're absolutely — Not what we're saying — how do you think we're doing?
The reason people are so worried — and I think it's a legitimate worry — is because of the way the early internet rolled out. It was gee-whiz almost the whole time. Yeah — almost all up and to the right. Gee whiz, look at these rich guys, isn't this great, doesn't this help you — and they missed every single consequence, never thought of them. I remember seeing Facebook Live and saying, what about people who kill each other on it? What about murders, what about suicides? And they called me a bummer — "a bummer in this room." And I'm like, yeah, I'm a bummer. I just noticed that when people get ahead of tools — and, you know, this is the Brad Smith question, is it a tool or a weapon — the weapon seemed to come up a lot. The same thing happened with the Google founders when they were trying to buy Yahoo many years ago. I said, at least Microsoft knew they were thugs, and they called me and said, that's really hurtful, we're really nice. I said, I'm not worried about you — I'm worried about the next guy. I don't know who runs your company in 20 years with all that information in hand. So yes, I am a bummer. And if you don't know what it's going to be — well, you can think of all the amazing things it's going to do, and it'll probably be a net positive for society. But net positive isn't so great either, sometimes, right? The internet's a net positive, electricity's a net positive, but — it's the famous line — when you invent electricity, you invent the electric chair; when you invent this, you invent that. So what would be the thing here, the greatest thing? Does it outweigh some of the dangers?
I think that's going to be the fundamental tension we face — that we have to wrestle with, that the field as a whole has to wrestle with, that society has to wrestle with. Especially in this world we live in now, which I think we can all agree has not gone forward — it's spinning backwards a little bit, in terms of authoritarians using this. You know, I am super nervous about that. Yeah. So what is the greatest thing you can think of? You and I are not creative enough to think of all the things people will do, but from your perspective — and, you know, don't say term papers, don't say dad jokes. So what do you think? I'd usually ask for the greatest thing, but I'm getting tired of the usual answers — I don't care that it can write a press release. Fine, sounds fantastic; I hate press releases anyway.
The thing I'm personally most excited about is helping us greatly expand our scientific knowledge. I'm a believer that a lot of our forward progress has come from increasing scientific discovery over a long period of time, in all the areas — I think that's just what's driven humanity forward. And if these systems can help us, in many different ways, greatly increase the rate of scientific understanding — curing disease is an obvious example — there are so many other things we can do with faster knowledge and better understanding. It's already moved in that direction — folding proteins and things. So that's the one I'm personally most excited about: science. But there will be many other wonderful things too — you just asked me for my one. And is there one unusual thing that you think will be great, that you've seen already, that made you go, that's pretty cool? Seeing people use some of these new AI-tutor-like applications. I wish I had this when I was growing up; I could have learned so much, so much better and faster. When I think about what kids today will be like by the time they're finished with their formal education — how much smarter and more capable and better educated they can be than us today — I'm excited for them. Yes, using these tools. I would say health information for people who can't afford it is probably the one I think is going to be most transformative. We've seen that — for people who can't afford it, this in some ways will be the biggest improvement. Yeah, exactly. A hundred percent. And the work we're seeing there from a bunch of early companies on the platform, I think, is remarkable.
So the last thing is regulation, because one of the things that's happened is that the internet was never regulated by anybody, really — except maybe in Europe, but in this country, absolutely not. There's no privacy bill, there's no antitrust bill, et cetera — it goes on and on; they did nothing. But the EU is considering labeling ChatGPT "high risk." If that happens, it will lead to significant restrictions on its use, and Microsoft and Google are lobbying against it. What do you think should happen with regulation in general, and with this high-risk designation in particular? I have followed the development of the EU's AI Act, but it has changed — it's obviously still in development. I don't know enough about the current version of it to say whether I think this definition of what high-risk is, and this way of classifying it, and this is what you have to do, is good or bad. I think totally banning this stuff is not the right answer — They didn't regulate TikTok, but go ahead. — and I think not regulating this stuff at all is not the right answer either. So the question is, is it going to end up in the right balance? The EU saying no one in Europe gets to use ChatGPT — probably not what I would do. But the EU saying, here are the restrictions on ChatGPT and any service like it — there are plenty of versions of that I could imagine being super sensible.
All right. So, after the Silicon Valley Bank non-bailout bailout, you tweeted that we need more regulation of banks. What sort of regulation? And then someone tweeted at you, "now he's going to say we need a money AI," and you said, "we need a money AI." But — I mean, I do think SVB was an unusually bad case, but also, if the regulators aren't catching that, what are they doing? They did catch it, actually — they were giving warnings. They were giving warnings, but there's often an audit that says, you know, this thing is not quite right — that's different from saying something pretty significant: you need to do something. They just didn't do it well. They could have. The regulators could have taken over, yes, months ago. So this is what happens a lot of the time, even in well-regulated areas, which banking is compared to the internet. What sort of regulation does AI need in America? Lay it out — I know you've been meeting with regulators and lawmakers. I haven't done that many. Well, they call me when you do — they want to say they've seen you. And what do they say? Well, you're, like, the guy now, so they like to say, "I was with Sam Altman."
I did one — I think it was nice, I'll tell you. I did, like, a three-day trip to D.C. earlier this year. So tell me what you think the regulations should be. What are you telling them? And do you find them savvy, as a group? I think they're savvier than people think; some of them are quite exceptional. The thing I would like to see happen immediately is just much more insight into what companies like ours are doing — companies that are training above a certain level of capability, at a minimum. A thing that could happen now is that the government should just have insight into the capabilities of our latest stuff, released or not — what our internal audit procedures and the external audits we use look like, how we collect our data, how we're red-teaming these systems, what we expect to happen, which we may be totally wrong about — we get it wrong all the time — but, like, our internal roadmap documents when we start a big training run. I think there could be government insight into that. And if that can start now — I do think good regulation takes a long time to develop; it's a real process — they can figure out how they want to have oversight.
Reid Hoffman had suggested just a blue-ribbon panel, so they can learn up on this stuff. Panels are fine; we could do that too. But what I mean is, like, government auditors sitting in our buildings. Congressman Ted Lieu said there needs to be an agency dedicated specifically to regulating AI. Is that a good idea? I think there are two things you want to do. This is way out of my area of expertise, but you're asking, so I'll try. People like us, who are creating these very powerful systems that could become something properly called AGI at some point — Explain what that is. Artificial general intelligence — what people mean is AI above some threshold where it's really capable. Those efforts probably do need a new regulatory effort, and I think it needs to be a global regulatory body. And then for people who are using AI — like the medical advisor we talked about — I think the FDA can probably do very good medical regulation, but it will have to update it for the inclusion of AI. So I would say: for the creation of the systems, having something like the IAEA that regulates them is one thing, and then having the existing industry regulators still do their regulations. People do react badly to that, because new federal bureaus — that's always been a real problem in Washington. Who should head that agency in the U.S.? I don't know. Okay, all right.
So one of the things that's going to happen, though, is that the less intelligent ones — of which there are many — are going to seize on things, like they've done with TikTok, possibly deservedly, but other things too. Snap released a chatbot powered by GPT that reportedly told a fifteen-year-old how to mask the smell of weed and alcohol, and a thirteen-year-old how to set the mood for sex with an adult. They're going to seize on this stuff, and the question is, who's liable — if this is true — when a teen uses those instructions? Section 230 doesn't seem to cover generative AI. Is that a problem? I think we will need a new law for use of this stuff, and the liability will need to have a few different frameworks. If someone's tweaking the models themselves, I think it's going to have to be that the last person who touches it has the liability. So there would be liability — it's not the full immunity that the platforms got. I don't think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this, why you want users to be able to get the experience they want. But the idea of no one having any limits for generative AI, for AI in general — that feels super wrong.
Last thing: trying to quantify the impact you personally will have on society, as one of the leading developers of this technology — do you think about that? Do you think about your impact? Do you mean me, OpenAI, or me, Sam? You, Sam. I mean, hopefully I'll have a positive impact. Do you think about the impact on humanity — the level of power that also comes with it? I think about what OpenAI is going to do a lot, and the impact OpenAI will have. You think it's out of your hands? No, no — the responsibility is with me at some level, but it's very much a team effort. And so, when you think about that impact, what is your greatest hope and what is your greatest worry?
My greatest hope is that we — we create this thing, we are one of many people who are going to contribute to this movement, we'll create an AI, other people will create an AI — that we will be a participant in a technological revolution that I believe will be far greater, in terms of impact and benefit, than any that came before. My view of the world is that this is all one big, long technological revolution, not a bunch of smaller ones, but we'll play our part. We will be one of several in this moment, and this is going to be really wonderful. This is going to elevate humanity in ways we still can't fully envision. And our children, our children's children, are going to be far better off than the best of anyone from this time. We're just going to be in a radically improved world: we will live healthier, more interesting, more fulfilling lives; we'll have material abundance for people; and we will be a contributor — we'll put in our part of that. You do sound alarmingly like the people I met 25 years ago, I have to say. I don't know how old you are, but you were young then — probably very young. I'm 37. So you were small. And they did talk like this — many of them did, and some of them continue to be that way. A lot of them didn't, unfortunately, and then the greed seeped in, the money seeped in, the power seeped in, and it got a little more complex, I would say. Not totally — and, again,
Net, it's better but I want to focus on you on my last question there seem to be two caricatures of you. One that I've seen in the Press is a boy is genius, who helped defeat Google and Usher and Utopia. The other is that you're an irresponsible woke Tech Overlord. Icarus that will lead us to our demise.
Do I have to pick one?
No, you don't.
How old do I have to be before I can, like, drop the boyish qualifier?
Oh, you can be boyish. Tom Hanks is still boyish.
Yeah. And what was the second one?
You know, Icarus. Overlord. Tech overlord. Woke something.
Yeah, yeah.
Whatever. The Icarus part is the one I like. The "boys" thing, don't you hate that?
I'm... I think we feel like adults.
No, you may be adults, but "boys" always gets put on you. I don't ever call you boys; I think you're adults. Icarus meaning, like, you're messing around with something that we don't fully understand yet.
We are messing around with something we don't fully understand, yeah, and we are trying to do our part in contributing to the responsible path through it.
All right. But I don't think either of those caricatures...
I don't, either.
I mean, describe yourself, then. Describe what you are.
Technology brother.
Oh wow, you're going to go for Zuck.
You know, I just think that's such a funny meme. I don't know how to describe myself. I think that's what you would call me.
No, I wouldn't. Not 100%, no, because it's an insult. Now, I'd call you technology sister.
I'll take that.
And we'll leave it on that, leave it on that. All right, I do have one more quick question. Last time we talked, you were thinking of running for governor, I was thinking of running for mayor. I'm not going to be running for mayor. You can still run for governor.
No, no. I think I am doing, like, the most amazing thing I can imagine. I really don't want to do anything else. It's tiring, but I love it.
Yeah. Okay, Sam Altman, thank you so much.
Thank you.
You said he sounded a lot like a lot of founders a generation before him.
Yes.
What are the lessons you would impart to Sam, as someone who has so much impact on humanity?
You know, I think what I said is that they were hopeful and they had great ideas. One of the things that I think people get wrong is that to be a tech critic means you love tech. Like, you know, you really love it, you do, of course, and you don't want it to fail. You want it to create betterment for humanity. And if that's your goal, when you see it being warped and misused, it's really sad and disappointing. And I think the early internet people had all these amazing ideas: the world talking to each other, we'll get along with Russia, we'll be able to communicate over vast distances. And again, just like I talked about with Reid Hoffman, it's a Star Trek vision of the universe, and that's what it was. And boy, the money and the power and the bad people that came in really significantly shifted it, not completely by any means. I love my Netflix, you know, I just do.
But the unintended or intended consequences ultimately are very hard to bear, even if it's a net positive. So it's just the money and the power that's corrupting, is what you're saying? It's inevitable?
No, not inevitable, but often. Yeah, not him, not a lot of people, but let's see if he stands the test of time, right?
You're saying, like, Reid Hoffman and Max Levchin versus, say, Peter Thiel and Elon Musk.
Well, I think Peter was always like that, you know. I don't think he's changed one bit, and so
he's not, not in my estimation. He's been very consistent in how he looks at the world, which is not a particularly positive light. I think that a lot of them do stay the same and they do stay true to what they're like, and I don't know why that is for certain people while others get sucked into it in a way that's really... I'm thinking about this a lot, because my book is about, yeah, of course, how people change and why, and whether that's a good thing or a bad thing. Because, you know, one of the things about tech is it's always changing. One of the poems I'm using in the book is a poem by Maggie Smith called "Good Bones," and I'll just read you the last part: "Life is short and the world is at least half terrible, and for every kind stranger, there is one who would break you, though I keep this from my children. I am trying to sell them the world. Any decent realtor, walking you through a real shithole, chirps on about good bones: This place could be beautiful, right? You could make this place beautiful." And that's how I feel about this. They could make this place beautiful.
And I think he thinks that, too.
Yeah, but it's not just a lie you tell your children, right?
Well, no, it is, but you can't tell them terrible things all the time; they would just be lying on the ground.
Yeah. But sometimes it's so idealistic. Like when he said a global regulatory body to regulate AI, I'm like, oh man, we're fucked. That's never going to happen. Like, what was the last good global regulatory body that worked?
It could work. There has to be... this has to be global.
This has to be global, but how? There's no infrastructure to set up a sustainable one, I guess.
There are some in medicine. There is.
What, you think the World Health Organization has been effective?
I think there's stuff around cloning, around all kinds of stuff. It's never going to be perfect, but boy, there's a lot of people that hew to those ethics.
I mean, I think it depends how bought in state governments are, including China. But the regulation thing is particularly tricky, because it can also become a moat, right? It's why incumbents like Facebook are like, "regulate us." It's like, well, you can afford the regulation in a way that new competitors maybe can't.
I think governments can play a lot of roles here. They do it in nuclear non-proliferation. It's never perfect, but we still haven't set one off, have we?
I think that's largely the deterrent power, and not because of any effective regulation.
I'm a great believer in nuclear non-proliferation, and so, too... I think there's lots of examples of it working. And I think the most significant thing that he said here was about the government's role, the US government's role. It shouldn't give this all over to the private sector. It should have been the one to give them money and to fund them, and that is 100% right. We've talked to Mariana Mazzucato about that. Yeah. And many
other people. That, to me, is the big shame: the government abrogating its role in really important things that are important globally and important for the US.
But even when the government has played that kind of, let's call it, kindling role for industry, whether it be Elon Musk's loan for Tesla, whether it be what DARPA was doing that became, you know, parts of Siri and Echo and whatnot, the government here is bad at retaining, like, a windfall from that, that would be reinvested for taxpayers.
It used to. It used to just do it because it was the right thing to do. There would be research and investment by the government, you know. The highway system seems to have worked out pretty good, the telephone system seems good. I mean, we always tend to talk about what they do wrong, but there's so much stuff that the government contributed to that matters today. It used to be cultural, so people would want to go into government and civil service. My father was in that generation.
Like, you know, and I think it's interesting to hear Sam say no, he won't run for governor. And do you think sometimes, well, it would be so great if some of these bright minds, you know,
went...
Why would he do that? He's more effective where he is.
Arguably, the right regulator for this is a person who could have built it.
Yeah, or conceived of building it, maybe.
Did you find his answers to the moderation questions, and this idea of hallucination and AI being overly impressive at first glance, did you find those satisfying?
Yeah. I think one of the things I like about Sam is, if he doesn't have an answer, I don't think he's hiding it; I don't think he knows. And I think one of the strengths of certain entrepreneurs is saying, "I don't really know." And around AI right now, anyone that's going to give you certainty is lying to you.
Well, they had experimented with using these, you know, low-wage workers in Africa through Sama, an outsourcer. I think it was exposed that they were paying them less than two dollars an hour and training them to build up, as was reported, a content-moderation AI layer, which is ironic when you think about it. So there were workers in Africa being paid less than two dollars an hour to train machines to replace them for that job.
Well, have you been to an Amazon warehouse lately?
There's a lot of machines doing everything. That's the way it's going. That's like you're telling me something that happens in every other industry.
Yeah, I know. And yet we're going to grow smarter due to AI, too? Do you think that's true, that everyone's going to be smarter?
I do. I do. I think we do a lot of rote, idiotic work that we shouldn't be doing, and we have to be more creative about what the greatest use of our time is. My great hope for AI is actually that it takes out the rote bits, and all of a sudden the creative industry flourishes, because those are the parts that can't be replicated. And though, you know, the sad reality of technology in the last generation has been that kids maybe don't read as well or as much or as fast or as early as they used to, they make video, right? What if they're spoken to smarter? Like, the idea of education on these things, or information, or healthcare, delivered in an easy way... these phones are just getting started, and they will not just be phones; they will be wrapped around us. The more good information you get, the more communication you get, that's a good thing.
They might just be getting started, but we are ending. Do you want to read us our credits today?
Yes. Remember, you can make this place
beautiful.
Or ugly.
No, good bones. It's got good bones. Today's show was produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert. Special thanks to Haley Milliken. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademicks. If you're already following the show, you get the red pill; if not, Rick Deckard is coming after you. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Whoa. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Friday, that's tomorrow, with a special bonus episode.