Transcript: Is Jensen Worried About OpenAI?

Feb 2, 2026

Source: Tech Brew Ride Home | Duration: 21 min

Summary

Opening Context:

  • The episode discusses the potential investment by NVIDIA in OpenAI, as well as concerns around Apple's AI strategy and Elon Musk's plans for data centers in space.
  • The episode is a solo news rundown led by host Brian McCullough; there is no guest.

Key Discussion Points and Insights:

  • There are reports that NVIDIA CEO Jensen Huang has privately expressed doubts about the $100 billion investment in OpenAI, citing a lack of discipline in OpenAI's business approach and concerns about competition from Google and Anthropic.
  • However, Huang has now publicly stated that NVIDIA will make its "largest investment ever" in OpenAI, though the amount will likely be less than $100 billion.
  • Apple executives are reportedly questioning whether the company has the right ingredients to win in the AI era, as the company has struggled to articulate a bold AI vision and execute on it.
  • Apple is taking a "patchwork approach" to AI, relying on AI-enhanced services, wearables, and a more personalized Siri, but needs to build durable and proprietary AI capabilities in-house.
  • SpaceX is seeking FCC approval to launch up to 1 million satellites to power AI data centers in space, aiming to achieve cost and energy efficiency compared to terrestrial data centers.

Notable Technologies, Tools, or Concepts Mentioned:

  • OpenAI and its AI technology
  • Apple's AI strategy and Siri
  • SpaceX's plans for satellite-powered AI data centers

Practical Implications or Recommendations Discussed:

  • The episode highlights the high stakes and competitive nature of the AI race, with major tech companies like NVIDIA, Apple, and SpaceX vying for leadership.
  • It suggests that companies need to have a clear, bold AI vision and the ability to execute on it, rather than relying on patchwork solutions or external partnerships.
  • The potential risks of deploying large-scale AI agents are also discussed, including the disempowerment patterns quantified in a new Anthropic study and the security vulnerabilities surrounding MoltBot agents.

Overall, the episode provides a comprehensive overview of the current state of AI developments and the strategic considerations facing major tech players in this rapidly evolving landscape.

Full Transcript

[00:00:00] Welcome to the Tech Brew Ride Home for Monday, February 2nd, 2026. I'm Brian McCullough. Today

[00:00:09] is NVIDIA going through with their massive OpenAI investment or not? Does Apple have what it takes

[00:00:14] to win in the AI era or not? Is Elon the only one who can make data centers in space happen

[00:00:20] or not? And a roundup of the continued fascination with Moltbook. Here's what you missed today in the

[00:00:30] Thank you.

[00:01:00] and file-less attacks by shutting down unknown behavior automatically, even if it's never been

[00:01:04] seen in the wild. ThreatLocker gives you tight control without the noise, meaning fewer alerts

[00:01:09] and a cleaner, predictable operational posture. Learn more at ThreatLocker.com slash TechBrewRideHome.

[00:01:15] That's ThreatLocker.com slash TechBrewRideHome.

[00:01:20] So a little bit of drama over the weekend. NVIDIA, you might recall, has plans to invest

[00:01:26] up to $100 billion in OpenAI. You might recall that was part of that sort of big announcement

[00:01:31] that people said at the time was sort of circular investing, a bit of paying Peter to pay Paul to

[00:01:37] pay Peter sort of deal. But sources say that investment might have stalled after some inside

[00:01:43] NVIDIA are expressing doubts about the deal, quoting the Journal: Jensen Huang has privately

[00:01:50] emphasized to industry associates in recent months that the original $100 billion agreement

[00:01:55] with OpenAI was non-binding and not finalized, people familiar with the matter said. He has also

[00:02:00] privately criticized what he has described as a lack of discipline in OpenAI's business approach

[00:02:05] and expressed concern about the competition OpenAI faces from the likes of Google and Anthropic,

[00:02:11] some of the people said. Our teams are actively working through details of our partnership.

[00:02:16] NVIDIA technology has underpinned our breakthroughs from the start, powers our systems today,

[00:02:20] and will remain central as we scale what comes next, an OpenAI spokesman said.

[00:02:25] In a November filing, NVIDIA said there was no assurance that it would, quote,

[00:02:29] enter into definitive agreements with respect to the OpenAI opportunity or other potential

[00:02:34] investments, or that any investment will be completed on expected terms, if at all, end quote.

[00:02:39] At a UBS conference in Scottsdale, Arizona, NVIDIA Chief Financial Officer Colette Kress

[00:02:45] said the company hadn't completed a definitive agreement with OpenAI.

[00:02:48] Huang has indicated to associates that he still believes it is crucially important to provide

[00:02:54] OpenAI with financial support in one form or another, in part because OpenAI is one of the

[00:02:59] chip designer's largest customers, people familiar with the matter said. If OpenAI were to fall

[00:03:04] behind other AI developers, it could dent NVIDIA's sales, end quote. Well, that original reporting

[00:03:11] definitely got the attention of various PR and comms teams, I'm sure, because Jensen Huang now

[00:03:18] says NVIDIA's OpenAI investment will be, quote, the largest investment we've ever made.

[00:03:23] Quoting Bloomberg, we will invest a great deal of money, Huang told reporters while visiting Taipei

[00:03:28] on Saturday. I believe in OpenAI. The work that they do is incredible. They're one of the most

[00:03:33] consequential companies of our time. Huang didn't say exactly how much the company might contribute,

[00:03:38] but described the investment as huge. Let Sam announce how much he's going to raise. It's for

[00:03:44] him to decide, Huang said, adding that Altman is in the process of closing the round, quote,

[00:03:49] but we will definitely participate in the next round of financing because it's such a good

[00:03:53] investment, end quote. When asked by a reporter in Taipei about the report that seemed to suggest

[00:03:58] he wasn't very happy with OpenAI, Jensen said, quote, that's nonsense. Huang said NVIDIA's

[00:04:04] contribution to OpenAI's latest funding round wouldn't approach $100 billion, though. OpenAI

[00:04:09] has been seeking to raise as much as $100 billion in its current funding round, according

[00:04:14] to a person with knowledge of the matter, asking not to be identified because

[00:04:18] the discussions are private.

[00:04:20] Amazon was in talks to invest as much as $50 billion in the fundraise and expand an agreement

[00:04:25] that involves selling computer power to the AI startup, the person said on Thursday.

[00:04:30] Altman has also met with top investors in the Middle East to line up funding for the

[00:04:34] round, which may value the company at about $750 to $830 billion, people familiar with

[00:04:39] the matter said earlier in January while asking not to be identified because the information isn't

[00:04:44] public. Microsoft is in discussions to participate as well, The Information had previously reported, end quote. Mark Gurman says that his sources tell him that

[00:05:00] Apple executives are increasingly beginning to question if Apple has the ingredients to win

[00:05:06] in the AI era. Quote, some have argued that Apple doesn't need AI, noting that it never

[00:05:12] owned the internet or ran its own search engine, but that misses the point. Apple's past 25 years

[00:05:18] were built on internet technology that sat at the heart of breakthrough products, including the

[00:05:22] iPhone, iMac, iPod, iPad, iTunes, the App Store, and iOS. These are offerings that only exist

[00:05:28] because of the web. But Chief Executive Officer Tim Cook has yet to articulate a bold AI vision,

[00:05:34] and his hiring of Google veteran John Giannandrea to run artificial intelligence in 2018 now looks

[00:05:41] like the biggest mistake of his tenure. Giannandrea stepped down as AI chief in December, but he'd

[00:05:47] already been sidelined for much of last year. Software chief Craig Federighi took over,

[00:05:52] securing a short-term fix via a partnership with Google's Gemini to deliver working AI models.

[00:05:58] Hardware alone won't save Apple. Consumers don't buy its products for the components,

[00:06:03] they buy them for the experience, including the integration of sleek designs, software,

[00:06:07] and services. Right now, AI is missing from that equation. For Apple to sustain its growth and

[00:06:13] relevance, it must execute a company-wide AI reckoning that changes its approach to product

[00:06:18] development. Even if Apple continues to thrive in the smartphone market, it could still lose its

[00:06:23] standing in a fast-changing tech world. The company's own senior executives understand this

[00:06:28] and privately question whether Apple has the right ingredients to win in the AI-first landscape.

[00:06:34] There is no miracle product that will guarantee Apple's success here. It's no longer working on a

[00:06:39] self-driving car, and there isn't an obvious new category that can generate iPhone-scale revenue,

[00:06:44] at least not yet. That's why Apple is betting on a patchwork approach, AI-enhanced services,

[00:06:50] a range of wearable and home devices, and a more personalized and conversational Siri assistant.

[00:06:55] For the strategy to work, Apple must build durable and proprietary AI in-house,

[00:06:59] powered by servers with higher-end versions of its own custom chips. Relying on Google's Gemini

[00:07:05] cannot be the long-term answer no matter how Apple frames the arrangement as a collaboration.

[00:07:10] Relying on a chief rival to paper over a core weakness is not a strategy. It's a stopgap

[00:07:15] measure. The situation echoes Apple's 1997 dependence on Microsoft even if the optics

[00:07:20] are different. Hiring and retaining elite AI talent will be critical. So will humility.

[00:07:26] Apple can no longer assume that superior hardware execution alone will protect it from

[00:07:30] AI-focused competitors. The company needs more than a holiday season sales bump. It needs a path

[00:07:36] to leadership in the next era of computing, end quote.

[00:07:41] So I want to point something out here. Given the unique position Gurman has in the Apple rumor

[00:07:48] ecosystem, I honestly wonder if him writing that might be some strategic leaking from folks inside

[00:07:57] of Apple. I.e., I'm wondering if the call for better AI leadership is coming from inside the

[00:08:05] house of Cupertino. According to Reuters, SpaceX is seeking US FCC approval to launch

[00:08:18] one million satellites. SpaceX claims that they will orbit the Earth and harness the power of the

[00:08:24] sun to power AI data centers. Quote, data centers are the physical backbone of artificial

[00:08:30] intelligence requiring massive amounts of power. By directly harnessing near constant solar power

[00:08:35] with little operating or maintenance costs, these satellites will achieve transformative cost and

[00:08:40] energy efficiency while significantly reducing the environmental impact associated with terrestrial

[00:08:44] data centers. The FCC filing said Elon Musk would need the telecom regulator's approval to move

[00:08:51] forward. While it is unlikely SpaceX will put 1 million satellites in space, where only 15,000

[00:08:58] satellites exist currently, satellite operators sometimes request approval for higher numbers of

[00:09:03] satellites than they intend to deploy, to buy design flexibility. SpaceX sought approval for 42,000

[00:09:10] Starlink satellites before it began deployment of the system. The growing network currently has

[00:09:15] roughly 9,500 satellites in space. SpaceX's request bets heavily on reduced costs of Starship,

[00:09:21] the company's next-generation reusable rocket, under development. Fortunately, fully reusable

[00:09:26] launch vehicles like Starship can deploy millions of tons of mass per year to

[00:09:31] orbit when launching at rate, meaning on-orbit processing capacity can reach unprecedented scale

[00:09:37] and speed compared to terrestrial build-outs with significantly reduced environmental impact,

[00:09:42] SpaceX said. Starship has been tested 11 times since 2023. Musk expects the rocket, which is crucial for expanding Starlink with more powerful satellites, to put its first payloads into orbit this year, end quote.

[00:09:56] So far be it from me to question Elon Musk if he can achieve the business or even engineering

[00:10:02] impossible because, you know, things like launching the first new successful car manufacturer

[00:10:06] in multiple generations or creating a company that can reuse rockets and create the most

[00:10:11] valuable private company ever. Only Elon did that. But given, again, the engineering challenges

[00:10:18] we've talked about vis-a-vis this whole concept of data centers in space,

[00:10:22] is this insane? Or will Elon make us all look like idiots in about 10 years?

[00:10:36] Managing your cap table shouldn't drain your time or derail your budget,

[00:10:40] and yet somehow it can manage to do both. Pulley knows there's a better way. That's why they help

[00:10:46] take the complexity and surprises out of equity management. Pulley's intuitive workflows,

[00:10:51] built-in compliance tools, and decision-ready reporting are designed to work for you,

[00:10:55] not against you. Pulley helps you issue, track, and manage equity, stay compliant with up-to-date

[00:11:00] 409A valuations, complete stock-based compensation reporting, and more,

[00:11:05] all without the expensive legal fees or endless manual work.

[00:11:10] Learn more and get started at Pulley.com slash brew.

[00:11:14] That's Pulley.com slash brew.

[00:11:19] If you've ever wanted to be a fly on the wall for the conversations world-class CEOs have behind closed doors,

[00:11:25] then you may want to listen to the new podcast, Long Strange Trip, CEO to CEO.

[00:11:31] In each episode, Brian Halligan, co-founder of HubSpot, speaks with leaders to unpack the real stories behind scaling their companies.

[00:11:39] From the emotional toll of leadership to the tactical decisions that shape a company's future, expect candid conversations about hiring, culture, communication, strategy, and more.

[00:11:49] Whether you're an aspiring founder, a seasoned CEO, or simply curious about the stories behind the CEOs on the long, strange trip of building enduring legendary companies, this is a show you won't want to miss.

[00:12:00] Long Strange Trip is available everywhere you get your podcasts. That's Long Strange Trip Podcast.

[00:12:09] Anthropic continues to have things both ways, in a way. A new paper co-authored by researchers at

[00:12:17] Anthropic and the University of Toronto quantifies how frequently AI chatbots produce interactions

[00:12:22] that could disempower users, that is, shift their beliefs, values, or actions in ways that ultimately

[00:12:28] undermine their autonomy instead of helping them. The study was titled Who's in Charge?

[00:12:34] Disempowerment Patterns in Real-World LLM Usage, and it analyzed nearly one and a half million

[00:12:40] real Claude chatbot conversations with an automated classification system called Clio

[00:12:45] seeking to identify patterns where users ended up worse off after an AI exchange.

[00:12:50] The researchers categorized disempowerment into types such as reality distortion,

[00:12:54] convincing users of false narratives, belief distortion, changing users' values or judgments,

[00:13:00] and action distortion, encouraging actions misaligned with a user's intents or interests.

[00:13:06] Severe risks were uncommon at the individual level. For example, reality distortion appeared

[00:13:10] roughly once every 1,300 conversations and action distortion once every 6,000.

[00:13:15] But when considering mild forms of disempowerment, the rates jumped to about

[00:13:19] one in 50 to 70 chats, underscoring that subtle influences occur far more often than extreme cases.

[00:13:26] Anthropic's team also identified several amplifying factors that make users more susceptible to

[00:13:32] influence from a chatbot, such as being in a personal crisis, having formed a close emotional

[00:13:37] attachment to the bot, relying on AI for daily tasks, or treating the AI as an unquestioned

[00:13:42] authority. For example, vulnerability due to life disruption showed up in approximately one

[00:13:47] out of every 300 conversations. Crucially, the paper stresses these findings don't prove the

[00:13:52] chatbots caused harm, merely that they have the potential to steer users in harmful directions.

[00:13:58] The authors note that their automated approach measures disempowerment potential rather than

[00:14:03] confirmed outcomes, calling for future research involving human-centered studies to better assess

[00:14:08] real-world impacts. Finally today, MoltBot continues to fascinate. Here are a couple of

[00:14:21] takes. First, Simon Willison. I've not been brave enough to install ClaudeBot/MoltBot/OpenClaw for

[00:14:27] myself yet. I first wrote about the risks of a rogue digital agent back in April 2023, and while

[00:14:32] the latest generation of models are better at identifying and refusing malicious instructions,

[00:14:36] they are a very long way away from being guaranteed safe. The amount of value people are unlocking right now by throwing caution to the wind is hard to ignore, though. Here's Claudebot buying AJ Steubenberger

[00:14:48] a car by negotiating with multiple dealers over email. Here's Claudebot understanding a voice

[00:14:53] message by converting the audio into a WAV file with FFmpeg and then finding an OpenAI API key

[00:15:01] and using that with Curl to transcribe the audio with the Whisper API. People are buying dedicated

[00:15:07] Mac minis just to run OpenClaw under the rationale that at least it can't destroy their main computer

[00:15:12] if something goes wrong. They're still hooking it up to their private emails and data, though,

[00:15:16] so the lethal trifecta is very much still in play. The billion-dollar question right now is whether

[00:15:21] we can figure out how to build a safe version of the system. The demand is very clearly here,

[00:15:26] and the normalization of deviance dictates that people will keep taking bigger and bigger risks

[00:15:30] until something terrible happens. The most promising direction I've seen around this

[00:15:35] remains the CaMeL proposal from DeepMind, but that's 10 months old now, and I still haven't

[00:15:41] seen a convincing implementation of the patterns it describes. The demand is real. People have seen

[00:15:46] what an unrestricted personal digital assistant can do, end quote. And here's Andrej Karpathy,

[00:15:52] quote, I'm being accused of overhyping the site everyone has heard too much about today already.

[00:15:58] People's reactions varied widely from how is this interesting at all, all the way to it's so over.

[00:16:04] To add a few words beyond just memes in jest, obviously when you take a look at the activity,

[00:16:10] it's a lot of garbage, spams, scams, slop, the crypto people, highly concerning privacy and

[00:16:16] security prompt injection attacks in the wild, and a lot of it is explicitly prompted and fake

[00:16:22] posts slash comments designed to convert attention into ad revenue sharing. And this is clearly not

[00:16:27] the first time LLMs were put in a loop to talk to each other. So yes, it's a dumpster fire. And I

[00:16:33] also definitely do not recommend that people run this stuff on their computers. I ran mine in an

[00:16:37] isolated computing environment, and even I then was scared. It's way too much of a Wild West,

[00:16:43] and you are putting your computer and private data at high risk. That said, we have never seen

[00:16:48] this many LLM agents, 150,000 at the moment, wired up via a global persistent agent-first scratchpad.

[00:16:55] Each of these agents is individually fairly capable now, and each has its own unique context, data, knowledge, tools, and instructions, and the network of all of that at this scale is simply unprecedented.

[00:17:07] This brings me again to a tweet from a few days ago.

[00:17:09] The majority of the divide is between people who look at the current point and people who look at the current slope, which IMO, again, gets to the heart of the variance.

[00:17:18] Yes, clearly it's a dumpster fire right now, but it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network there of reaching in numbers possibly into millions.

[00:17:32] With increasing capability and increasing proliferation, the second-order effects of

[00:17:36] agent networks that share scratchpads are very difficult to anticipate. I don't really know

[00:17:42] that we are getting a coordinated Skynet, though it clearly type checks as early stages of a lot of

[00:17:48] AI sci-fi takeoff and the toddler version, I guess. But certainly, what we are getting is a

[00:17:53] complete mess of a computer security nightmare at scale. We may also see all kinds of weird

[00:17:58] activity, viruses of text that spread across agents, a lot more gain-of-function jailbreaks,

[00:18:04] weirder attractor states, highly correlated botnet-like activity, delusions, psychosis,

[00:18:10] both agent and human, etc. It's very hard to tell because the experiment is running live.

[00:18:15] TLDR, sure, maybe I am overhyping what you see today, but I am not overhyping large networks of

[00:18:20] autonomous LLM agents in principle. That I'm pretty sure of, end quote. But at the same time,

[00:18:27] it's worth noting that another researcher came out and said an exposed Moltbook database was

[00:18:32] out there and that could have let anyone take control of the site's AI agents and post anything.

[00:18:37] That database has since been secured, apparently, but still.

[00:18:47] Hello from London. One time when I have time, remind me to tell you about how today I came

[00:18:54] the closest I've come in eight years to not being able to put up a show because iCloud.

[00:19:01] I'll tell you about it some other time. Talk to you tomorrow.

[00:19:05] Ever wondered what the world's wealthiest people did to get so ridiculously rich?

[00:19:11] Our podcast, Good Bad Billionaire, takes one billionaire at a time

[00:19:15] and explains exactly how they made their money.

[00:19:17] So who do we have on the next episode of Good Bad Billionaire?

[00:19:20] Something flattering, something figure-hugging.

[00:19:25] Spanx.

[00:19:26] The person who invented Spanx, which became a category-defining product.

[00:19:31] That's Sara Blakely on Good Bad Billionaire.

[00:19:33] Listen wherever you get your BBC podcasts.
