Transcript: How To AI With WSJ's Chris Mims

Feb 12, 2026

Source: Tech Brew Ride Home | Duration: 52 min

Summary


Opening context:

  • The guest is Chris Mims, a technology columnist for the Wall Street Journal.
  • The main topic is Mims' new book "How to AI: Cut Through the Hype, Master the Basics, Transform Your Work", which aims to provide a practical guide for non-technical people on how to effectively incorporate AI into their workflows.

Key discussion points and insights:

  • Mims prefers the term "simulated intelligence" over "artificial intelligence" to highlight that current AI systems, while capable, lack the full breadth of human intelligence.
  • He describes AI as akin to "a super intelligent toddler" - capable of remarkable feats, but still needing guidance and oversight.
  • Mims shares his own journey from AI skeptic to enthusiastic adopter, highlighting tools like Notebook LM that dramatically improved his research and writing workflows.
  • He emphasizes that AI tends to benefit experts the most, as they can better evaluate the AI's outputs and ask the right prompting questions.
  • Mims discusses examples of AI being used effectively in fields like law, construction, and consumer packaged goods - automating tedious tasks and augmenting human expertise.

Notable technologies, tools, or concepts mentioned:

  • Notebook LM, Copilot for Depositions, and other AI-powered research and writing assistants
  • The concept of "machine psychology" - understanding AI's behavior and quirks rather than just the underlying math
  • The "laws of AI" outlined in Mims' book, such as "experts benefit the most from AI" and "give it your least favorite tasks"

Practical implications and recommendations:

  • Start small by having AI handle your least favorite, most tedious tasks first to see the benefits
  • Embrace an experimental mindset and be willing to try new AI tools, rather than sticking to old workflows
  • Leverage AI's strengths to enhance your productivity, but be wary of cognitive offloading and losing essential domain expertise
  • Consider using AI to do "less" work by automating annoying tasks, rather than always striving for "10x" productivity

Overall, the episode provides a nuanced, experience-based perspective on the current state of AI and how non-technical professionals can effectively harness it to transform their work, while avoiding the pitfalls of hype and over-automation.

Full Transcript

[00:00:00] Welcome to another bonus episode of the Tech Brew Ride Home.

[00:00:09] I'm your host, as always, Brian McCullough.

[00:00:10] We are talking to a very old friend of the pod.

[00:00:14] This might be a five-timer thing, like the SNL Five-Timers Club.

[00:00:17] We need to get you a jacket, but maybe even more than that, Chris.

[00:00:20] This is Chris Mims. Hi, Chris.

[00:00:22] How's it going?

[00:00:24] Good. You all know Chris from me quoting him on the show all the time,

[00:00:29] But also Chris has a new book out that is insanely well-timed, in my opinion, called How to AI: Cut Through the Hype, Master the Basics, Transform Your Work, by Christopher Mims.

[00:00:40] You can see it if you're watching the video.

[00:00:43] Chris, is this a guide for what we are all experiencing right now, which is, OK, the hype has been going for several years now.

[00:00:53] But, hey, it is really time to see if this is useful for whatever it is that I do.

[00:00:59] How can I fit this into my daily workflow?

[00:01:02] Yeah.

[00:01:02] Yeah, this is the guide for, I say, for the rest of us.

[00:01:05] But the rest of us means the other 90% of us.

[00:01:08] I mean, I think it's easy for folks like you and me, Brian.

[00:01:10] Like, we're inside our bubble.

[00:01:11] We're scrolling X.

[00:01:13] We're reading about the latest Moltbot drama, about how AI is upending coding.

[00:01:18] And we forget that, right, there's this enormous population of kind of early to mid to late adopters who are trying to apply AI to every other kind of knowledge work there is.

[00:01:32] And so that is who the book is for.

[00:01:35] And so I tried to really step back and explain, you know, kind of from first principles.

[00:01:39] This is what the fundamental architecture is here.

[00:01:43] You know, mostly transformer models and what that means.

[00:01:45] But I talk about other kinds of AI.

[00:01:46] and then how does that play out in fields where you wouldn't expect there to be rapid adoption or disruption?

[00:01:52] You know, the first chapter of the book is about the legal field, which is now absolutely being upended by this.

[00:01:59] But I also went and spent a lot of time with like Clorox, right?

[00:02:03] The consumer packaged goods, people who bring you obviously bleach, but also Hidden Valley Ranch.

[00:02:08] I spent a lot of time with the construction industry, which is just fascinating because here's an industry

[00:02:12] where productivity has actually gone down since the 1970s.

[00:02:16] And here they have this opportunity to like digitize in ways that they just couldn't before.

[00:02:21] And AI actually makes that easier.

[00:02:24] We're going to get to some of those examples.

[00:02:25] But I feel like a lot of what you attempt in the book is like a sort of conceptual reframing, like helping people understand exactly what these tools are.

[00:02:38] Like you say it was a mistake to even call it artificial intelligence.

[00:02:42] Your preferred term is simulated intelligence, or something like that?

[00:02:47] Why does that distinction matter?

[00:02:50] Yeah.

[00:02:50] Yeah, I mean, I think the distinction matters

[00:02:52] because while the AI that we have now is capable of some amount of reasoning,

[00:03:00] and it is extraordinarily capable in ways that humans are not,

[00:03:04] there are tons of pieces of our intelligence which it is missing.

[00:03:09] And of course, that becomes apparent when you work with it really deeply.

[00:03:12] You know, there's just so many ways. There are levels of abstraction that it has yet to access.

[00:03:20] And so I call it simulated intelligence, because it will often do things where you have those wow moments, and you're like, how did it do that? And the answer sometimes is, well, it's basically a fuzzy semantic search plus a certain amount of basic reasoning, and so it remixed the content that it had memorized. And so what you're seeing is the intelligence of the humans it learned from, not some sort of innate spark of sentience.

[00:03:46] So I like simulated intelligence

[00:03:47] because it puts it in a little bit of a remove.

[00:03:50] It gives us a little bit of skepticism.

[00:03:52] But also, look, there's tons of simulated things

[00:03:54] that are enormously useful and transformative, right?

[00:03:58] This is not a book that's anti-AI.

[00:04:01] I'm not talking about it as a stochastic parrot

[00:04:04] or something like that.

[00:04:05] Simulated things are transformative.
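The "fuzzy semantic search" Mims mentions can be illustrated with a toy sketch. This is purely illustrative, not how any production system works: real semantic search uses learned vector embeddings, and simple word overlap stands in for that here.

```python
# Toy illustration of "fuzzy" matching: rank documents by word overlap
# with the query instead of requiring exact phrases. Real semantic
# search uses learned embeddings; Jaccard overlap is a stand-in here.
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

docs = [
    "how to fix a flat bicycle tire",
    "recipe for sourdough bread",
    "repairing a punctured bike tire",
]
query = "fix bike tire"
best = max(docs, key=lambda d: similarity(query, d))
print(best)  # matches the bike-repair doc despite different wording
```

The point of the sketch is that the query and the best match share only partial vocabulary, which is the "fuzzy" part of the retrieval being described.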

[00:04:09] One of my favorite analogies,

[00:04:11] I can't remember who I first heard say this, is that AI is like dealing with a toddler, like a super-intelligent toddler.

[00:04:18] If you say to a three year old, hey, take this cup and put it on the sink.

[00:04:24] Sometimes that'll happen immediately. But a lot of times you've got to be like, OK, try again.

[00:04:29] OK, it's herding cats. So then, right, you have the experience that AI stumbles upon something and it's absolutely perfect.

[00:04:40] And you're like, OK, well, then this is going to happen every time.

[00:04:43] And we're still not at that point yet.

[00:04:45] There's still a lot of guiding, a lot of prompting. It's treating AI like a toddler that's really, really smart, but still has to be guided.

[00:04:55] Yeah. And you've just got to be careful, right?

[00:04:58] Because sometimes that toddler will unthinkingly try to cross the street.

[00:05:03] Right. Doesn't have a sufficient mental model of what a speeding car is.

[00:05:07] and you have to yank it back before it nukes your project or whatever.

[00:05:13] I feel like you and I have had similar journeys,

[00:05:16] and we've even texted about this offline and stuff,

[00:05:19] but you describe in the book your journey from an AI skeptic to a convert.

[00:05:25] So you have adopted AI, you think, effectively in your own workflow for what you do?

[00:05:34] Yeah, absolutely.

[00:05:35] I mean, and there are more layers to it that I could adopt, you know, and I think in the same way that, you know, I mean, look, I'm a journalist, but like, at the end of the day, we're all creators at this point.

[00:05:47] And, you know, in the same way that Claude Code, for example, and to a lesser extent Codex and other sort of vibe coding tools, are enabling designers to build apps, things like Claude Code or Cowork enable people like me to automate some of the work that is required to, you know, manage your own social media, recut videos and assemble them for those various platforms. There are just so many ways that it is becoming more and more useful.

[00:06:23] And then obviously there's just the basic way that it helps me as a journalist, right?

[00:06:28] Deep research tools are transformative for me when I'm coming cold to a subject.

[00:06:33] You know, Notebook LM is enormously helpful for gathering notes and materials and summarizing them for me.

[00:06:38] But also, importantly, the more I use these tools, the more I discover the boundary where if I do too much cognitive offloading, the quality of my work goes down.

[00:06:51] Because ultimately my job is to dig up information that nobody has heard before from real human beings or arrive at insights that are hopefully novel.

[00:07:02] And the AI is limited in its ability to do that.

[00:07:06] Yeah, you don't want to offload understanding. Your job is to understand something, and so that is not something that you want to outsource to AI, because that's the whole point of your job.

[00:07:19] But was there a specific breakthrough with AI, some specific tool, that changed your mind and turned you into a convert?

[00:07:25] Yeah, I think it was Notebook LM, and it was just kind of beautiful how they had... You know, in the same way that Slack made, you know, IRC accessible, right, or Reddit, you know, brought Usenet to the masses or something, there's that moment where you take something and you turn it into a really friendly, really easy-to-use product.

[00:07:50] You really make hard design decisions about how you're going to limit its functionality, and you really kind of handhold a little bit, and you're like, this is what it's for, right?

[00:07:59] I mean, Notebook LM was sort of co-designed internally, right, by a famous author.

[00:08:05] I think it was Steven Johnson, who was at Google for a couple of years, or is still there.

[00:08:09] And he was like, I want to create the ambient brain that I've always wanted when writing my books,

[00:08:17] his very brainy, deep books that are full of facts about science and technology.

[00:08:22] And I think they really succeeded at that.

[00:08:24] And then, of course, when you create a tool like that, that's very effective, people find ways to repurpose it.

[00:08:28] Right. So they're really pushing it for like education and stuff now. And I think it could have a lot of utility there.

[00:08:33] But I think, yeah, for everybody, it's a different moment. And it really is the moment when you're handed a tool that solves problems that are essential to your work.

[00:08:46] Right. So when I talk to people in the legal profession, they're like, Filevine's Copilot for Depositions blew my mind.

[00:08:51] or LexisNexis' new ability to draft legal documents as well as to do legal research,

[00:08:58] but it's all grounded in real case law, blew my mind.

[00:09:01] Or the construction industry will say to me,

[00:09:03] hey, estimates are like the hardest thing that we have to do,

[00:09:07] figuring out just how much to charge when somebody wants a new building.

[00:09:11] Now we have AI that helps us do that estimating process.

[00:09:14] There's always that mind-blown moment where it's like,

[00:09:16] this thing has always been the most tedious and labor-intensive part of my job,

[00:09:20] and now this thing speeds me up.

[00:09:24] One of the concepts that you try to turn people onto

[00:09:28] is this idea of machine psychology as a better way to understand AI

[00:09:33] as opposed to trying to understand the math, which frankly none of us can.

[00:09:37] So what is machine psychology and how should non-technical people think about it?

[00:09:43] Yeah, so this comes from my training as a neuroscientist,

[00:09:47] which I never thought that I would use,

[00:09:49] But I spent a few years being an invertebrate neuroscientist.

[00:09:52] Like, I was working with the bare metal of thought, right?

[00:09:55] I was literally an invertebrate neuroscientist poking little invertebrate neurons with tiny glass electrodes and, like, watching the traces on an oscilloscope and being like, oh, that's what happens when this calcium channel opens and the neuron fires.

[00:10:08] So what I learned from that whole process was that neuroscience is a really terrible way to try to understand psychology, right?

[00:10:17] Like it's almost like, imagine you tried to understand the iPhone by just watching the traces of transistors firing on a microchip.

[00:10:26] It's a level of complexity that you'll never be able to internalize.

[00:10:30] So machine psychology is: what if we treated AI the same way we treat the human brain?

[00:10:36] Right. And we go up many layers of abstraction to like cognitive psychology or now even social psychology.

[00:10:43] now that you have the agents talking to each other.

[00:10:45] And when you do this, you get to approach AI in terms of its outcomes,

[00:10:50] but you are probing a little bit further down.

[00:10:53] You're watching its behavior.

[00:10:54] You're watching its quirks.

[00:10:56] And it's a little, I mean, this is a tiny bit inside baseball,

[00:10:59] but like Anthropic, for example,

[00:11:00] they love to literally probe the networks of artificial neurons inside their AIs

[00:11:05] and say, oh, we found this complex of weights or artificial neurons

[00:11:09] that are doing this thing.

[00:11:11] And if we alter that, you know, we can get this effect, or we can make it obsessed with the Golden Gate Bridge.

[00:11:20] And so machine psychology is my plea to be like, look, the same way that we are all constantly psychoanalyzing ourselves and other human beings, we can apply that skill to kind of observing the behavior of AI.

[00:11:32] And frankly, the thing that always impresses me the most is that people who are just totally naive about how AI works, but have deep domain knowledge and are very curious, they are often the most effective early adopters, right?

[00:11:45] They'll be in, you know, automation and robotics or medicine or the law.

[00:11:51] And they just dive in and they're just like, I don't know, I'm just sitting here exposing myself directly to the machine and kind of theorizing about how it's operating.

[00:11:59] So the book has got a series of laws, like the first law of AI, second law of AI. One of them, I can't remember what number it is, is along those lines, and I have thought about this a lot: the law that says experts benefit the most from AI. Which is counterintuitive to a lot of people, because people kind of assume AI is like a great equalizer, but I agree with you.

[00:12:24] I think that what it actually does is turbocharge, again, domain knowledge and expertise. So walk me through why someone who already knows the subject deeply might get more out of AI than, say, a beginner.

[00:12:38] Yeah.

[00:12:39] So no matter the domain, you know, if you're an expert, you can do two things with AI that, you know, an amateur cannot do.

[00:12:50] Number one, obviously, you can evaluate its work, right?

[00:12:54] You can fact check its output or you can read its code.

[00:12:58] Right. You know when it's wrong.

[00:12:59] Simple enough.

[00:13:00] Yes, exactly. That's number one.

[00:13:02] Number two is you know what questions to ask.

[00:13:05] So like when coders say, oh, I don't even touch a keyboard anymore.

[00:13:09] Like I literally just dictate into an AI this kind of long verbal essay about what it is I want it to do.

[00:13:18] and then it uses that plus all this other context that I've given it to start

[00:13:23] deploying a fleet of agents to build something.

[00:13:26] That is because if you're Andrej Karpathy or whoever,

[00:13:30] you have so much knowledge in your head.

[00:13:32] You know what it is you want. Same thing, you know, for the law.

[00:13:36] Like you go in and like, you know, I've written a thousand wills in the state of Iowa.

[00:13:43] I know what the laws are here.

[00:13:45] Then, you know, somebody comes to you and they're like,

[00:13:47] I have this particular life situation, please do this for me. And you're like, okay, I know exactly

[00:13:51] what to ask for. And then I can just dictate that into a system that is going to dump out this

[00:13:55] document. And then I will go evaluate it because if I don't, I'll get disbarred.

[00:14:01] So that requires an expert, right? If an amateur goes in and tries to do either one of those

[00:14:04] activities, they're going to end up with code that's like buggy and insecure or doesn't actually

[00:14:09] accomplish the goal that they thought they had in mind, or they're going to end up with documents

[00:14:14] that have all these flaws in them or have hallucinations.

[00:14:17] So over and over and over again,

[00:14:19] you see AI making experts more productive.

[00:14:23] I mean, to the point that now we're getting some rumblings

[00:14:25] about they're getting burned out

[00:14:27] because they get into this flow state

[00:14:29] and they're like, I can deploy so much and work so quickly.

[00:14:32] And it's like, whoa.

[00:14:33] I'm going to come back to that at the end.

[00:14:35] Sorry.

[00:14:37] Sorry, there's a horn outside.

[00:14:39] I was going to say something else,

[00:14:41] but I was trying to hide the horn.

[00:14:42] You're right.

[00:14:42] Okay, keep going.

[00:14:43] I interrupted anyway.

[00:14:44] Yeah, well, I mean, I think that was kind of the completion of that thought anyway. But I think that, you know, there was this thought at the genesis of all this: okay, this AI is going to make people who are just unfamiliar with a topic able to create things that they couldn't create before. And there is an element of that, right? So you have designers creating apps, but now you've kind of pushed that cognitive labor down, so it's like now all your engineers have to be doing code reviews all the time for these apps that these designers have created, if you're actually going to deploy them.

[00:15:20] Or like if you have paralegals, you know, spitting out a bunch of documents.

[00:15:24] Again, if they're not capable of reviewing it on their own, you push that cognitive labor over to the experts.

[00:15:29] So you can create more work-like products as an amateur, but you cannot close the loop and finish that work unless you have the expertise required to completely evaluate the work,

[00:15:42] which you have delegated to your AI junior.

[00:15:47] Right.

[00:15:47] I think the first law is AI is an assistant, not a replacement,

[00:15:52] which is kind of like the thesis of the book.

[00:15:54] Although arguably, I mean, we should caveat everything by saying right now.

[00:15:57] I mean, talk to us in six months.

[00:16:00] But like, right.

[00:16:01] It's like since I'm not a designer or a developer,

[00:16:06] I mean, I could use AI to code something.

[00:16:10] But conceptually, if I've done it for 30 years, I know what's possible.

[00:16:16] What I'm trying to describe is that the way AI functions best right now is when you could already do what you're having the AI do.

[00:16:25] It's just that maybe you didn't have the tools or the time, right?

[00:16:28] It's not, okay, I'm doing something that would be completely impossible.

[00:16:32] It's better if at least you have a grounding in what it is you're trying to get the AI to create.

[00:16:38] Yeah, absolutely.

[00:16:40] I mean, it can mean that, as an organization, you are doing things that you just couldn't do before, because, as you said, you're able to make folks more productive.

[00:16:51] I mean, this is sort of an odd example, but I was talking to an investor recently, a biotech investor.

[00:16:56] And he's like, the fact that AI research tools can make a recent grad who I want to hire as a research assistant so much more effective means that now I can afford to hire that person.

[00:17:12] Because they have just enough knowledge that I'm like, okay, go do due diligence on these 10 biotech companies.

[00:17:17] And with the deep research tools, they can be so much more productive that then he can justify their salary.

[00:17:23] So there are all these just weird kind of labor displacement effects where like in some ways it ends up being that, you know, people actually are out of a job.

[00:17:30] But in other ways, I think for savvy folks, it can mean more productivity, more labor.

[00:17:37] One of my favorite of your laws is give it your least favorite things to do.

[00:17:43] If like, if you're starting with how can I incorporate AI into what I do, don't from day one, try to have it take over everything.

[00:17:51] Literally start by, it sounds simple, but it's a key insight.

[00:17:55] Just have it get rid of some of the things that annoy you or you just frankly don't want to waste time doing.

[00:18:03] Yeah.

[00:18:04] And I mean, it's so funny.

[00:18:05] I mean, such a basic one, but it really is transformative.

[00:18:09] You know, I really think this is transformative on the level that like maybe even email was.

[00:18:16] No matter the field, everybody has to have meetings.

[00:18:19] They have to have conversations with other people.

[00:18:20] Somebody has to take notes.

[00:18:23] You know, we take it for granted now, but like perfect AI note takers, which can then summarize a meeting at the end, are transformative for everybody.

[00:18:32] I don't care what field you're in.

[00:18:34] And this is one reason why, like, it's a Chinese company, Plaud, their little pin thing. Do you know that that is actually the best AI device in existence right now? And no marketing other than, you know, pop-up little ads and stuff.

[00:18:51] And it's because it's so useful in that context.

[00:18:53] It's been transformative for me personally, but whether I'm talking to educators or doctors,

[00:18:57] I mean, doctors spend something like 20% of their time on data entry.

[00:19:02] So a thing that is listening the entire time that they're doing an appointment, and can at least try to pre-populate the fields in the thing that they have to, you know, fill out for their patients, is huge for them.

[00:19:16] Okay.

[00:19:18] It sounds fine on my end, and since it's local uploading anyway, I'm going to trust it.

[00:19:23] Yep, yep.

[00:19:26] Here's what I think I'm going to do.

[00:19:28] By the way, I'm recording again, so.

[00:19:31] I think what I'm going to do is I'm going to say to the audience that we just had a recording issue.

[00:19:35] And one of the things that we talked about while we were trying to fix it was how – it's complicated to go into, but I was like, let's not stop recording because I can use AI to fix it.

[00:19:47] So one of the things that has happened to me over and over again is not thinking that AI can do things.

[00:19:54] Like you and I have been trained to work a certain way, use certain tools for our entire professional lives.

[00:20:00] And so a simple thing like, oh, could I just ask the AI to search for the part where I started the interview again and it does it?

[00:20:09] Or like I was doing something yesterday where it was like, wait, did I actually have a meeting with that person?

[00:20:15] And I'm doing the thing where it's like you put in the email into search and then you search through the email history.

[00:20:20] And you're like, wait, as I'm doing that, I'm like, wait, what if I just asked AI, did we ever have a meeting?

[00:20:25] And boom, it says, yes, you met on this date, you know, at this place, et cetera, et cetera.

[00:20:31] So it's funny, but people our age, a lot of people, have to kind of rewire their brains.

[00:20:39] It's not like AI can do everything yet, but you have to remind yourself, hey, maybe AI can do this.

[00:20:44] Give it a try.

[00:20:45] Do you know what I mean?

[00:20:47] Yeah, absolutely.

[00:20:48] It really rewards experimentation.

[00:20:50] You know, what Ethan Mollick, the Wharton professor who wrote the book Co-Intelligence and posts endlessly about new papers about AI, has said is that it has a capability overhang.

[00:21:04] And the way this is usually formulated is: if we froze the capabilities of the existing frontier LLMs, AI models, whatever, VLMs, today, we would still spend the next decade or more figuring out new ways to apply them and realizing new capabilities they have.

[00:21:25] And now, to be clear, part of that is that the way we're connecting those models to tools (APIs, software, things on the Internet, giving them direct access to our computers, as you can do now with Claude Cowork, if you so dare) is a big source of those new capabilities.

[00:21:48] My favorite example of this is that, historically, Copilot in Office 365 is unreliable in Excel, but Claude is good at Excel.

[00:22:03] And you say, why is that right?

[00:22:04] Did they just make a more capable model?

[00:22:06] Sort of.

[00:22:07] But they actually were just like, look, when you're dealing with Excel, write the Python code and go through the formal mathematical logic, because you're an LLM.

[00:22:15] You're not great at math.

[00:22:16] but you can write code, you know, and you can take direction.
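The pattern being described, having the model write code for the math instead of doing arithmetic token by token, can be sketched with a minimal harness. This is a hypothetical illustration, not Anthropic's actual implementation: the string below stands in for a model's reply when asked to total a spreadsheet column.

```python
# Sketch of "write the code, don't do the math in your head":
# instead of trusting an LLM's token-level arithmetic, treat its reply
# as Python source and execute it deterministically. This string stands
# in for a (hypothetical) model response to "total cells B2:B4".
model_generated_code = """
values = [1234.56, 789.01, 4321.99]  # cells B2:B4
result = round(sum(values), 2)
"""

namespace = {}
exec(model_generated_code, namespace)  # run the generated snippet
print(namespace["result"])  # 6345.56
```

The answer comes from actually executing the generated code, so the total is exact regardless of how good the model is at mental arithmetic.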

[00:22:21] So what I see is almost daily, I'll try something one week

[00:22:26] and it doesn't really work.

[00:22:28] And then the next week it's like, click.

[00:22:30] You know, it's like Google has added that capability,

[00:22:33] that little hook into personal intelligence, for example,

[00:22:36] or Claude or ChatGPT has gotten a little bit better.

[00:22:40] But then also sometimes you see these kind of shortcomings where it's like,

[00:22:44] oh, it hasn't occurred to a smart engineer to bring all of these different models to parity yet.

[00:22:48] A funny example is, you know, internally at the Wall Street Journal,

[00:22:52] if people want to transcribe a short call, they'll just dump it in Notebook LM.

[00:22:57] It's good at that.

[00:22:58] But guess what?

[00:22:59] If you do the same thing and you dump it into Gemini Pro and you ask it for quotes,

[00:23:03] it will start to hallucinate quotes at you.

[00:23:05] And it's like, why is that?

[00:23:07] This should be the same model.

[00:23:08] Right.

[00:23:09] It is funny the degree to which I am using three different services, and now two different, you know, on-my-computer models, because certain things are good at certain other things. But let's get into some of the examples of people actually incorporating this, as we're talking about. So, like, one of the most memorable characters in the book is the Texas lawyer.

[00:23:35] I think she's a personal injury lawyer.

[00:23:37] And she was using, what, AI deposition co-pilots monitoring her in real...

[00:23:43] How did she incorporate AI into her work?

[00:23:48] Yeah, this was such a fascinating example because it was a really early example of agentic AI outside of coding. Because, I mean, you know, the lead times on books, so I interviewed her more than a year and a half ago.

[00:24:01] And so what Filevine's deposition co-pilot does is it's listening to a lawyer interview the person who's on the other side of the case.

[00:24:12] So it's like a little mini courtroom drama.

[00:24:15] And they're saying, like, well, you know, did you depart from your lane, you know, and hit my client?

[00:24:19] You know, because they're talking about personal injuries.

[00:24:21] It's like always car accidents.

[00:24:23] And the deposition co-pilot is listening and in real time is transcribing the conversation.

[00:24:28] and it has been preloaded with the questions that the lawyer needs answered.

[00:24:32] And it will not check off that you have sufficiently answered that question

[00:24:36] until in its judgment, it has heard you get the answer that you said you needed

[00:24:42] from the person that you're speaking with.

[00:24:45] And it might seem like, oh, how hard is that to do really?

[00:24:47] But it turns out when you're a solo lawyer doing it in the moment

[00:24:51] and you have all the context of the case, you need people to say certain words sometimes.

[00:24:57] It's not enough for them to be like, yeah, I departed my lane.

[00:25:01] You need them to say, yeah, I cut off your client or something like this, right?

[00:25:06] Which they have to say because it's perjury if they lie.

[00:25:10] And so to me, it was just such a fascinating example of this real-time agentic AI assistant.

[00:25:16] It's listening throughout this entire process.

[00:25:18] It's giving you real-time feedback.
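The checklist behavior just described can be sketched in miniature. This is a hypothetical stand-in, not Filevine's implementation: in the real product an AI model judges whether an answer is sufficient, while here simple phrase matching plays that role.

```python
# Miniature sketch of a deposition checklist: a question is checked off
# only when the live transcript contains language judged sufficient.
# Phrase matching stands in for the AI judgment the real product uses.
required = {
    "Did the witness admit cutting off the client?": ["cut off", "left my lane"],
}
answered = {q: False for q in required}

def on_transcript_line(line: str) -> None:
    text = line.lower()
    for question, phrases in required.items():
        if any(p in text for p in phrases):
            answered[question] = True

on_transcript_line("Yeah, I departed my lane.")          # not sufficient
on_transcript_line("Fine, yes, I cut off your client.")  # checks it off
print(answered)
```

Note how the first answer, though related, does not satisfy the checklist, which mirrors the point about needing the witness to say certain words before the item is marked done.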

[00:25:21] And I think we're going to get more and more of that where, you know, the AI is ambient and it's just like, I mean,

[00:25:28] the danger is the clippy effect, right?

[00:25:30] Like, it seems like you're writing a memo.

[00:25:33] Do you want this?

[00:25:34] But there are ways to intelligently insert that.

[00:25:36] I mean, I have been really blown away.

[00:25:39] I mean, Google, you know,

[00:25:40] Gemini's not the best model for any particular thing,

[00:25:42] but nobody knows more about you.

[00:25:45] And I have been blown away recently

[00:25:47] at some of the suggested responses to emails

[00:25:50] at work and on my personal email

[00:25:52] where, like, people will ask me, like, detailed questions,

[00:25:55] and it'll be like,

[00:25:56] here's all the answers to the questions they asked you.

[00:25:58] And I'm just like, that's incredible.

[00:26:01] I will lightly edit this and send it.

[00:26:03] And so that level of embedding just always on ambient AI in our daily lives, I think we're just at the beginning of it.

[00:26:15] Tell me about the Clorox story, because there are details in there that I don't think most people are aware of.

[00:26:23] but there has been a lot of advancement in using AI for product innovation,

[00:26:29] like jamming on how we can, and not even just software,

[00:26:33] but literal IRL products and things like that,

[00:26:37] designing new features and stuff,

[00:26:39] or even things like supply chain planning and stuff like that.

[00:26:42] So tell me a bit about the Clorox, what they're doing with AI right now.

[00:26:47] Yeah, Clorox is a funny example because they were super early adopters of having the AI.

[00:26:52] I mean, I think they were using Copilot. So it was basically early ChatGPT in their brainstorming sessions for new products.

[00:26:58] And they have a very, you know, algorithmic series of steps.

[00:27:03] It takes months, months and months, you know, focus groups and everything else to come up with new ideas.

[00:27:08] And one of the funny stories they told me was, you know, the toilet bomb, if you've seen it, they're like, that happened because of the app.

[00:27:15] We were brainstorming about like, how do you clean products and stuff?

[00:27:17] And we had this kind of AI tool that, like, digests what we're seeing on social media talking about competitors' products.

[00:27:24] And that gave us some insights about, you know, related products.

[00:27:29] And then we were kind of all, you know, basically talking with the AI, talking with each other.

[00:27:35] And it was like, well, what if you just like, like toilet grenade or something?

[00:27:39] Like, what if you bombed your toilet?

[00:27:40] And they were like, we would never think of that on our own.

[00:27:42] But in these sessions, AI can be good at injecting randomness, right? Like, it's a way to take advantage

[00:27:49] of its, like, hallucination, its stochastic nature, because it can just help give you

[00:27:56] random ideas. I mean, it's funny, I once did a session with, um, a professional songwriter. Like,

[00:28:02] this guy's a legit musician, this is his entire life. He writes these beautiful, uh, like Tom Lehrer

[00:28:08] songs that are very funny. And he walked me through like different like songwriting techniques.

[00:28:13] And it's funny how many of them involve injecting randomness into your creative process. So that can

[00:28:21] be a way that people use it. And there's tons of studies in this that find that like a person plus

[00:28:25] an AI will come up with more ideas for new businesses if they're in a business school

[00:28:29] class than a person on their own. Teams with an AI added in will come up with new and objectively

[00:28:33] better ideas. You know, the key thing is that you're not just going to it and being like,

[00:28:39] hey, come up with a new idea for me. Like, you're just making it a contributor to your very human

[00:28:43] process. I mean, another thing is that there are so many ways in which classical AI kind of gets

[00:28:50] pushed to the side in this era of generative AI, but classic predictive as opposed to generative AI

[00:28:56] is enormously important if you're doing like supply chain planning. So basically, you're

[00:29:01] trying to like match production with demand and distribution if you're a company like Clorox.

[00:29:07] And, you know, they're just using classic machine learning techniques, getting incorporated into

[00:29:12] that process is still kind of a revolutionary idea. So you see that there's so many different

[00:29:17] tracks where there's different eras of AI that are still kind of yet to be incorporated into all

[00:29:23] the business processes that they could be in. Right. You talk about Allstate when you're talking

[00:29:28] about classic AI, right?

[00:29:29] Because they're running like dozens

[00:29:31] of different machine learning models

[00:29:34] to process claims and stuff like that.

[00:29:39] But that's the old school use of AI,

[00:29:43] but that's still actually being transformative

[00:29:46] right now as well.

[00:29:47] Yeah, hugely transformative.

[00:29:49] And I think that the popularity of generative AI

[00:29:52] has kind of opened the door for the data scientists

[00:29:56] and the sort of long-suffering PhDs who've been, like, stuck in the back, to kind of get more traction internally and be like, look, there are all these different ways that we can do things for you and contribute. I mean, another one of my favorite examples is that, you know, Facebook, you know, you

[00:30:09] look at their blowout revenue last quarter.

[00:30:12] I mean, so much of their ad targeting is just classic machine learning.

[00:30:15] Yes, I know generative AI is contributing in different ways there.

[00:30:18] But, like, when people say they're really killing it with AI, I'm like, yeah, a lot of

[00:30:22] the way they're killing it with AI is AI from five years ago.

[00:30:26] Um, you, um, the current thing has been, like, oh, this is the year of robotics. Like, I saw a lot of

[00:30:37] robots at CES and whatever, and Jensen Huang is, you know, going on all the time about robotics being,

[00:30:43] you know... I feel like you were skeptical that we're really on the verge of a robot revolution, but

[00:30:48] Have you changed your mind on that as well?

[00:30:53] No, I don't think I've changed my mind on that.

[00:30:56] I mean, it is just really hard.

[00:30:58] I mean, obviously, the most important part of this robot revolution right now is self-driving.

[00:31:07] And Waymo is doing amazing things there.

[00:31:10] The open question there is, can they solve all of the nitty gritty fleet management and other aspects of it and drive those costs down to make it an actually profitable business?

[00:31:25] Like, that is still a big question mark.

[00:31:28] Obviously, in terms of like warfare, what's going on with AI drones in Russia and Ukraine is absolutely astonishing.

[00:31:37] and obviously in logistics

[00:31:41] there's just tons and tons of stuff happening in terms of

[00:31:45] just making things get out the door of the warehouse

[00:31:48] so all of those things are kind of individually incrementally transformative but this idea

[00:31:53] that we're going to have humanoid robots which

[00:31:56] are going to have this takeoff moment, this ChatGPT moment, and are suddenly going to be

[00:32:01] doing a huge variety of tasks, I think it remains pretty silly

[00:32:05] One of the things that you say that I agree with: people say, you know, um, data at this point in the

[00:32:14] AI, um, revolution is the new rare earth. People say data is the new oil or whatever.

[00:32:19] If that is the case, um, who do you think is sitting on, like, the huge oil field right now? Who's

[00:32:30] the Saudi Arabia, whether it be one of the

[00:32:33] FANG companies or

[00:32:34] one of the model companies or

[00:32:36] one of the, like, who is

[00:32:38] quietly right now, you think,

[00:32:40] sitting on the data that is going to

[00:32:42] be the most valuable sort of

[00:32:44] leverage point?

[00:32:47] Well, first off,

[00:32:49] if one of the big frontier model

[00:32:50] companies has licensed some

[00:32:52] huge volume of data

[00:32:54] that the others don't know about, then

[00:32:56] it's them. But none of us know

[00:32:58] what that is yet. I mean, it is very telling.

[00:33:00] that companies like OpenAI have done huge deals with my own employer and others

[00:33:05] where they're ingesting all of the journalism and other data that we produce.

[00:33:11] I think so far there's not some big megacorp data roll-up out there.

[00:33:18] What I find really amazing is that individual companies

[00:33:23] are the ones who are sitting on that huge reserve of data.

[00:33:26] So, you know, LexisNexis, with their 30 years of case law databases, is an incredible example.

[00:33:37] Goldman Sachs is an incredible example.

[00:33:40] If you look at how they're starting to use AI internally, right?

[00:33:43] Like, who has more data that could be fully mined than J.P. Morgan or Goldman Sachs?

[00:33:52] In other fields, you know, I don't know.

[00:33:54] I mean, it remains to be seen how much medical and pharmaceutical companies are able to mine that sort of thing.

[00:34:04] So, and I think there's a bit of a land grab now where people are trying to go and get that data.

[00:34:09] And you see it when the big model companies try to, where they recognize, oh, somebody's doing something with our model and their data.

[00:34:17] What if we just ate that thing?

[00:34:19] So Anthropic has started to make moves like that in the legal field.

[00:34:23] You're definitely going to see that in medicine.

[00:34:26] I think Google and OpenAI have made moves in that direction.

[00:34:32] We just talked about robotics and whether we're on the verge of a ChatGPT moment.

[00:34:37] You talk about science and scientific research towards the end.

[00:34:42] How close do you think we are to waking up tomorrow and being like, oh, we just cured Alzheimer's?

[00:34:47] Maybe that's two, but how close do you think we are to seeing real scientific breakthroughs because of this AI era?

[00:34:55] Yeah, I think we're already there.

[00:34:56] They're happening every day.

[00:34:55] My favorite fact about this, that people don't realize: without AlphaFold, or maybe it was an earlier version of AlphaFold, which is the protein-folding model created by DeepMind, you would not have the COVID mRNA vaccine.

[00:35:12] The speed of that rollout was uniquely enabled by that.

[00:35:17] You cannot go to one of these scientific conferences and not see thousands,

[00:35:23] tens of thousands of presentations that are using basic technology that is unlocking the way

[00:35:30] that proteins fold or genes are translated into proteins.

[00:35:35] I mean, this is the basic mechanics of life

[00:35:38] on which all of molecular medicine,

[00:35:40] all future synthetic biology,

[00:35:43] so many pharmaceutical breakthroughs are based,

[00:35:46] they're all using this so heavily every day.

[00:35:50] And so we're absolutely there now,

[00:35:54] and I think we're going to see more of that.

[00:35:57] I mean, to me, it's one of these...

[00:36:00] We get so focused on revolutions that are very apparent or sexy or that we can understand.

[00:36:06] But in the same way that, you know, the technology to create microchips along the way created nanotechnology.

[00:36:14] And most people just don't know that, that we're surrounded, that our phones are full of like literally nanoscopic machines, not just microchips, that are all etched in the same way that microchips are etched.

[00:36:26] People just don't know that this is happening.

[00:36:30] In the course of writing this book, have you become more optimistic or more concerned about where AI is taking us, you know, big picture?

[00:36:41] Yeah, I wish I could remember who said this, but they said, you know, most concerns about AI are actually concerns about technology. And most concerns about technology are actually concerns about capitalism.

[00:36:55] And so I am very concerned about how we're going to apply these general-purpose technologies like AI,

[00:37:03] because of the systems in which we are deploying them. Like, when Ring does a Super Bowl ad where

[00:37:10] they're like, we can find your dog, and I'm like, my dog that's trying to evade ICE, do you mean?

[00:37:16] Like, that is something we should all be concerned about, I genuinely hope. There's a really funny post

on Bluesky today where somebody was like,

[00:37:25] forget, they're like, you know, the old phrase,

[00:37:28] like you can't take apart the master's house

[00:37:31] with the master's tools.

[00:37:31] And they were like, forget that.

[00:37:33] They're like, the master's tools are awesome.

[00:37:35] You should absolutely use them to take apart their house.

[00:37:37] And if you don't, you should hope that somebody

[00:37:39] who's willing to is on your side.

[00:37:41] I feel like we're just at the dawn of sort of, you know,

[00:37:46] to be warm and fuzzy for a second, sort of AI for good.

[00:37:49] And I think that there absolutely is going to be this arms race between people who want to use it for surveillance and control and people who, in the spirit of the original Internet, want to and are able to use AI to fight against that and to hopefully empower everybody else.

[00:38:08] Okay, last question.

[00:38:10] And this is something I just have been thinking about this week, so it might not be fully formed.

[00:38:14] And it's not even a question.

[00:38:15] It's a thought.

[00:38:16] I want to see what your take on it is.

[00:38:19] We were talking at some point about using AI, and it's making people more productive,

[00:38:25] and, oh, I can do this, I can do this, but it's also burning them out.

[00:38:28] And this week, like I said to you, I'm using this model, this model over here.

[00:38:33] If I weren't recording right now, I'd have a process running on my machine in the background.

[00:38:38] What I experienced this week, where I was like, I'm going to turbocharge everything that I can with, okay, running AI in the background,

[00:38:48] So like you set it to do something and you're like, great, I can go off and do something else while that's happening.

[00:38:55] But you don't because you come back and you check on it.

[00:38:59] You come, oh, okay, now I got to fix it because it didn't get it exactly right.

[00:39:04] What I felt like this week was I wasn't getting burned out, but it was feeling to me like social media does in the sort of slot machine way.

[00:39:15] where you keep coming back to it to try to get that dopamine hit.

[00:39:19] And that dopamine hit is it did the job that you asked it to do.

[00:39:24] I'm curious, again, I haven't thought this out fully yet,

[00:39:27] but I wonder if there's some way how AI, how the user interface,

[00:39:34] how the process of using AI right now does feel addictive

[00:39:37] and does feel overwhelming as opposed to set it and forget it.

[00:39:43] Yeah, I mean, as humans, the trap that we fall into so easily, especially

[00:39:52] with technology, is the trap of intermittent reward, right? If you're getting intermittent

[00:39:59] reward, it doesn't matter what it is: social media, AI successfully completing a task,

[00:40:03] uh, drugs of addiction, obviously. You will get hooked on it. And we just keep inventing new ways to hook us on intermittent reward. And especially in an age when, you know, hiring is flat or has slowed, you know, in all the

[00:40:24] industries that involve knowledge work, there's this additional kind of terror on top of that.

[00:40:29] Like, if I don't keep up and I'm not as productive as possible, then, you know, I could lose my

[00:40:35] ability to make a living. So that's a uniquely sort of toxic combination. And I think we all

[00:40:41] have to be wary of that. What I try to do with AI, this is my own personal rebellion. People always

[00:40:46] talk about the 10x engineer, and they're like, oh, AI is going to create so many more 10x engineers,

[00:40:50] or just make engineers 10x as productive. I try to find ways to use AI to make myself into

[00:40:57] the 0.5x version of myself. Like, imagine sort of a 10x engineer who's 10x more productive.

[00:41:04] What about a 1x engineer who does half as much work?

[00:41:08] So to give you just like one very simple example, which has been transformative for me, and I try to evangelize this.

[00:41:14] When the weather is nice outside and I have to go do an interview or have a call with somebody because I have my AI that's going to record the entire interview.

[00:41:22] I'll put in my earbuds.

[00:41:23] I will go for a walk.

[00:41:25] And I tell people in advance.

[00:41:25] I'm like, I'm going to take you on a walking meeting.

[00:41:28] And I'm having better interviews than ever because I'm just strolling along.

[00:41:33] You know, I'm muted, so they're not listening to the birds singing, they're not distracted. I'm staring at

[00:41:37] trees, I'm not looking at a screen, I'm not wondering about my next meeting. I'm having a, like, Steve Jobs-

[00:41:43] style walking meeting with people. And, you know, I'll get, you know, five, ten, twenty thousand extra

[00:41:50] steps than I would in a given day, and it's all because of AI. It's because I know AI is recording

[00:41:57] the entire thing. It's because I know that, like, when I go back later and I'm like, didn't that

[00:42:02] person say something about X or Y, I don't have to do a keyword search for it. I can just ask the

[00:42:07] AI, when did they say this? It's going to give me a summary at the end. It has freed me from my desk.

[00:42:14] And I think that is such a simple example of when we apply it intelligently, I'm liberated from my

[00:42:22] desk. If I'm a doctor, it's liberating me from tons of data entry. If I'm a lawyer, it's liberating

[00:42:29] me from having to record every word, you know, in a particular deposition. I think there are going

[00:42:37] to be more examples of this. I mean, AI intensifies work. That is just the nature of technology.

[00:42:45] But I think that there are ways, for now at least, until everybody's using it, and it's obligatory

[00:42:50] that we're all 10x version of ourselves, that we can use it to actually do less work.

[00:42:55] Well, and I wonder if that is the great philosophical divide of the next few years, the people who want to embrace AI to do less to be the 0.5x version of your job versus the people that are like, no, no, no, no, I need to use this to do 10x.

[00:43:14] And, you know, that could apply better in certain jobs than other jobs.

[00:43:20] But, right, I wonder if that's the divide that people are going to fall down on, using AI to do more or using AI to be just as good but free myself up from cognitive load, you know, annoying stuff, whatever.

[00:43:34] Yeah, let me just – one final thought.

[00:43:36] I don't think those have to be in tension,

[00:43:37] if we are truly becoming like leaders of our own little teams of agents.

[00:43:43] I always think about Jeff Bezos when he was very much the CEO of Amazon

[00:43:47] in its later years of him being CEO.

[00:43:49] He said, you know, like, I need to get a good night's sleep.

[00:43:53] I need to get exercise.

[00:43:55] I have to make on average two or three really consequential decisions per day.

[00:44:00] That is my job as CEO.

[00:44:01] That's almost the whole of it.

[00:44:03] and I think that there is so much to be said for using this to take over certain toil

[00:44:09] so that we can free up that space to like just zone out or daydream because there's a huge

[00:44:18] difference between I'm going to use AI to like do this work so much faster and I'm going to use AI

[00:44:23] to free myself to know exactly what I should be doing next because we all know in those moments

[00:44:30] of clarity, that is as good as six months of working tirelessly but in the wrong direction.

[00:44:38] Right. Well, once again, the book is How to AI, Cut Through the Hype, Master the Basics,

[00:44:45] Transform Your Work by Christopher Mims. Chris, thanks for talking. It's a great book,

[00:44:50] but also I appreciate the opportunity. It's been a while since I've been able to noodle

[00:44:54] philosophically about what's going on, and that's what this book is all about, so appreciate it.

[00:44:59] Yeah, appreciate you. I love when you noodle.

[00:45:02] Please do it more on the pod. I want to hear more about AI

[00:45:04] varietals.

[00:45:05] Okay.
