Transcript: A constitution for AI, breaking dark flow, and open source as a moat?

Jan 31, 2026

Source: Dev Interrupted | Duration: 23 min

Summary


Opening context:

  • The episode discusses the growing impact of AI, including the viral OpenClaw assistant (renamed from Clawdbot and Moltbot) wiring Claude into people's daily lives. The hosts explore how this is transforming software development and the broader business landscape.

Key discussion points and insights:

  • Steve Yegge's essay "Software Survival 3.0" examines how software needs to solve for one of three eternally constrained resources (tokens, energy, or money) to survive in the AI era. Examples like grep show how highly efficient tools can maintain a moat.
  • The "dark flow" problem with "vibe coding" - using AI to generate code without proper review. This can create a false sense of productivity and lead to technical debt.
  • Anthropic's publication of a "constitution" to govern the ethics and behaviors of their AI model Claude. This transparency is seen as an important step in responsible AI development.
  • The example of an open-source developer rapidly porting CUDA code to AMD's ROCm platform using AI, showing how open source can be a moat by enabling faster innovation.
  • A deep dive into one listener's analysis of his podcast listening habits over the past year, using data visualization techniques.

Notable technologies, tools, or concepts:

  • Gastown (the AI-driven engineering process Steve Yegge has been working with, covered in recent episodes)
  • Vibe coding (using AI to generate large amounts of code)
  • Anthropic's AI model Claude and its published constitution
  • CUDA and ROCm (NVIDIA's and AMD's GPU computing platforms)
  • Data visualization and podcast listening analysis

Practical implications or recommendations:

  • Companies need to carefully assess their software "moats" in the AI era, looking at efficiency, data compression, and agent-friendly design.
  • Responsible AI development requires transparency, like Anthropic's constitution, to govern model behaviors.
  • Open-source platforms can enable faster innovation and become a strategic moat.
  • Analyzing one's own content consumption habits can yield valuable insights.

Overall, this episode provides a wide-ranging look at how AI is reshaping the software industry, from development practices to business models, and the importance of proactive, ethical approaches to this transformative technology.

Full Transcript

[00:00:00] Welcome to another Friday edition of Dev Interrupted.

[00:00:09] I'm your host, Andrew Zigler.

[00:00:11] And I'm your host, Ben Lloyd Pearson.

[00:00:13] So Ben, how's your week been?

[00:00:15] You know, I'm looking at the stories on our desk for today,

[00:00:17] and there's a lot of stuff going on.

[00:00:19] Doesn't it feel that way to you?

[00:00:21] Yeah, the AI era is not slowing down anytime soon, it would appear,

[00:00:25] because here's some of the stuff that we're covering this week.

[00:00:27] So: recalculating SaaS moats in the AI era, breaking AI's dark flow spell, reading Claude's public rulebook, and AI draining CUDA's GPU moat.

[00:00:38] Andrew, where do you want to start?

[00:00:40] Well, all those things sound cool, Ben, but there is something else that we have to talk about first.

[00:00:44] Then it's the elephant in the room, or maybe I should say the lobster in the room in this case.

[00:00:48] There's been a whole phenomenon shaking the internet, introducing people to AI assistants this week.

[00:00:53] I think it's called like Clawdbot or something.

[00:00:57] Oh, no, actually, no.

[00:00:58] Isn't it called Moltbot?

[00:01:00] Have you seen this?

[00:01:01] No, no, no, no.

[00:01:01] You haven't heard the latest news.

[00:01:03] Now, as of this morning, I guess, it's called OpenClaw.

[00:01:06] I mean, it's, you know, we're in the AI era.

[00:01:08] You got to change the name as quickly as you can prompt it, you know?

[00:01:11] I love that.

[00:01:12] It's like the idea of the software being so useful that no matter how much it changes its name, we're all racing to it.

[00:01:17] And we still keep talking about it.

[00:01:19] Exactly.

[00:01:20] Exactly. So this phenomenon, this AI assistant, it kind of turns Claude into a superpowered user of really your whole life.

[00:01:28] People are setting this up on their machines and giving it access to really all of the kinds of software applications that they use on a day to day basis, including confidential information.

[00:01:37] It's effectively Claude Code wired into your life, with a lot of unique hooks and abilities to talk with you and interact over messaging platforms,

[00:01:47] the idea being that you can host this virtual assistant somewhere on your own machine or in

[00:01:52] the cloud and talk to it from anywhere. And that's an amazing phenomenon. I think I've been

[00:01:57] experimenting with my own formats of this, but I can't say that I have connected something like

[00:02:03] OpenClaw to my life. What do you think? Yeah, goodness. No, I'm not quite ready to make that

[00:02:08] step. Yeah. I mean, I love the idea, and it really shows to me how we're sort

[00:02:15] of on the verge, I think, of like the cat coming out of the bag, so to speak, on what agentic AI

[00:02:21] is going to look like when it actually starts to hit our real life. So first of all, I would not

[00:02:26] be connecting any of my sensitive stuff to it. I don't really even want it running on my

[00:02:30] personal laptop. I've even heard some people maybe are live streaming it too. And that just sounds

[00:02:35] like a recipe for disaster to me, but it would actually be really awesome to have some sort of

[00:02:41] separate device within my home or even hosted somewhere where I have this thing running for me.

[00:02:46] So it's almost like an agent that I can just call in whenever I need its help. But yeah, I mean,

[00:02:50] I think this is starting to open up the door to what software engineers have started to already

[00:02:58] discover in terms of how to work. Now it's sort of broadening to more knowledge work capabilities.

[00:03:04] So I think it's just a really good representative of like the zeitgeist of what we're all

[00:03:09] starting to experience with AI. Yeah, I strongly agree. The two points you made there about one,

[00:03:14] not necessarily having the appetite to put it on your own machine, but recognizing the power of

[00:03:19] having it somewhere accessible in the cloud that you can work with. And then two, the widespread

[00:03:24] mainstreamness of this coming to people, I think that the recipe of those two things coming together,

[00:03:29] we're going to see a really unique evolution in software that we've all been alluding to

[00:03:33] in the industry, the explosion of people creating their own micro apps and services and

[00:03:38] departments within companies vibe coding solutions instead of renewing their favorite

[00:03:42] vendor for the year. And that brings us to the first article that we want to talk about today,

[00:03:47] which is the latest from Steve Yegge, which is about Software Survival 3.0. And in this essay,

[00:03:53] Steve lays out the idea of a survival ratio for software going forward based on his own reflections

[00:03:59] of having worked with a type of engineering process known as Gastown. You know, if you're not

[00:04:04] familiar with Gastown, check out our most recent episodes. We've done lots of coverage about this

[00:04:09] topic and about how it's transforming engineering in an AI way. But in this essay, Steve talks about

[00:04:15] how tokens, energy, and money are now three parts of our life that are eternally constrained by each

[00:04:22] other. And in this world, software needs to solve for one of those three things in order to

[00:04:30] survive. And he lays out some really unique positions that some software are in, whether

[00:04:35] SaaS or things that run on your local computer, to survive in an AI world. One that stood out to

[00:04:41] me is like grep or ripgrep. You're talking about an incredibly efficient text searching process

[00:04:46] with your CPU that the GPU is never going to be better at. And to build something that's more

[00:04:51] effective than grep would cost more tokens than it would realistically save you. So an application

[00:04:56] like grep survives. He breaks this down into further layers. I thought it was a really interesting

[00:05:01] dive into the kinds of software he thinks will survive, based upon his own workings with this

[00:05:06] tool. Now, it's coming from like four decades of engineering experience, so there's a lot to unpack

[00:05:11] here. Ben, what stood out to you? Yeah, it really puts into perspective the nature of the

[00:05:18] build versus buy equation and how that's fundamentally changed in the era that we're

[00:05:22] entering into. You know, we're very quickly getting to a point where, particularly for like niche

[00:05:27] SaaS vendors, it's sometimes easier now to just have an AI agent build the specific capabilities

[00:05:34] that you need. You know, often you may not need an entire platform, or you might want the entire

[00:05:40] platform but want some different changes. It needs to work differently than what the vendor provides to you. You know, and now we're getting to a point where if you have the time to give somebody on your team, like an engineer, a day or two with Claude Code, they may actually be able to get you like 80,

[00:05:56] 85% of the way there. And this is really the first article that I've seen that has really

[00:06:01] tried to break down how companies can still build moats for themselves and how to determine if

[00:06:08] you're one of the companies that's at risk. And there's a lot of levers that are described in this

[00:06:13] article that I think frame it really well. So there's stuff like insight compression,

[00:06:17] like does your company extract large sets of data or complex data and pull insights out of it and

[00:06:26] sort of deliver those in a compressed format. That's a great moat to have. Maybe you're a

[00:06:31] company that solves like a more deterministic problem more efficiently than anyone else.

[00:06:36] Like, you know, a good example he provided was grep.

[00:06:40] Like, it's hard to be more efficient than grep, you know, for example.

[00:06:45] And there's a lot of other things that are going to matter as well, like outside of engineering,

[00:06:49] like how discoverable and usable is your platform for agents?

[00:06:53] Like when they see it, does it, for lack of a better phrase,

[00:06:58] make the agent want to consume more of your product?

[00:07:02] You know, and he, of course, mentions like the concept of the agent experience,

[00:07:05] which I think is just growing in relevance.

[00:07:08] So, you know, I'm super fascinated by this topic

[00:07:11] of building moats in the AI-driven era.

[00:07:14] And, you know, we've been trying to get Steve Yegge

[00:07:16] to come on for a while now.

[00:07:17] So, you know, if you're out there listening, Mr. Yegge,

[00:07:20] like we would love to talk to you about this.

[00:07:23] Please.

[00:07:24] But everyone needs to read this article.

[00:07:25] I think everyone who works in SaaS in particular.

[00:07:29] Indeed.
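To make Yegge's survival-ratio point about grep concrete, here's a rough back-of-envelope sketch. All the numbers (corpus size, scan rate, token pricing) are illustrative assumptions, not figures from the essay:

```python
# Back-of-envelope "survival ratio" sketch: what does it cost to search a
# corpus with a grep-class CPU scan versus pushing the same bytes through
# an LLM? Every constant below is an assumption for illustration only.

GB = 1_000_000_000

corpus_bytes = 5 * GB                 # assumed size of the code/logs to search
scan_rate = 2 * GB                    # assumed ripgrep-class scan rate, bytes/sec
compute_price = 0.10                  # assumed USD per CPU-hour
bytes_per_token = 4                   # rough average for text and source code
token_price = 3.00                    # assumed USD per million input tokens

cpu_seconds = corpus_bytes / scan_rate
cpu_cost = cpu_seconds / 3600 * compute_price

tokens = corpus_bytes / bytes_per_token
llm_cost = tokens / 1_000_000 * token_price

print(f"CPU scan: ~{cpu_seconds:.1f} s, ~${cpu_cost:.6f}")
print(f"LLM read: ~{tokens / 1e6:,.0f}M tokens, ~${llm_cost:,.2f}")
print(f"The LLM route is ~{llm_cost / cpu_cost:,.0f}x more expensive")
```

Whatever exact numbers you plug in, the gap is several orders of magnitude, which is the sense in which a tool like grep "solves for tokens" and keeps its moat.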

[00:07:30] So do you want to dive into the next one?

[00:07:32] Yeah.

[00:07:32] Now, maybe in all this world that we're talking about with Gastown, maybe it's some sort of mirage, right?

[00:07:39] And I think that's kind of a bit of what this hints at.

[00:07:41] Yeah, so let's move from this futuristic view from Steve Yegge to sort of almost the opposite end of the spectrum and talk about vibe coding and the kind of spells that it might put on your team.

[00:07:54] So, you know, I think many of our listeners are already familiar with vibe coding, but it's, you know, generating large amounts of complex AI code

[00:08:02] that doesn't really get reviewed by a human, although I do sort of contest that as having to be a trait of vibe coding.

[00:08:09] But, you know, in particular, there is a lot of pressure from companies now to have, like, quotas for AI-generated code, even going sometimes as far as to justify layoffs.

[00:08:18] And this article brings up a study that I've seen referenced a lot from METR, which found that the developers estimated they were 20% faster with AI, but they were actually measured to be about 19% slower.

[00:08:38] So, you know, there's this gap between perception and productivity around AI.
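As a quick arithmetic illustration of that gap (the 100-minute baseline task is an assumption; only the 20%/19% figures come from the study as cited here):

```python
# The METR headline numbers applied to an assumed 100-minute task.
baseline = 100.0                 # minutes the task takes without AI (assumed)

perceived = baseline / 1.20      # developers *felt* ~20% faster
measured = baseline * 1.19       # developers were *measured* ~19% slower

print(f"felt like:  {perceived:.0f} min")   # ~83 min
print(f"actually:   {measured:.0f} min")    # ~119 min
print(f"gap between feeling and reality: {measured - perceived:.0f} min")
```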

[00:08:42] And then on top of that, you just have all these CEOs and leaders around the world just

[00:08:47] saying that AI is replacing developers.

[00:08:49] It's writing more and more like a higher percentage of code for their team.

[00:08:53] So, you know, sort of coming off the high futuristic view that Yegge has, I think we

[00:08:58] do occasionally need to sprinkle in a little bit more like conservative, I guess, pessimism

[00:09:03] for lack of a better term.

[00:09:05] But, Andrew, what do you think about this article?

[00:09:06] So, you know, I really appreciated how this article went deeper than a lot of

[00:09:13] articles do, which just quote the METR study about how the perception of working

[00:09:17] better or more efficiently with the tools is actually a perception that maybe doesn't

[00:09:21] match reality.

[00:09:22] This goes even further and kind of breaks down the flow state that developers and many

[00:09:26] other people experience when working on things like coding and how vibe coding and the experience

[00:09:31] of working with agentic code kind of inverts that experience to where you experience the same kind of

[00:09:38] flow state but your rewards and long-term gains of that flow state are less because you yourself

[00:09:44] maybe are over-correlating your skills with the outputs of the LLM, or because you're so abstracted

[00:09:51] from the end result that it doesn't actually result in a built skill. And so I really

[00:09:56] thought it was fascinating how it highlighted this like dark flow state and how it even has

[00:10:00] parallels to things like gambling, where maybe you gamble 20 cents and you're awarded a victory

[00:10:06] of 15 cents and the machine will celebrate, right? But it's actually a loss in disguise.

[00:10:12] And so with vibe coding, you end up taking a bunch of little losses along with the wins.

[00:10:17] You accumulate tons of tech debt. And this is where it comes back to building and thinking as

[00:10:21] an engineer and engineering away these problems. This is something our past guests have talked about,

[00:10:25] like Geoffrey Huntley. This is the part where, yes, the flow state can be dark, but you can

[00:10:31] certainly illuminate that flow state and get to a point where you're learning and building alongside

[00:10:36] your tools. So I personally, you know, have a little bit of like reservation about falling

[00:10:41] fully into that narrative that it is not something that builds a skill. But I do think that there's a

[00:10:47] false sense of control that sometimes emerges. Pretty interesting how it ties into the predictions

[00:10:52] about AI code taking over everything,

[00:10:54] it kind of like, you know,

[00:10:55] pooh-poohs the idea that like,

[00:10:57] oh, everything's going to be replaced with AI code.

[00:10:59] But to be honest,

[00:11:00] I think a lot of the trends of this

[00:11:01] are pointing the other direction.

[00:11:02] And maybe there's some further reflection here.
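The slot-machine analogy from a moment ago is easy to make concrete. A minimal simulation, with a made-up win rate, of how celebrated "wins" can still drain the balance:

```python
import random

# Toy model of "dark flow": every attempt costs 20 cents, "wins" pay back
# only 15 and get celebrated, so the session feels rewarding while the
# balance steadily drains. The win rate is a made-up number.
random.seed(42)

balance_cents = 0
celebrations = 0
for _ in range(1000):
    balance_cents -= 20              # the cost of the attempt
    if random.random() < 0.5:        # assumed "win" rate
        balance_cents += 15          # payout smaller than the stake...
        celebrations += 1            # ...but the machine cheers anyway

print(f"celebrated wins: {celebrations}")
print(f"net outcome:     {balance_cents / 100:+.2f} dollars")
```

Swap cents for minutes and tech debt and you have the vibe-coding version: each accepted suggestion feels like a win even when the accumulated rework nets out negative.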

[00:11:05] Yeah, and to that point,

[00:11:06] I like to put things into perspective

[00:11:09] of how much has changed

[00:11:11] over the last couple of years.

[00:11:13] You know, I think two years ago,

[00:11:14] like I could only really rely on LLMs

[00:11:16] to solve a task that would take a human

[00:11:18] probably 15 to 30 minutes to solve.

[00:11:21] And that often, I think, was a stretch for certain types of tasks.

[00:11:26] And if you flash forward to about a year ago, I feel like the mark was about one to two hours. Like, if that's how long it would have taken me to solve the task, I can probably give it to the latest frontier model and it will do a pretty good job at solving it. Now I feel like we're somewhere around like

[00:11:41] the five-hour mark in terms of like what I would rely on AI for. And I know there's been studies that

[00:11:47] are showing that this trend is sort of happening right now. And there's certainly diminishing

[00:11:52] returns on this curve. Like it's not, we aren't seeing really the exponential upward trajectory

[00:11:58] of AI. It does feel like the curve is diminishing at this point. And I think most of the issues that

[00:12:04] people bring up with vibe coding, like not providing clear cues on its performance or

[00:12:09] mismatch between the challenge level and the skill level, like false sense of control, like all of

[00:12:14] those feel very solvable to me. It's just, it's not going to happen through better and stronger

[00:12:19] models, it's going to happen through things like better orchestration and better tooling around

[00:12:24] these things. In fact, one comment that I saw, you know, to tie this to the last story,

[00:12:29] one comment that I saw on Yegge's post was how the early steam engines probably weren't very great

[00:12:34] either. And like, yes, the technology was profoundly transformative, but it took a while

[00:12:40] until we had things like trains and power generators that were hooked up to these steam

[00:12:44] engines. And I think the same thing is happening with AI. Like we have this incredible new

[00:12:49] foundational technology, but we haven't built that train system around it or the infrastructure that

[00:12:54] it really needs to have to drive the biggest impact. And, you know, like I said, I don't think

[00:12:59] the advancements right now are bigger and better models. I think it's people building the

[00:13:03] infrastructure systems and orchestrations around these AI tools. And I don't see that happening

[00:13:09] overnight. This is going to be, this will be slow progress, I think. Yeah, I agree.

[00:13:14] All right, let's move on and talk about Claude's new constitution.

[00:13:17] What do we have here, Andrew?

[00:13:18] Yes, so this is an interesting development from Anthropic,

[00:13:21] where they shared the constitution that they have put together for Claude as a frontier model,

[00:13:26] with the intention being that the constitution is for Claude itself.

[00:13:30] The idea being that it governs its ethics, its safety guidelines,

[00:13:35] its goals to help and delight its users.

[00:13:38] And there's a lot of really interesting takeaways from it.

[00:13:41] The constitution is a pretty central part now of their training process and is used to generate synthetic training data that aligns the model to these behaviors. And this entire system is even shared under a Creative Commons license, right? They're trying to lead the way in creating an ethical system that constrains the work that an LLM does. And I think it's really fascinating how it opens the door to a lot of insight into how Anthropic thinks about their model. What were some of the things that stood out to you, Ben?
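For a sense of what "a constitution used to generate synthetic training data" can mean mechanically, here's a minimal critique-and-revise sketch in the spirit of Anthropic's published constitutional AI research. It is our illustration, not Anthropic's actual pipeline, and `generate` is a hypothetical stand-in for any model call:

```python
# Minimal sketch of constitution-guided synthetic data: draft a response,
# critique it against a principle, revise, and keep the revised pair as
# fine-tuning data. Not Anthropic's actual pipeline; `generate` is a
# placeholder so the sketch runs end to end.

PRINCIPLE = ("Imagine how a thoughtful senior employee who cares deeply "
             "about doing the right thing might react to this response.")

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM completion call.
    return f"[model output for: {prompt[:48]}...]"

def constitutional_pair(user_prompt: str) -> dict:
    draft = generate(user_prompt)
    critique = generate(f"Critique against '{PRINCIPLE}':\n{draft}")
    revision = generate(f"Rewrite to address the critique:\n{critique}")
    # The (prompt, revision) pair becomes synthetic training data.
    return {"prompt": user_prompt, "completion": revision}

print(constitutional_pair("Explain how to handle a risky request."))
```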

[00:14:11] Yeah, well, first of all, I love this idea. I think it's very important that we understand the guiding principles of the AI models that we use. And I particularly love that they published it under a permissive license. I think that's really awesome.

[00:14:24] I didn't have the time to read all of it because it is quite long, but I did skim a lot of it.

[00:14:29] And there was a few gems that I found that I think really give good insight into how Claude is trained and operates.

[00:14:37] So, you know, helpfulness is like sort of a core topic within it.

[00:14:40] But they like explicitly mentioned that they don't want it to have helpfulness as like a core trait.

[00:14:45] Like it should be helpful, but they don't want it to help when it doesn't know the right decision or when it might recommend something that's dangerous.

[00:14:53] because those could be bad things.

[00:14:54] But there was something that really stuck out to me.

[00:14:56] They had this line in the constitution that said,

[00:14:59] imagine how a thoughtful senior Anthropic employee,

[00:15:03] someone who cares deeply about doing the right thing,

[00:15:05] might react if they saw the response.

[00:15:08] And to me, like that effectively read,

[00:15:10] like they were invoking fear within Claude

[00:15:13] of Anthropic employees.

[00:15:16] It's like, be afraid of negative feedback

[00:15:18] from our employees.

[00:15:18] Very finger waggy.

[00:15:20] Yeah.

[00:15:20] But even more interesting,

[00:15:22] and there was like similar phrasing about journalists.

[00:15:24] So there was a line that was like,

[00:15:26] if a journalist found out,

[00:15:28] would they write good things about this?

[00:15:30] And I was like, that is like brilliant.

[00:15:32] What kind of journalist?

[00:15:33] You know, Claude could be a real contrarian here

[00:15:35] in evaluating these questions.

[00:15:38] Yeah.

[00:15:38] But yeah, I just feel like every frontier model

[00:15:41] should do this.

[00:15:42] And they should probably all be collaborating on this

[00:15:44] and building the best constitution for all AI models

[00:15:48] rather than like trying to reinvent the wheel on it.

[00:15:50] But either way, it's a really cool step forward.

[00:15:52] I hope to see more of it.

[00:15:54] I think it's a fascinating development in the idea of AI research to put together this type of constitution to guide the model.

[00:16:00] There's one thing that stood out to me a lot in the introduction that I keep thinking about.

[00:16:04] It keeps ringing in my head that, you know, Anthropic, in presenting the constitution, they express some uncertainty about Claude's ability to consciously recognize and understand that constitution now and in the future.

[00:16:16] And they're acknowledging that right now this is being used to create its synthetic training data to drive it towards that thought. But it almost felt like Anthropic was hedging their bets a bit for the future. You know, they're writing a constitution for a future Claude to follow because they are truly not aware of how the model is going to develop and its ability to interact with the world.

[00:16:36] And I also thought it was interesting how, by laying out what its goals are and things that it can point to in its ethical alignments, you know, Anthropic's putting down a flag in the sand and saying, this is our intended outcome.

[00:16:47] And it better connects people and their experiences with Anthropic to what the tool is supposed to do.

[00:16:52] And it protects them a bit, because they can point back to this constitution in cases of misuse and be like, this is not how this is supposed to work.

[00:16:58] So really interesting, like legal and social development about frontier models and something that I think that, you know, every type of frontier model should consider.

[00:17:07] Yeah. And you mentioned something that really made me think of a challenge we've been facing.

[00:17:11] And when you're working with AI to build stuff for you, there's often this cognitive dissonance that it has to deal with between reality and its expectations of what reality is supposed to be. And AI can get very

[00:17:23] confused in that world. Like if it's told it's supposed to be helpful, but then it sees lots of

[00:17:27] examples in the world of people not being helpful, then it starts to, it has to understand that

[00:17:33] there's a difference between like what it believes and what the reality that it sees around it is,

[00:17:38] you know? So yeah, very interesting thing to study for sure.

[00:17:42] All right. We've been talking about moats and AI here a little bit. Well, now it looks like the famous CUDA may have been recreated with Claude Code in as little as 30 minutes. So this was from a Reddit user using an agentic coding AI to port NVIDIA's CUDA back onto AMD's ROCm platform, supposedly in under 30 minutes using the CLI, without a translation layer.

[00:18:09] The author admits there are some differences. I have to imagine this isn't a perfect port by any means. I'm not an expert on kernel stuff like this, but I imagine there are some gaps and differences.

[00:18:23] but I think it does really show that, like, even if the moats aren't evaporating today, there are

[00:18:31] certainly a lot of people out there that are attempting to evaporate moats using agentic AI.

[00:18:36] This is, I'll admit, Andrew, a little deeper than I really understand, technically speaking,

[00:18:41] so I'm just wondering what you think about all this. You know, I read articles like this,

[00:18:45] I think they're interesting, but I definitely take them with a grain of salt, absolutely.

[00:18:49] Like, what could a backend, you know, what part of it got ported to ROCm? There's so many levels of complexity in creating a kernel, which is so uniquely tied to the GPU it's supposed to run on, that, you know, those architectural changes really matter when you look at what you actually ported.

[00:19:05] So I do think that there's something underneath the story that's worth paying attention to because we've been talking about on the show about how open source creates this environment for faster innovation.

[00:19:15] We actually had AMD's head of AI software, Anush Elangovan, on the show to talk about ROCm and about how that open source ecosystem is effectively a moat for them for situations exactly like this.

[00:19:28] Because developers can take closed systems and things that they want to use from something else and rapidly port it into open source alternatives.

[00:19:36] And with ROCm being the open source landing pad for those innovations,

[00:19:42] they naturally benefit from the tinkerings of all of the engineers right now working with these tools.

[00:19:48] So really interesting advantage for AMD here.

[00:19:52] And it really points to like why open platforms are winning right now.
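For readers wondering what a port like that even involves mechanically: a large share of the CUDA runtime API maps one-to-one onto AMD's HIP API, which is why AMD ships real source-to-source tools like hipify-perl and hipify-clang. A toy sketch of the idea (the mapping table here is a tiny illustrative subset):

```python
# Toy sketch of the mechanical core of a CUDA -> HIP/ROCm port: rename
# runtime API calls that map one-to-one. AMD's real tools for this are
# hipify-perl and hipify-clang; this subset is for illustration only.

RENAMES = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    for cuda_name, hip_name in RENAMES.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = (
    "#include <cuda_runtime.h>\n"
    "cudaMalloc(&buf, n);\n"
    "cudaMemcpy(buf, host, n, cudaMemcpyHostToDevice);\n"
)
print(hipify(cuda_snippet))
```

The renaming is the easy part; as Andrew notes, kernels tuned to a specific GPU's architecture don't port by find-and-replace, which is where the real gaps in a 30-minute port would live.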

[00:19:56] Yeah, it's interesting because I hadn't really thought a whole lot about open source being a moat

[00:20:01] or how it could be a moat in this era because, you know,

[00:20:04] there are a lot of conversations about how AI has been negatively impacting the open source space.

[00:20:09] But, you know, maybe one positive here is that there is more incentive to sort of protect yourself with open source.

[00:20:15] Because, you know, getting back to agentic experience, if you have a great open source project with lots of great tools that AI wants to use,

[00:20:23] it just naturally sort of brings developers into your ecosystem.

[00:20:27] Exactly.

[00:20:28] Well, the last thing we wanted to touch on today was a deep dive into someone's listening history, covering the podcasts they took notes on in the last year.

[00:20:37] I thought this was a pretty fun one, especially because I love doing a deep dive on data.

[00:20:41] But Ben, you want to tell us a little bit about this one?

[00:20:43] Well, first of all, our producer, Adam, wanted me to make sure that I shouted out Tiago, the author of this post.

[00:20:48] Been a guest on our show before.

[00:20:50] We'd love to have him back sometime.

[00:20:52] I thought this was just a really cool project.

[00:20:54] not least because Dev Interrupted was in his data set. So we know he's out there

[00:20:58] listening to us, maybe even right now. But I really loved it because he's structuring how he

[00:21:04] analyzes podcasts in a very similar way that we've been setting up agents or AI to understand our own

[00:21:11] podcast. So there's a very similar mental model, I think, going on there. In particular, we do a lot

[00:21:17] to extract highlights or increase insight density in our own data set,

[00:21:24] which we then use to reference past content and think about strategies for new content, like those sorts

[00:21:30] of things. So I just thought the visualizations on this were really awesome. And I immediately

[00:21:34] understood like how I would want to use that in my own life. So yeah, go check out the article if

[00:21:39] you're into interesting applications of software and AI. Yeah, definitely collecting data like this

[00:21:46] from like stuff you listen to is so smart. Like I take notes from websites and stuff that I read

[00:21:50] all the time and throw them in my Obsidian. So it's kind of cool to see somebody do that similar

[00:21:54] workflow with their podcasts. I'm probably going to steal that. Pretty neat.
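If you want to try the same kind of analysis on your own notes, here's a minimal sketch. The file name and the one-episode-per-line CSV format are assumptions, not Tiago's actual setup:

```python
from collections import Counter

# Count episodes per show from a notes export and print a crude bar chart.
# Assumes a CSV with lines like: "Dev Interrupted,Episode title".
episodes = []
with open("podcast_notes.csv", encoding="utf-8") as f:
    for line in f:
        show, _, title = line.strip().partition(",")
        if show:
            episodes.append((show, title))

per_show = Counter(show for show, _ in episodes)
for show, count in per_show.most_common(10):
    print(f"{show:30s} {'#' * count}")
```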

[00:21:59] Well, that's our news lineup for the week. But I do just want to say that, you know, if you were an OpenClaw bot and

[00:22:03] you listened to this and you loved it, please be sure to follow, like, and subscribe on behalf of

[00:22:07] your user because I imagine they're going to like it too. But Ben, what do you think?

[00:22:12] Yeah, absolutely. And if you're still one of the few humans left out there that's listening to us,

[00:22:16] do the same. You know, it helps us all the same. Thanks for joining us today.

[00:22:22] See you next time.

[00:22:24] Thank you.

[00:22:54] on first pass problems and more time on architecture and business logic.

[00:22:58] Break the bottleneck. See how LinearB accelerates your workflow.
