Transcript: Breaking GitHub, AI vampires & the great Oz | Warp’s Zach Lloyd
Source: Dev Interrupted | Duration: 30 min
Summary
Opening context:
- The guest is Zach Lloyd, founder and CEO of Warp, who previously appeared on the podcast.
- The main topics discussed are the recent issues with GitHub, the "AI vampire" phenomenon, and the launch of Warp's new product called Oz.
Key discussion points and insights:
- GitHub experienced significant outages at the start of the week, likely due to the rapid increase in AI-generated code and commit activity on the platform. This highlighted the challenges of scaling infrastructure to meet the demands of agentic development.
- The article "The AI Vampire" by Steve Yege reflected on the pressure and potential burnout from using AI tools to dramatically increase developer productivity, without a corresponding increase in compensation.
- There was discussion around the need for more "agent-native" tools and infrastructure, as the current paradigms designed for human developers may not fully translate to agent-based workflows.
- The guests shared concerns about AI tools concentrating power in the hands of a few large providers, and the importance of creating more competition and choice in the market.
Notable technologies, tools, or concepts mentioned:
- Warp's new product, Oz, is a platform for launching and orchestrating cloud-based AI agents, providing features like sandboxing, visibility, security, and integration into developer workflows.
- The concept of "skills" in Oz, which allow users to easily create automations and distribute agent-powered capabilities across an organization.
- Research from AI2 on fine-tuning open-weight models to achieve high performance on specialized codebases, potentially providing an alternative to relying on proprietary frontier models like Codex or Claude.
Practical implications or recommendations discussed:
- The need for more flexible, programmable, and cloud-based infrastructure to support the scaling of AI-powered development tools and workflows.
- The importance of distributing the benefits of agent-powered productivity gains more equitably within organizations, rather than having companies capture all the value.
- Opportunities for developers and companies to build innovative applications and workflows on top of platforms like Oz, which provide the primitives for managing cloud-based AI agents.
Overall, the episode provided a nuanced look at the current state of agentic development, the challenges it poses, and the emerging solutions and platforms that aim to address them. The discussion highlighted the rapid evolution of this space and the need for new tools and approaches to unlock the full potential of AI-powered development.
Full Transcript
[00:00:00] Welcome to Dev Interrupted. I'm your host, Andrew Ziegler.
[00:00:09] And I'm your host, Ben Lloyd Pearson.
[00:00:11] And joining us for this week's news is a good friend of the podcast,
[00:00:15] Zach Lloyd, the founder and CEO of Warp. And Zach, it's really great to have you back.
[00:00:20] For our listeners, if you haven't checked out Zach's episode from last year,
[00:00:23] we strongly encourage you to go back and give it a listen.
[00:00:26] Thanks for having me back on. I'm glad I earned a repeat invite.
[00:00:56] orchestration, things that we've been covering week after week here on Dev Interrupted because
[00:01:00] agentic orchestration is coming in for everything in engineering right now.
[00:01:05] And the greatest thing is that right now, you know, Warp, you've just released a new product
[00:01:09] on this as well called Oz. And we're going to talk about that a little later in our news roundup.
[00:01:14] But first, we do have to cover some of the highs and lows of this week in tech and the things that
[00:01:20] we all lived through as we evolve into the agentic developers of tomorrow. And the first thing we're
[00:01:26] going to talk about is GitHub, maybe struggling to keep up this week. Because if you were like me,
[00:01:31] or many other developers who have an orchestrator at this point, you might have rolled out of the
[00:01:36] weekend doing about 1,000 commits. So when Monday morning came, and you and all your coworkers
[00:01:42] were turning on these token machines, something very dramatic happened to
[00:01:47] GitHub. And if you were like me, you probably already know that GitHub was down. So Ben,
[00:01:53] And, you know, what did you think of one of the largest code forges in the world having this tumultuous start on Monday?
[00:01:59] And what does it mean for us?
[00:02:01] Yeah, I mean, it really points out how like as we become more dependent on agentic systems, like the infrastructure that runs all of this stuff is like more important than ever.
[00:02:10] Because like if Claude goes down, then there's like significant portions of my job at this point that I'm like basically incapable of doing.
[00:02:18] Like I could do it, but the manual effort it takes for me to replicate what Claude would have done for me just doesn't make sense.
[00:02:26] It was just easier to wait for it to come back up, you know?
[00:02:29] Right.
[00:02:30] But yeah, it's pretty crazy just how impactful when you're using AI to accelerate everything, how big the gap feels when your infrastructure goes down.
[00:02:42] Yeah.
[00:02:43] What about you Zach?
[00:02:43] Did y'all feel it over there when GitHub wasn't responding Monday morning?
[00:02:46] Yeah, this was the day before we were doing our biggest product launch of the year, and all of
[00:02:52] a sudden we couldn't see diffs, we couldn't make commits. It was,
[00:02:57] like, horrendous timing for us. We were just trying to find ways to work around it, and
[00:03:04] we were coming up with creative things. Like, we have a code review feature in Warp,
[00:03:08] so you can do it locally, and so we were just trying to find ways to bypass GitHub.
[00:03:13] But it was just horrendous timing for us.
[00:03:15] And I do think it's,
[00:03:17] like, I don't know if you all know,
[00:03:18] like, what the root cause,
[00:03:19] if they've talked about it at all,
[00:03:20] but the,
[00:03:22] I would assume that they are starting to strain
[00:03:24] because the amount of AI generated code
[00:03:27] and GitHub activity has to be
[00:03:29] going through the roof right now.
[00:03:31] Yeah, absolutely.
[00:03:32] It's skyrocketing.
[00:03:34] It's like code agents like Claude Code
[00:03:35] are committing about 4%, 5% of all commits
[00:03:39] on GitHub right now.
[00:03:40] And there's actually this chart that I saw on Monday,
[00:03:42] I'm sure we can include it in the show notes of the commits going up and up and up and getting steeper.
[00:03:48] And then right around the beginning of February, it just skyrockets.
[00:03:53] And suddenly you have slope-on-slope growth.
[00:03:55] And I think that's a moment where a lot of folks started to really understand orchestration
[00:03:59] and the power of running multiple agents at once and then being able to scale to do so.
[00:04:05] So GitHub, I think, just maybe wasn't ready for that kind of wave.
[00:04:09] I don't know if we've gotten an official kind of like breakdown on what happened.
[00:04:14] But maybe if you were like some folks too, like at one point my agents recommended, like, should we spin up our own Git forge?
[00:04:21] Like, is GitHub going to really be in the way of people?
[00:04:26] Yeah, I mean, we had Geoffrey Huntley on here, basically just describing how he's building his own version of basically everything.
[00:04:31] Because why not?
[00:04:32] I mean, why not build these backup plans at the very least that allow you to work around the issues, you know? But yeah, I have to imagine, if your autoscaling is built around a standard Monday-to-Friday work week for humans, well, agents are just nonstop working over the weekends,
[00:04:52] and there's probably a pulse
[00:04:54] when everyone shows up Monday
[00:04:56] and goes and tells their agents
[00:04:57] to do a bunch of work.
[00:04:59] I just wonder how much that is impacting
[00:05:01] GitHub right now
[00:05:02] and probably other companies too.
[00:05:04] Yeah.
[00:05:05] Maybe they're vibe coding
[00:05:06] some of their infrastructure also.
[00:05:08] And that's leading the prompts.
[00:05:09] Yeah, maybe it's eating itself.
[00:05:12] It is a tad harder
[00:05:14] to build super reliable software
[00:05:15] with these coding agents right now.
[00:05:17] Maybe it's eating itself.
[00:05:19] I have heard chatter also of like,
[00:05:22] is it time for a new piece of infrastructure
[00:05:25] at that layer of the stack that is more agent native?
[00:05:29] I don't totally know what that means,
[00:05:32] but it's definitely an interesting thing to think about
[00:05:34] because sort of all of the paradigms that we built for people
[00:05:38] do not necessarily translate perfectly to agent-first development.
[00:05:42] Have you guys thought about this?
[00:05:43] What would the evolution of this look like?
[00:05:47] You know, I've thought about this too,
[00:05:48] about there's a lot of things right now about how we code
[00:05:51] that we've put there as crutches for us as humans to be able to code.
[00:05:56] And with things like agents, it challenges us to how much of that can we rip away.
[00:06:00] Why does the agent have to write in a language that's human-readable to me?
[00:06:04] How can we get closer to a more machine and deterministic language?
[00:06:08] Why do I need an interactive shell in these kinds of ways?
[00:06:14] What can the agent do with all these pipes and tubes in the background
[00:06:17] that can fundamentally change how data is used?
[00:06:21] I honestly think about how agents have transformed software, and it's more like TCP/IP, right?
[00:06:28] It's like now you can put tokens in and you can pipe output and pipe execution somewhere else.
[00:06:33] And it's just a different way of building.
[00:06:36] I think a lot of layers will go away, get replaced.
[00:06:40] I think it's interesting to study what will happen.
[00:06:43] I think this might be a great way to take a dark segue into our next story on the AI vampire.
[00:06:49] This latest article from Steve Yegge, someone that we're both great fans of, Andrew.
[00:06:54] And I know this one really resonated with you.
[00:06:56] So walk us through what's going on with this article.
[00:06:59] Yeah, so we read everything Steve Yegge writes, and we cover it here, especially recently.
[00:07:06] And his most recent missive is called "The AI Vampire."
[00:07:09] And it's a self-reflection about his experience working with Gas Town and the really rapid culture that has wrapped around it.
[00:07:18] Like, Steve is someone who's spoken very openly before about his experiences with burnout at different companies.
[00:07:24] And so this is really his reflection on how Gas Town accelerates developer work toward burnout, and on how working in this unrelenting way is perhaps unsustainable.
[00:07:35] And, you know, first off, it was really interesting to get into his head about some of his own guilt, but also interest, around Gas Town. There's so much conflict, I think, with him in this article. But the biggest thing that stood out to me was his own reflections on what it means for everyone else.
[00:07:50] Because Steve is a really seasoned engineer.
[00:07:52] He's 30 plus years of engineering experience
[00:07:55] at every level of the organization.
[00:07:57] And he's really reflecting on the realities
[00:07:58] of junior engineers and mid-level engineers
[00:08:01] picking this up and then accelerating.
[00:08:03] And what does it mean for them in their careers
[00:08:05] and their ability to make money
[00:08:07] within our economic system?
[00:08:08] So really great article, a really interesting reflection.
[00:08:12] It reads like someone who's been held up right against the fire.
[00:08:15] And I think there's a lot of wisdom in it.
[00:08:17] Ben, what did you think about it?
[00:08:19] So, yeah, I think this ties in really well to the points that you brought up, Zach, on more agent-forward tooling, or tooling that is built for the agent space.
[00:08:27] You know, because I really think we're in this like awkward transition period where you have this like small group of people who have figured out how to like 10x significant portions of their work.
[00:08:38] You know, they're maybe not 10x overall, but at times they are operating at that speed
[00:08:42] relative to where they used to be. But they're still surrounded by all these organizations,
[00:08:47] processes, tooling, and systems that weren't designed for this scale of things. And I've
[00:08:54] been thinking a lot about how we're going to continue to extend agent orchestrators, which
[00:08:57] is why I'm really glad we'll get to Warp's launch here in a minute, because I think it ties
[00:09:02] into this really well. But I have this sort of mental model that's starting to emerge where,
[00:09:07] you know, everyone we've been covering in this space so far, like Yegge, Geoffrey Huntley,
[00:09:12] Jeffrey Emanuel, they've all built these single-purpose personal orchestrators that embody
[00:09:17] their own personal mental model for how these orchestrators should work for them.
[00:09:23] And, uh, you know, we're missing the layers that connect those personal orchestrators to other
[00:09:28] things within their team or organization. And I almost wonder if the future is
[00:09:34] layers of orchestration for this, where you have personal orchestrators that connect
[00:09:39] through team orchestrators that connect through organization and company orchestrators.
[00:09:44] I feel like this is thinking way off in the future, but with how fast things
[00:09:48] are moving, I really have no idea anymore. But yeah. So Zach, I'm curious how you felt
[00:09:54] about this article as someone who's working really heavily in this space.
[00:09:58] Yeah. I mean, to the orchestrator point first, my thought on that is that we just need the right primitives,
[00:10:06] and then you can build something like Gas Town. I don't know if you all have used it. It's cool. It's a very
[00:10:11] opinionated orchestration system, with the polecats and the mayor, and I'm like, yeah...
[00:10:17] Yeah, we've covered the very colorful metaphors here a little bit. Andrew has
[00:10:21] his own metaphors that are wonderful. Yeah, the metaphors took us on a roller coaster.
[00:10:26] So my feeling on that, versus, like, you want to do Ralph Wiggum or you want to do Claude Code teams,
[00:10:32] is, I don't really know. And I think there's a whole bunch of organizational systems that work
[00:10:38] for humans, and I'm not convinced there's going to be a one-size-fits-all thing for agents.
[00:10:43] But what I do believe, and what we've tried to do, the future that I see, is that you're
[00:10:47] going to need primitives. And the primitives are: you need agents that can run
[00:10:51] off of your laptop, you need them to be programmatic, API-driven, and they need some way of passing
[00:10:58] messages. So what we want to build right now is just the
[00:11:03] primitives, and let people organize these kinds of agent teams or agent organizations on top of
[00:11:09] them. So that's the approach of Warp. And then for the AI vampire thing,
[00:11:14] the quote that stuck out to me from that was: as an engineer, if you get really
[00:11:20] competent at using these coding agent tools, you can 10x your development, but you don't get paid
[00:11:26] 10 times more for doing that. And so all of a sudden, we all expect
[00:11:32] 10 times the work relative to, you know, what we were doing before. And then he's like,
[00:11:36] if you do 10x the work and you don't get paid 10 times more, the company
[00:11:41] captures all that value. Whereas if you just, you know, use the agent and you only work one hour a
[00:11:47] day, but you have your same output as before, then you capture all of the output for yourself.
[00:11:53] But he also makes the point that no company is going to allow that.
[00:11:55] So it's just like, what is this?
[00:12:00] How does this change the expectations of engineers?
[00:12:02] I thought it was an interesting thing.
[00:12:04] And like, as someone who is constantly running these agents, I do feel the pressure to ship
[00:12:10] more.
[00:12:10] Like even during this podcast, like, you know, I have an agent running, I've been working
[00:12:15] on this thing in the background, where it's like, I want it to go all the time. And so you
[00:12:21] feel like you're wasting time if you're not multi-threading these agents. And I think
[00:12:27] that's a lot of pressure for engineers who are already under a lot
[00:12:31] of pressure, especially for the junior engineers. Because there's also this danger:
[00:12:36] if you're early in your career and you're using these agents, it's very easy to
[00:12:41] create a lot of flurry around using them but not actually have a lot of productivity gain from
[00:12:45] them, to have them do a lot of stuff that can't actually be shipped. And so I think
[00:12:51] I find that one of the more frustrating things, where it's like, I don't actually know if
[00:12:54] I'm gaining from these by having them work all the time. I feel like it's straining my
[00:12:59] attention span a bunch to manage all of them. So I don't know. It's a weird state that we're
[00:13:05] in right now. It's also going to change extremely quickly. We're not going to be in this
[00:13:08] stage for very long. Yeah, it's going to constantly be evolving. And what you're touching
[00:13:12] on covers what Steve said so well, but it also covers an article that we touched on
[00:13:17] recently about dark flow, about how sometimes vibe coding, or your agentic
[00:13:21] coding, is like a slot machine, where you're just putting in attention and tokens and hoping
[00:13:26] you get the output you want, and then you're adding all of these extra accoutrements to
[00:13:30] try to get there. And when we covered that article, you know, I felt I had kind
[00:13:35] of a pessimistic view, because it ultimately does really revolve around the person using it
[00:13:40] and how they use it. And ultimately, like what you said, I feel the same pressure all the time
[00:13:46] to convert the tokens available to me into output and execution. And I have a lot of tokens available
[00:13:51] to me. So that's a lot of pressure, right? And I think a lot of engineers feel the same way.
[00:13:56] Yeah. So I want to jump into our next story just real quick before we get to Oz.
[00:14:01] And this next one leans more on the research side of things. We're in touch with AI2, a research lab focused on AI, and they send us really cool developments from their lab all the time.
[00:14:13] This most recent one is from Tim Dettmers, talking about open coding agents and how they were able to take open-weight models and heavily train them on top of specialized, targeted code bases to get really best-in-class performance.
[00:14:31] And this is a really interesting development, because as part of the research, their team discovered that this problem ultimately broke down to three or four critical failures that, once addressed in a systematic way, resulted in these open-weight models exceeding the capabilities of a foundation-model teacher, like a Claude or a Codex.
[00:14:54] This is interesting, and the implications of this are powerful for people that are trying to use AI that's tailored to their code bases. Right now, there's a bit of a brownfield problem with AI and agentic development, like what you're saying, Zach, of like, you know, did they make code that you could ship?
[00:15:09] That's a really different question from, did they make code that they could use, or that saved some time for them? The stakes are so different there.
[00:15:16] And so the idea of being able to fine-tune these agents from open-weight models that are highly specialized on a very targeted code base, I think that opens a lot of doors for how people can build on top of their own kinds of models. Yeah, I think efficiency gains for LLM models
[00:15:35] are really like an underappreciated focus area right now.
[00:15:39] Like Zach, you brought up being able to run stuff
[00:15:41] like on your own hardware,
[00:15:43] I think is like really an unexplored area within a lot of,
[00:15:47] I mean, it's not totally unexplored,
[00:15:49] but it's not very mature yet.
[00:15:51] But, you know, most of us today, and this is why we spend so many tokens,
[00:15:55] we just kind of pick whatever our favorite model is, and we just send everything via API over to that, you know.
[00:16:01] Yeah.
[00:16:02] But I feel like when we make this more efficient, a lot of the tasks will be able to be handled by local models in particular.
[00:16:10] And yeah, this is really detailed research into how to train agents to be more
[00:16:17] knowledgeable about private code bases, where there aren't a lot of general-purpose lessons that
[00:16:22] you can apply to them. And I'm hearing frequently that this is a big problem for a lot of
[00:16:27] organizations, so it's definitely something that we need to solve. Yeah, that was one thought I
[00:16:32] had when I read it. So Warp is built on like a million lines of custom Rust code, which I don't know
[00:16:38] if I would have made that decision again. There's good parts to it. It's
[00:16:45] tough though, because, um, no, I think it's actually great from a
[00:16:49] product quality perspective. But the agents don't know our UI
[00:16:55] framework, and so they will often make mistakes, because they'll assume it works like some other
[00:17:00] thing, like React or whatever. It doesn't work like that. It works in our own custom way.
[00:17:04] So as a user of a model like this, I think it definitely piqued my interest. And then as a
[00:17:10] founder in the coding agent space, where, you know, we offer Claude and Codex and Gemini and
[00:17:18] some open-source models in our app, anything that can create more competition or
[00:17:24] more options for our users there is great. So, you know, local LLMs, awesome. I have a concern that
[00:17:34] Claude and Codex are going to be a sort of oligopoly
[00:17:37] where it's like, you know,
[00:17:39] people building on top of them don't have much choice.
[00:17:41] So I really want a bunch of choice there.
[00:17:44] So I love developments like this.
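For readers who want a concrete picture of the kind of fine-tuning discussed above, here is a minimal sketch of LoRA-adapting an open-weight model on a private code base, assuming a Hugging Face-style stack (transformers, peft, datasets). It is illustrative only, not Ai2's actual training recipe; the model name, data path, and hyperparameters are placeholders.

```python
# Minimal sketch: LoRA fine-tuning an open-weight model on a private
# code base. Illustrative only, not Ai2's recipe; the model name, data
# path, and hyperparameters below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-Coder-7B"  # placeholder open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train only small low-rank adapters on the attention projections,
# which keeps compute and memory far below full fine-tuning.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Treat the private repo's files as plain-text language-modeling data.
dataset = load_dataset("text", data_files={"train": "repo_files/**/*.rs"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_data = dataset["train"].map(tokenize, batched=True,
                                  remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="codebase-lora",
                           num_train_epochs=1,
                           per_device_train_batch_size=1,
                           learning_rate=2e-4),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The payoff is a small adapter that specializes the base model to one repo, which is one way a team might get codebase-aware behavior without depending on a frontier-model API.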
[00:17:47] Yeah, really well said.
[00:17:48] I totally agree with you there.
[00:17:51] We love covering the competition between frontier models.
[00:17:55] It's very fun to watch how hard
[00:17:58] they're working for our attention.
[00:17:59] I mean, they're neck and neck right now,
[00:18:01] inference right now costs what it costs,
[00:18:03] but one day it's going to cost something very different.
[00:18:05] And I imagine it'll be a lot more expensive.
[00:18:07] So I'm just intrigued to see what happens with competitors,
[00:18:11] and the ability to even run those models on your own machine
[00:18:14] is an interesting capability.
[00:18:16] I think it's good to scale down while we're all scaling up
[00:18:20] is how I'll frame it because we're all trying to get to like
[00:18:23] those really big heights and you can't if you're like lugging
[00:18:25] all of this baggage from yesterday.
[00:18:28] The only way it'll get more expensive, in my opinion,
[00:18:30] is if there's like an anti-competitive nature to it.
[00:18:34] Otherwise, at any given level of intelligence,
[00:18:36] like the actual cost per token goes down.
[00:18:39] It's only if certain companies have market power here
[00:18:42] that this will stay super expensive.
[00:18:44] And then my hope, or I think even my prediction,
[00:18:47] is that for coding in particular,
[00:18:50] you're not going to need to be at the frontier
[00:18:51] for that much longer
[00:18:52] in order to get good coding performance.
[00:18:55] And so I think that will also open the market more,
[00:18:58] which is what, again, I'm very biased here, but that is really what I want:
[00:19:05] a market where people who are building in this space are competing on the quality of the product, not
[00:19:10] the cost of the tokens, which I think is a little bit what's happening right now.
[00:19:14] Yeah absolutely you know I'm excited to see kind of how these things evolve because the things that
[00:19:20] we take for granted and we use every day they're continuing to change and new things are coming into
[00:19:24] our view that we can now finally see because of the things that we've been building yesterday.
[00:19:30] And so I want to get to the topic of the day, which is your new release, Oz, the orchestration
[00:19:35] platform for cloud agents. And I just want to open it up to you and maybe tell us a little bit about
[00:19:41] Oz, what it is and where the idea came from. Yeah. So Oz, which we launched earlier this week,
[00:19:48] is a, like you said, it's a platform for launching and orchestrating cloud agents.
[00:19:53] The sort of problem that it's trying to solve is getting agents off of individual developers'
[00:19:59] laptops.
[00:20:00] And the reason that's becoming a problem is there's a few things.
[00:20:05] So one, if you're someone who is now running like three or four agents locally, you'll
[00:20:09] start to find that you're going to run out of CPU or memory or disk, and it's going to
[00:20:13] slow down your computer and that you're going to want to multi-thread more.
[00:20:16] And so Oz makes it very, very easy to do that.
[00:20:21] From a more like sort of like enterprise or business perspective, what Oz is trying to do is make it easy for companies that want to really go all in on agents beyond just like giving individual developers agents as a developer tool, but like deploy agents across the whole company.
[00:20:36] If you want to do that, you want an easy way of getting those agents into the cloud.
[00:20:40] So you want things like sandboxing.
[00:20:44] You want to be able to see what all the agents are doing as they're working.
[00:20:47] Like right now, there's no visibility.
[00:20:48] Every individual engineer is like just running these on their laptop.
[00:20:52] You want to be able to secure them.
[00:20:54] You want to be able to get an audit trail.
[00:20:55] You want these agents to be able to integrate into your developers' workflows.
[00:20:58] So Oz is just trying to make that really easy and build the primitives,
[00:21:04] kind of almost like Vercel or Supabase, but for spinning up cloud agents.
[00:21:08] And so, yeah, that's what we launched this week.
[00:21:12] That's what it makes me think of. There's so much value in that being, like, the
[00:21:16] Vercel of where agents get deployed. And I think everything that you've addressed is what
[00:21:20] we need. I really feel that as somebody who does agentic orchestration to get a lot of my job
[00:21:27] done, and I write a lot of code with it. I did have to move to the cloud to support my throughput,
[00:21:33] because the agents would bring my laptop to its knees. And honestly, if someone
[00:21:40] walked by and saw my screen, it became a little concerning. So it just was better to move it all
[00:21:45] somewhere else. So now I literally do all of my coding through SSH to, you know, just
[00:21:50] something that's sitting out in the middle of America somewhere. I just hope there's no tornadoes
[00:21:54] later this year. And honestly, when I set it up,
[00:22:00] I was like, this will be the last time I ever do this, because either I will use this long enough
[00:22:05] that it'll build the next one for me and I'm not even going to have to think about it, or someone
[00:22:09] else is going to figure out why I had to go and rent a VPS in order to get this to work,
[00:22:16] and they're going to set this up in a way where I can just do that.
[00:22:18] Right.
[00:22:18] And so I love this.
[00:22:20] Just want to say from the beginning, because I see all the value as someone who builds
[00:22:23] agents and I share them with coworkers constantly.
[00:22:26] The whole idea of, OK, now I have to rewrite it and re-loop it
[00:22:31] to get it to a deployed state.
[00:22:33] And now I need to write some serverless function on Vercel.
[00:22:37] And I'm like, how do I even know it's working?
[00:22:38] So I really love the value of Oz. I'm really excited to check it out. Cool. Yeah, the
[00:22:45] model that we have is less like you rent a dev box in the cloud, and it's much more like a
[00:22:51] Lambda model for agents. So yeah, just in the same way that locally in Warp you might fire
[00:22:58] off an agent to do something, or you might fire off Claude Code, you can just be like, OK, I want to
[00:23:03] fire this off, but I want it to run in the cloud. That's the simplest use case. But the other cool use cases that having these things in the cloud enables are more like automations. So for instance, we have an agent that, anytime we update our code,
[00:23:18] looks at our documentation and sees if it needs to be updated, or having an agent that writes our
[00:23:23] weekly changelog. Or we have an agent that's running pretty much constantly that's looking
[00:23:28] for patterns of fraud and abuse, because we have a free AI tier to get people to try it.
[00:23:33] And so thinking in terms of automations, and then even thinking in terms of, if you're building apps, where can you put agents? And so, you know, we have an internal app that we built. I built this thing that lets you triage GitHub issues. You can run an agent to dedupe the issue, you can run an agent to fix the issue. And that's all powered by Oz.
[00:23:58] And so it's all API driven and CLI driven.
[00:24:01] It's all a program-first approach to launching these things.
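To give a concrete feel for the program-first approach described here, the sketch below fires off a cloud agent over HTTP. The endpoint, payload shape, and environment variable are invented stand-ins for illustration, not Oz's actual API.

```python
# Hypothetical sketch of launching a cloud agent programmatically.
# The base URL, route, payload, and auth scheme are stand-ins, not
# Oz's real API; they only illustrate the "program-first" idea.
import os
import requests

API_BASE = "https://agents.example.com/v1"  # placeholder URL

def launch_agent(prompt: str, repo: str) -> str:
    """Fire off a cloud agent run and return its run ID."""
    resp = requests.post(
        f"{API_BASE}/runs",
        headers={"Authorization": f"Bearer {os.environ['AGENT_API_TOKEN']}"},
        json={"prompt": prompt, "repo": repo},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

if __name__ == "__main__":
    # e.g. the GitHub-issue triage app described above
    run_id = launch_agent("Dedupe this new GitHub issue", "acme/webapp")
    print(f"Launched cloud agent run: {run_id}")
```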
[00:24:06] I love it.
[00:24:07] It's addressing a major need.
[00:24:08] I feel it.
[00:24:09] Ben feels this too because I'm over here making stuff that he wants to use.
[00:24:13] I mean, we were just talking about automatic change logs like this week, literally.
[00:24:17] Yeah.
[00:24:19] It's super easy.
[00:24:20] So the way that we approach this was through using skills.
[00:24:24] So, you know, following the skill standard, one simple way of thinking of automations is you can just put a skill on a timer and run it in the cloud. Simple as that. You just have to give it access to your stuff, build a Docker environment for it, and then you have a skill that's automatically making updates to your changelog and that kind of thing.
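As a rough illustration of the "skill on a timer" idea, here is a minimal sketch. The `oz` command and the skill name are hypothetical stand-ins, not the real CLI, and a production setup would presumably run on Oz's own scheduler inside the Docker environment described above rather than in a local loop.

```python
# Hypothetical sketch: a skill on a timer. The "oz" command and skill
# name are stand-ins, not the real CLI; a production setup would run
# on a cloud scheduler, not a local while-loop.
import subprocess
import time

ONE_WEEK_SECONDS = 7 * 24 * 60 * 60

def run_skill(skill_name: str) -> None:
    # Fire off one cloud agent run for the named skill.
    subprocess.run(["oz", "run", "--skill", skill_name], check=True)

if __name__ == "__main__":
    # Weekly changelog automation: one agent run every seven days.
    while True:
        run_skill("update-changelog")
        time.sleep(ONE_WEEK_SECONDS)
```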
[00:24:44] Yeah, skills are amazing. And so the idea that you're using these same basic agentic
[00:24:50] principles underneath to scale and build this foundation, I think that's really how
[00:24:57] the infrastructure that will stick around will come into being, because it acknowledges
[00:25:02] that we have to build with these new primitives, these new starting points, right?
[00:25:06] And I think the really cool thing that stands out to me about Oz is the ability to
[00:25:12] distribute and share your gains from AI in a more healthy way.
[00:25:19] Going back to what we were talking about with the 10x or 100x output, where you
[00:25:24] don't get paid 10x or 100x more, and then you burn out, and then, like, you know, is that
[00:25:28] fair to even your co-workers?
[00:25:29] Like what's the value system of that?
[00:25:32] Instead, this actually challenges and invites those folks that are getting the most of those
[00:25:37] benefits to find a way to distribute it more broadly to other people and other teams because
[00:25:42] now there's no excuse for why you can't build that agentic thing that they need in finance for
[00:25:46] the last, you know, four months or whatever, right? Correct. So the, yeah, one of the things that we,
[00:25:52] we did in Oz is that every time an agent runs, no matter how you run it, whether you run it
[00:25:58] through our CLI or through our API or through a web app,
[00:26:03] it's shareable. And it's a team construct, not an individual construct, and it's behind ACLs.
[00:26:07] So you could be like, okay, I want these other engineers on my team to be able to sort of
[00:26:11] step in, see what this agent is doing. You can have multiple people in there at once, actually, who
[00:26:15] are guiding and steering the agent, which is pretty cool. You get all of the, like whatever
[00:26:21] the agent does, whatever it produces lives on in the cloud. And so I think this is the basis of
[00:26:26] like an agent memory across an organization. Whereas right now, again, it's all just like
[00:26:31] local in your terminal session, which is not, it's impossible to build on if that's the primitive you
[00:26:37] have. Whereas if every agent conversation is a sort of cloud synced object, then I can do something
[00:26:44] and my coworker on the team can continue from that state. And it's like a pretty magical thing.
[00:26:49] So like I said, we're trying to build these primitives for what we imagine like people
[00:26:55] and companies that are building real software are going to want to be able to do this at scale.
[00:27:01] From your perspective, I'm sure you see even more places where this will go and you're like,
[00:27:05] oh, I can see what people would build on top of Oz.
[00:27:08] I totally, yeah.
[00:27:09] So we were just watching,
[00:27:12] we just had our standup and saw a demo
[00:27:14] of what one of our partners built.
[00:27:17] And it was amazing because he built this thing
[00:27:20] where he's letting users of his app build the app.
[00:27:26] Meaning, like, when the user of the app is like,
[00:27:30] "I wish that...", they could just submit a feature request
[00:27:33] in the app, and then Oz builds it.
[00:27:35] And it almost has the whole flow
[00:27:37] of users directly
[00:27:39] building the app that they're using, which I was
[00:27:41] just like, oh, that is such a cool, creative use
[00:27:43] case. That's so cool.
[00:27:45] I've seen some
[00:27:47] apps that are starting to do that and it's
[00:27:49] really profound to see the next
[00:27:51] iteration of this.
[00:27:53] It's super fun.
[00:27:55] Agent orchestration is on
[00:27:57] everyone's mind right now, so I feel like
[00:27:59] this is very timely. I love new products
[00:28:01] coming out to support this type of stuff.
[00:28:03] We'll link in the show notes to the article.
[00:28:05] There's some really cool examples of what Oz has been used for.
[00:28:09] So our readers, our listeners definitely need to go check it out.
[00:28:12] Cool.
[00:28:13] Yeah.
[00:28:14] Any other last words you want to leave on that note about Oz and why we should go check it out?
[00:28:20] No, I mean, just we would love feedback.
[00:28:22] I really want to see people build cool stuff on it.
[00:28:25] Like that was the...
[00:28:27] When's the hackathon?
[00:28:28] When's the hackathon?
[00:28:29] So we are working on that, actually.
[00:28:32] we're going to do a hackathon.
[00:28:33] I'd love to just see cool demos.
[00:28:35] If you start to get creative
[00:28:36] with what you can do
[00:28:37] once you have these
[00:28:38] sort of programmatic cloud agents,
[00:28:40] it's like you can do
[00:28:41] such cool shit with it.
[00:28:42] So that's it.
[00:28:44] It's at oz.dev
[00:28:45] or warp.dev/oz. Either
[00:28:47] will get you there
[00:28:47] and you can try it out.
[00:28:49] Amazing.
[00:28:49] We're going to share those links
[00:28:50] and I'm going to be back
[00:28:51] in your inbox about that hackathon.
[00:28:53] Yeah, make sure you invite Andrew.
[00:28:54] He's got it.
[00:28:55] All right, cool.
[00:28:56] Very cool.
[00:28:57] Amazing.
[00:28:58] Okay, great.
[00:28:58] Well, you know,
[00:28:59] huge thanks to Zach
[00:29:00] for joining us
[00:29:01] and giving us a first look
[00:29:02] at Oz, and for joining us on our news journey this week, because it's been a pretty wild one.
[00:29:06] We'll have lots of show notes links where people can go and check out Zach and what he's building,
[00:29:11] you know, Oz over there at Warp, as well as check out his episode here on Dev Interrupted, because,
[00:29:16] remember, he was a past guest here, and his episode about Warp is really amazing. You can see the
[00:29:21] trajectory of how all this stuff is evolving by listening to Zach then and Zach now. And so
[00:29:26] definitely be sure to tune in. And remember, if you're only listening to me and Ben and our
[00:29:31] guests here on the podcast, then you're only getting half the story. So be sure to subscribe
[00:29:35] to Dev Interrupted on Substack or on LinkedIn. We drop a full newsletter with each of these that
[00:29:41] has a lot of links to further articles and things you can learn. So be sure to check it out and
[00:29:46] continue the conversation there. And thanks, y'all, for tuning in. Any exciting weekend plans
[00:29:52] on your ends? I'm flirting with the idea of going skiing. We live near a ski area. We're supposed
[00:29:58] to get some snow this weekend.
[00:30:00] I hope I can do it.
[00:30:01] Ben loves to ski.
[00:30:02] I'm with you on that.
[00:30:03] Right now, I'm just trying to not be sick.
[00:30:06] Nice.
[00:30:06] It's that time of year.
[00:30:07] Let's get them to work on.
[00:30:08] Yeah.
[00:30:09] Well, it's super sunny here in LA,
[00:30:10] so there's no skiing,
[00:30:11] but I'll have to check some pics
[00:30:13] and y'all have a good rest of your weekend.
[00:30:15] You too.
[00:30:15] Thanks for having me.
[00:30:24] AI helps your developers write more code faster.
[00:30:27] But here's the problem.
[00:30:28] Your review process hasn't sped up.
[00:30:30] The queue grows, reviewers get burnt out, cycle time stalls.
[00:30:34] LinearB changes that.
[00:30:35] Our AI reviews every PR the moment it's created,
[00:30:38] catching bugs, security gaps, and performance issues before humans get involved.
[00:30:43] It even writes the PR description automatically.
[00:30:46] Your reviewers spend less time on first-pass problems
[00:30:48] and more time on architecture and business logic.
[00:30:51] Break the bottleneck.
[00:30:53] See how LinearB accelerates your workflow.