How do you move from dabbling with AI and vibe coding to building real, production-grade software with it? In this episode, Austin Vance, CEO of Focused, returns, and we transition the conversation from building AI-enabled applications to fostering AI-native engineering teams. Austin shares how generative AI isn't just a shortcut: it's reshaping how we architect, code, and lead. We also get to hear Austin's thoughts on the leaked 'AI Mandate' memo from Shopify's CEO, Tobi Lütke.
We cover what Austin refers to as 'AI-driven development', how to win over the skeptics on your teams, and why traditional patterns of software engineering might not be the best fit for LLM-driven workflows.
Whether you're an engineer, product leader, or startup founder, this episode will give you a practical lens on what AI-native software development actually requires—and how to foster adoption on your teams quickly and safely to get the benefits of using AI in product delivery.
Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.
Inside the episode...
- Why Shopify's leaked AI memo was a "permission slip" for your own team
- The three personas in AI adoption: advocates, skeptics, and holdouts
- How AI-driven development (AIDD) differs from "AI-assisted" workflows
- Tools and practices Focused uses to ship faster and cheaper with AI
- Pair programming vs. pairing with an LLM: similarities and mindset shifts
- How teams are learning to prompt effectively—without prompt engineering training
- Vibe coding vs. integrating with entrenched systems: what's actually feasible
- Scaling engineering culture around non-determinism and experimentation
- Practical tips for onboarding dev teams to tools like Cursor, Windsurf, and Vercel AI SDK
- Using LLMs for deep codebase exploration, not just code generation
Mentioned in this episode
- Focused (focused.io)
- Shopify internal AI memo
Subscribe to the Convergence podcast wherever you get your podcasts, and catch video episodes and the other crucial conversations we post on YouTube at youtube.com/@convergencefmpodcast
Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow.
Follow the Pod
LinkedIn: https://www.linkedin.com/company/convergence-podcast/
X: https://twitter.com/podconvergence
Instagram: @podconvergence
[00:00:00] Welcome to the Convergence Podcast. I'm your host, Ashok Sivanand. "We are craftspeople, and as new tools come out, we should be engaging with those tools. Building software with AI is more efficient than building software alone." On this show, we'll deconstruct the best practices, principles, and the underlying philosophies behind the most engaged product teams who ship the most successful products.
[00:00:36] This is what teams are made of. Hey folks, welcome back to another episode of Convergence.fm. You join us today as we continue our conversation with Austin Vance, who's the CEO of Focused. They are experts at using AI, and especially at building AI and agentic apps. On last week's episode, we got to hear from Austin about their expertise and experience building agentic applications.
[00:01:01] He had super valuable advice and many stories from the field on what apps to build first and how to best build them: the business thinking, the architecture, and the process, so that the first AI apps you invest in are more likely to be valuable, reliable, and maintainable. Today, we're switching gears from building AI-enabled apps to building AI-enabled teams.
[00:01:22] What it takes to build teams that use AI on a daily basis to drive product team productivity and build better, more delightful software products. We get to hear Austin's breakdown on Shopify's leaked AI mandate memo, real-world coaching strategies for skeptics on your team who aren't fully using AI yet, and how leading engineers are evolving their software architecture in the era of LLMs.
[00:01:50] Austin shares how his teams at Focused train their customers' teams to use tools like Cursor, LangChain, and Grok, and why having AI as a thought partner is becoming essential, no longer optional, for every modern knowledge worker. Whether you're a CEO looking to drive adoption amongst your company and your team, a product leader looking to enable your team with more strategies and tips,
[00:02:15] or an engineer looking for new approaches to harnessing AI in your delivery process, this episode is for you. Subscribe to the podcast to get future episodes as soon as they're published. If you find this helpful, give the podcast a five-star rating on your podcast app, or hit that like button on YouTube. Since we cut this conversation into two episodes, we're jumping right into it with Austin here. And he's going to be talking about how important it is for knowledge workers to be using ChatGPT at work,
[00:02:44] and how the Shopify leaked memo is an amazing artifact to help drive adoption at your company, whether it's amongst your team or something that you can show your boss. Here's Austin. To the people that aren't using ChatGPT at work, I would highly recommend it. Now, you know, there's always data security rules, and like, did your manager even approve it? Like, that kind of stuff.
[00:03:12] I think Toby's memo was not only an ultimatum for his team, but a permission slip for yours. And so, like, what Toby said was, you know, we at Shopify are not hiring anyone new unless you can prove to me that AI can't do it. And so if you are trying to use gen AI tooling as someone who's not a software developer,
[00:03:41] even as like a marketer or a, I don't know, a researcher at a company, you should take that memo to your manager and say, look, some of the best companies in the world are mandating that their people get more efficient by using these tools. We should be doing this. And so that permission slip is there for you too. And for anyone listening that's not familiar, this is referring to Shopify's CEO,
[00:04:11] um, had a leaked memo go out that he sent to the entire company, that essentially the use of generative AI is no longer optional as they, um, continue to build their e-commerce platform. And, um, we did an episode on it earlier. We'll have a link in the show notes if you want to read it. But, um, Austin, I'm curious, he had five or six points in there, um, that he laid out.
[00:04:38] Um, were there any of those that you would want to double down on or highlight or bold, that resonated more with you than the others? God, there are so many. Uh, so the first is the one I said, you know, we will not hire more people until you can prove that AI can't do that job. I think that is a very strong statement, and it's a bright red line saying,
[00:05:00] we think that AI, or I think that AI, is impactful in a way that we haven't seen automation be; it is a non-linear progression of automation of tasks. Um, working in digital transformation, like you and I both have, we know how hard it can be to drive change at that level of scale. Yep. And this is an amazing forcing function. It is.
[00:05:28] HR just has a checklist. And every time someone, you know, has a headcount req... Um, and so even if someone does it the first couple of times, by the third time they know that HR is not going to even hear them out until they've already proven it. And so within a couple of cycles, whether it's, you know, once a month or once a quarter that you're asking for more reqs, um, by the third quarter, a majority of the leadership in the workforce,
[00:05:54] even if they haven't been thinking about it before, are now sort of forced into it. And so there's a systems thinking aspect to it, I think. Yeah. That's amazing to come from a CEO. A hundred percent. And Shopify is big and they're a very progressive company. You know, they're not Silicon Valley; they're actually a proud Canadian company, and being a Canadian, I'm going to, like, you know, claim that here. Yeah. But they have that kind of, like, the Silicon Valley vibes, right?
[00:06:23] Like they're part of that generation of startups. And by being that firm, he, like I said, gave permission to a bunch of other more traditional companies to do it. It felt like, uh, like when Google and Apple and all the, you know, Uber started saying, Hey, you all have to come back to the office. It was a permission slip for a bunch of other companies to say, Hey, we also think that it is more valuable for us to be working in an office. And that second tier and third tier of organizations could have that because the, the kind of top tier were doing it.
[00:06:52] The other parts of that memo that really stood out to me: he has a line in there, the gist of which is, the only way to get good at using AI is to use AI. And I constantly find myself, you know, this is not building LLM-based applications. This is using LLMs to make my job every day faster. And I'm constantly reminded how powerful they are.
[00:07:22] And I'm constantly surprised by their capabilities. And so like, I have a travel trailer and the window on it blew out. This is such a silly example, but it's so crazy how good it was. A window on this travel trailer blew out and I needed to buy a replacement window. I took a photo with my phone of the window of the front of the travel trailer. There's a spare tire sitting on that travel trailer as well.
[00:07:48] I sent it to ChatGPT, and I thought it would do, like, a web search. I told it the brand of the trailer and all this other stuff, and I said, what size is this window? So I can board it up. And instead of doing a web search, what it did is it saw the marking on the spare tire, like, it's an R22 or whatever tire. So the diameter of that tire is this. And then it wrote a Python script to use the tire size to then measure the size of the window.
[00:08:17] And told me exactly the size. It was within like six inches of the window size. Like mind blowing how good that was. And that's such a clever way for it to get good at doing math, which is something that, I mean, the way that I've thought about it is that if we look at the professional services budgets in our businesses, we're more likely to get savings on our legal professional services than accounting and tax. Right. Right.
[00:08:47] Based on how LLMs are built. But having it generate code is a super smart way to overcome sort of that limitation or that gap there. Yeah. And there are other examples of just learning how to use these things. Like, my head of design did a webinar on how to vibe code to help you be a better designer. So rather than prototyping in Figma and giving what are essentially flat prototypes, you know, Figma prototypes,
[00:09:16] they can feel like you're clicking a button, but you're not. You're essentially progressing through a slideshow. He's very passionate about using vibe coding to make those Figma designs come to life, so you can feel them even more as a stakeholder. The platform that we delivered the webinar on gave us all the emails for all the people, so we could follow up and send the video and all that kind of stuff.
[00:09:45] But it gave them to us in a giant HTML table with no ability to export. And so, you know, we open up the page source, we copy the source of the table, and we drop it into ChatGPT. And we say, extract all the email addresses for us. And it can't do it. The context was too big; there were just too many characters for the LLM to manage. And then these LLMs are also lazy.
[00:10:13] So they have a token limit, and they'll stop giving you output after a certain point. So it'd be like, you know, Austin at Focused, Bill at Focused, and "the rest of the emails." And you're like, no, no, I want the rest of the emails too. Keep going. But because we use these things all the time, we were like, oh, what if we ask it to write a Python script that can extract the emails out of this HTML document?
[00:10:42] And so we say, okay, here's the HTML document, write a Python script that extracts them. It writes, like, 30 lines of Python. We run that and it gives us a CSV of all the emails. And you don't run into the same laziness issues, or the same token length and context window issues. So using them makes you better. So that was the first one I got out of Toby's memo. And the second is kind of in the same vein: AI is your pair.
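The trick Austin describes, asking the model for a script instead of the answer, sidesteps the context window and output-length limits entirely, because the extraction then runs locally. A minimal sketch of what such a generated script might look like (the file names and a simple regex approach are assumptions here, not the actual script from the episode):

```python
import csv
import re
import sys

# Regex for email-like strings; good enough for pulling addresses
# out of an exported HTML table (not a full RFC 5322 validator).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html: str) -> list[str]:
    """Return unique email addresses in order of first appearance."""
    seen: dict[str, None] = {}
    for match in EMAIL_RE.findall(html):
        seen.setdefault(match)  # dedupe while preserving order
    return list(seen)

if __name__ == "__main__" and len(sys.argv) > 2:
    # Usage: python extract_emails.py attendees.html emails.csv
    with open(sys.argv[1], encoding="utf-8") as f:
        emails = extract_emails(f.read())
    with open(sys.argv[2], "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["email"])
        for email in emails:
            writer.writerow([email])
```

Because the script processes the file deterministically, it handles any number of rows without the truncation or "laziness" the LLM showed when asked to list the addresses directly.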
[00:11:09] Um, you know, at my first job I was pair programming, and then I got the opportunity to work at Pivotal Labs, where they pair program all the time. And, like, I just grew up pair programming. And the idea of working solo, or programming solo, is actually really difficult. Like, you have this weight of all these decisions you're making.
[00:11:33] And having AI by my side, inside of, like, a Cursor or a Windsurf or something like that, gives me the same amount of comfort and collaboration that I get from a pair, often. And I think now people who never had pairs can have that too. I can have a thought partner as a marketing person. I can have a thought partner as a legal assistant. And I can talk about, like, what do you think about this?
[00:12:01] But you need to use these things because LLMs are, they're naturally pleasing. They want to make you happy. And so if you're looking for a thought partner, that's not going to tell you your ideas are really bad unless you kind of know how to ask it to tell you. But that's the same about a human too. Like you have to have high trust with a person before they can be like, you know, Austin, that idea is dog shit. The LLM is the same way. Like, oh, this architecture is really great. And you're like, you can't just be like, okay, cool.
[00:12:30] It told me too. You have to kind of use it as a probe and an exploration partner, and then know how to get real feedback out of it. Yeah. On one of the early episodes, we actually had the VP of Engineering from Shopify, Farhan Thawar. Yeah. And they were early adopters of Copilot. I think they may even have had early access to it from GitHub before its GA.
[00:12:58] And he talks about how the folks who were more apt to pair programming took to Copilot way faster than others, because they were already familiar with having to think out loud, having to get feedback. If you didn't get back what you wanted the first time, like, if you and I are talking and I ask you a question and I don't get the answer I'm looking for, I don't quit. I don't fire you.
[00:13:25] I ask you a different way based on learning the answer you gave. And you're able to kind of reprompt. And so there's no doubt about that. Based on your first point, I think there's also something really powerful to call out, which you're obviously doing too as a CEO, is that Shopify, of course, a very technical company, all the way up to Toby, the CEO, who continues to write code and get familiar with these tools themselves.
[00:13:51] Versus thinking this is something for my engineers to do or this is something for my CIO to lead. Versus this is something that is a board level adoption, including the CEO, including the C-suite, not just the technical folks in the room. I mean, I use it as a thought partner for every aspect of my job. It's not just a programming partner to me.
[00:14:14] You know, I'm working through, you know, positioning and go-to-market strategies, and doing deep research on our competitors, and understanding market trends with AI, where I might have had to hire an agency or put a person onto a task, and they'd come back a week later with something that's pretty close to what I was looking for.
[00:14:35] I can iterate on those things so much faster as a CEO, and I can give substantially more clarity to my team faster, because I have these tools. And I am, I would say, like, dependent on them because of that. Like, when I have to go back to using humans, like an agency, to help me with some designs, it feels slow. Like, it just does. It doesn't feel as efficient.
[00:15:04] And I can also tell when an agency is not using it. We're using an agency to help us with some media content that we're going to release soon, and they use AI like crazy. And it's actually really impressive how fast they iterate versus other agencies we've worked with that don't use AI, or are actually afraid of it. And that's not to say they're just a middleman between ChatGPT and us.
[00:15:32] They apply, like, their industry knowledge, their experience. They use those things to ask the right questions, and keep asking them of ChatGPT, and then get to something that is very human, very authentic, and very original. But they're using ChatGPT, or an AI, I don't know if it's ChatGPT, but they're using an AI as that thought partner to help them go faster.
[00:15:55] Fostering an engaged product organization and aligning them with the principles around lean, human-centered design, and agile will more than likely lead to successful business outcomes for your organization. But getting started or getting unblocked can be hard. This podcast is brought to you by the player coaches over at Integral.
[00:16:15] They help ambitious companies like you build amazing product teams and ship products in artificial intelligence, cloud, web, and mobile. Listeners to the podcast can head on over to integral.io slash convergence and get a free product success lab.
[00:16:35] During this session, the Integral team will facilitate a problem-solving exercise that gives you clarity and confidence to solve a product design or engineering problem. That's integral.io slash convergence. Now, back to the show. So we got to talk a lot about which apps make the most sense to get started with.
[00:17:01] When you're going about building these apps, what are some of the things you want to put into place around your architecture, around choosing the right models, making sure you have the right quality, security, privacy in place? Now, let's switch gears over to your team using AI. I know that you're a very frameworks-oriented systems thinker, and we've nerded out about theory of constraints before.
[00:17:25] And so I can imagine there's a high level of intention put into, really similar to the Toby email, driving your team towards using AI a lot. And at a high level, what's that been like at Focused? Yeah. A little while ago, we canceled all of our JetBrains licenses, and the whole company is using a combination of Windsurf and Cursor.
[00:17:54] And the reason we did that is we are craftspeople. And as new tools come out, we should be engaging with those tools. And it is very clear to me that building software assisted by AI – I don't like the term AI-assisted development. I think there's a better term, which I can talk about.
[00:18:18] But building software with AI is more efficient than building software alone. And I think building software with AI can give everybody a pair. And so, like, we see our teams who used to pair 100% of the time breaking apart, using AI, generating substantial amounts of the code they're writing,
[00:18:41] and then coming back together with their pair and – or with a pair and kind of rounding out parts that AI wasn't able to help with. And it's become another tool that our teams use. And the effect of that has been really meaningful to our business. We're seeing ourselves deliver software, like, whole applications. Proposals are – proposal times are dropping, like, 20%, 25%.
[00:19:08] Where we would have taken, yeah, where we would have taken 10 weeks, it's now 8 weeks. And close rates, I imagine, are going to go up just from speed. But anything else you're seeing around that? I mean, total cost, I mean, that means total cost of a project can drop. That means, like, time to first feature or MVP goes down. Like, I mean, the output is substantial. And I am not talking about vibe coding. Like, I think vibe coding is a skill, and I think it's an awesome one.
[00:19:37] It's really cool to see what people are generating, you know, by opening up Cursor, Windsurf, you know, V0, Lovable, Bolt, all these different tools, and creating single-purpose SaaS products, creating really nice pieces of software. It is a totally different skill to use AI to build software inside of, like, entrenched existing systems and have the AI-generated code integrate with those systems. So vibe coding, correct my definition.
[00:20:05] It's essentially where folks who are not necessarily super technical, like maybe your head of design isn't necessarily technical, given their role, can generate a real application, not just a clickable prototype. And there's a rapidness to it, where previously they were either constrained by only having the sort of – the power of imagination be assisted by clickable prototype,
[00:20:35] which has a constraint on it, or they have to rely on an engineer who can write the code and then have to wait for as long as the engineer will take in order to create that feedback loop and validate whether that product works or not. Maybe even in some cases put that out to market. The constraint on vibe coding, as I'm understanding it, is, number one, once you have to integrate with a more traditional system, then that starts to get really hard, like we talked about, that furniture or interior design app.
[00:21:03] It's a lot easier to reskin a photograph in a certain design theme versus having parts of it be clickable that I can go by the couch that you're showing in there, which requires a whole other level of things, like integrating into the e-commerce website. And then I think the other thing maybe is there's scale constraints on how many folks could be using or how much demand you could put against a vibe coding app.
[00:21:32] So what are those constraints, if any? So I definitely think the integrations are the hardest part. And I don't mean, like, the vibe coding tools; they will start to integrate things like Shopify, and they'll know how to hook up to those APIs, like these big public APIs. Most of them can now integrate with Supabase, which is like a SaaS Postgres kind of thing. So you can have a real database and have auth and all that kind of stuff. So you can get a lot of those systems.
[00:22:01] But if I'm an existing company that has existing software, and I don't want to use the term legacy because sometimes that conjures, you know, mainframes, but I just have existing software, existing monoliths, existing services, and I have developers. Editing that software with AI is different. Now, you said scale. The apps that the vibe coding tools create are good apps.
[00:22:31] But eventually, you know, the quality of the code base might get bad. And the ability for that code to change might get harder. And the ability for AI to change that code might get harder. And that's, you're running into the same boundaries of what I've been calling kind of AIDD or AI-driven development.
[00:22:50] And the reason I call it that is, when I think about test-driven development, test-driven development is not about the tests; the tests are a tool to structure the code base in a way that enables change, that allows me to change with impunity. And when I use AI to generate parts of my code, I am using AI to structure the code base and build software that I can change in the future.
[00:23:17] I am driving my code out with AI. And "AI-assisted development" feels like too passive a role in my architecture. And vibe coding feels kind of too fun, right, for the enterprise or for scaled code bases. Whereas I'm using AI to drive out architecture design decisions. And I'm structuring my code in a way that may be even better for AI than it would have been for a person.
[00:23:45] That is a real mindset shift, and it's a methodological shift for developers. And starting to think about it that way is very important. I love that. And extrapolating, it kind of starts to feel like AI integrating with other AI. Humans are sort of a constraint, maybe. Oh, 100%. And, like, you know, I don't know how fast this will happen.
[00:24:13] And I'm not working at one of the, you know, top-tier AI labs. But if you go back and read Clean Code, or you look at, you know, the SOLID principles, or you even go to, like, the Ruby Lang website. Each one of these things is about the design patterns that we've created, the architecture, the good code patterns that we have.
[00:24:40] Those are designed to make it easier for humans to reason about large complex systems. So the reason I use object-oriented design is because I can, you know, have a user object. And then in my brain, I don't have to think about everything a user does. I just assume everything the user does is inside of this object. The reason I name methods a certain way is so I don't have to look at all the ifs, all the if statements and variable assignments inside of that method.
[00:25:04] I can say, you know, if it says log out user, and it takes one argument and it's a user, I can assume it's going to log them out. I don't have to reason about everything. With large language models, designing code the way a human would read it is not always the best pattern for a large language model to be effective at it. And there's a balance there.
[00:25:26] And as you work in scaled code bases and scaled systems, understanding how to give context to language models, and how to design future features in a way that is easier and better for the AI to help and assist and drive out new features, is a skill. And it's a skill like learning test-driven development. When I learned TDD, the first thing I was doing was writing tests.
[00:25:51] As I mastered TDD, what I was doing was architecting my system to two interfaces: a test interface and the real one. And that allowed portability, high cohesion, you know, low coupling. And the same happens as you start to work with AI and drive code out with it. And there's a ton of practice in that. And there's a ton of methodology in actually doing it appropriately. And some of it's easy, some of it's hard. And you can get wins kind of the whole way through.
[00:26:22] And you talked about sort of that art that you have to learn. And you mentioned something earlier around the Googling. And, you know, maybe you're sitting next to your uncle or aunt and they're using the computer to Google something like movie times or something that the family's going together. And you want to kind of slap their wrist, right? Yeah, don't do it that way.
[00:27:12] Exactly. And you want to stop this at the fastest pace. I mean, the biggest thing, and it's right there in Toby's memo, is they have to use it. You have to use it. It would not have been an acceptable answer to say, I Googled it, there were no results, I couldn't figure it out, so I just didn't do it. It's like, you're probably asking the question wrong.
[00:27:41] Can you pull two or three things together? And that same thing is true about using AI to drive out, like, development skill and development process and code. And you start to learn how to ask the LLM the right questions in the right way to get better results. And one of the things, I think that, like, there's a really funny meme, which is, like, vibe coders are going to generate a whole bunch of dog s*** code. And then, like, there's going to be all these consultants that come fix them.
[00:28:08] What I actually think is going to happen is vibe coders are going to generate a whole bunch of dog s*** code and then learn how to use LLMs to fix that code. And they will write the patterns in the future on how to build scaled code bases with AI, where all these older, more senior software developers will actually struggle with it. Because where AI is, like, writing dog s*** code, they're clinging on to patterns of the past.
[00:28:37] File structures, you know, method naming, class and object orientation, like, those types of things will limit them unless they approach it with a deeply open mind. There are tools that you can use. I do want to get to the tools. I also want to get into maybe any sort of team methodologies that you have.
[00:28:57] Like, I know both our companies have Friday demos, where we would shut our laptops down and show each other our work while drinking some beers and give each other feedback. Is there a similar thing where, like, the folks are a little bit more advanced on prompting? Toby mentioned in his memo too, I think, that you're expected to share with your team. What's a more practical kind of way that y'all are doing that? When we onboarded Cursor and Windsurf, we set up kind of a retrospective structure.
[00:29:27] And we're pretty open, you know, the company's fairly open. So there's a lot of chatter in Slack about what's working and what's not. I could probably pull up our Slack today and someone's like, hey, I'm trying to figure out how to get Windsurf to do this or Cursor to do that. And there'll be, you know, people who come back. So one is just, like, ask for help; that's already built into our culture. But then really retrospecting back on, like, are these things really adding efficiencies, and where and where aren't they?
[00:29:55] And so, like, one example is the generation of significant amounts of, like, standard boilerplate code, huge efficiency. And there's pretty much universal agreement at Focused that that's true. If I'm building a React app and there's a bunch of, like, wiring to make that React app work, when React just has an ass load of boilerplate, the LLMs are so good at doing that.
[00:30:20] But if I'm working on a logistics routing algorithm in Kotlin, the LLM actually slows me down. And so, like, having an acknowledgement of where these LLMs are powerful and where they're not is, like, what we're constantly discussing and retrospecting on the tools. And then we reward people who try and share. You know, and we do that through, you know, public praise, through bonuses, through raises, through all that kind of stuff as well.
[00:30:49] On the flip side, and like we mentioned, Farhan was on here over a year ago, and he talked about how they'd been doing it for a year. So it was two years before Toby's memo. And this is, you know, think of that as the, like, final step for the, you know, laggards, and solving for that. For the other folks, maybe, you know, call it the late majority of folks on your team,
[00:31:14] Anything else that you can think of that, you know, why they're not adopting yet or ways to kind of help them understand it better? I bucket people into three groups. There's advocates, skeptics, and holdouts. And at any enterprise, you're going to get all three of those. At Focused, we cannot tolerate holdouts because we're trying to push the frontier of, like, what is capable? How do you use AI to drive development inside of, you know, large entrenched systems? How do you build agents inside of large entrenched systems?
[00:31:42] And so if someone's like, AI just can't write code, like, we're just not the right company for you. Now, skeptics are a really interesting group, because they often look like the type of person that is kind of, like, peeking around the corner watching a group of people play, being like, that kind of looks fun, but it looks a little scary.
[00:32:04] And what we really want to do is platform our advocates, all those people playing, and then have them actively, like, waving over the skeptics. Hey, look, look, look, look what I just did. Look what I just did. When we engage with customers, what we find is often there's more skeptics than advocates or holdouts. And so what we're trying to do is show quick wins. Hey, where are you really generating a lot of boilerplate code? Where do you find that, you know, you're stuck?
[00:32:33] How can an LLM be super valuable to you? And one of the places that we found really, really high value early is not writing code at all with an LLM, but instead exploring code bases with LLMs. And so the first thing I will do is load a code base up into a language model somehow, you know, whether it's in Cursor, Windsurf, Copilot, RepoPrompt, you know, there's a gajillion tools. And I'm like, okay, I'm going to start working on this feature. Where is authentication handled?
[00:33:03] And like, how does that happen in this application? And the LLM can look through it and be like, oh, we found this, you know, area of the application. Here's the auth scheme. You know, it uses OAuth. The tokens are stored in the database like this. Okay, cool. So like, if I wanted to change auth, what would I do? And what will happen is even the skeptics will be like, oh my gosh, like I haven't opened up this code base in months. And I would have had to like go find this. I'd have to read a readme that's out of date. I would have had to ask Bill, who's been working on it.
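For anyone who wants to try this exploration-first workflow outside an IDE, here's a minimal sketch of the idea. To be clear, this is not how any of the tools mentioned above work internally; the crude file-packing approach and the `ask_llm` stub are purely illustrative assumptions:

```python
# Illustrative sketch (not any vendor's actual tooling): pack some source
# files into one prompt so a chat model can answer questions like
# "where is authentication handled?". Real tools use retrieval and
# embeddings instead of this naive character budget.
from pathlib import Path

def build_exploration_prompt(repo_root: str, question: str,
                             extensions=(".py", ".ts", ".kt"),
                             max_chars: int = 20_000) -> str:
    """Concatenate small source files under repo_root into one prompt."""
    parts = [f"Question about this codebase: {question}\n"]
    used = 0
    for path in sorted(Path(repo_root).rglob("*")):
        if path.suffix not in extensions or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        if used + len(text) > max_chars:
            break  # crude context budget; stop once we'd overflow
        parts.append(f"--- {path} ---\n{text}")
        used += len(text)
    parts.append("\nAnswer with file paths and a short explanation.")
    return "\n".join(parts)

# In practice you'd send this to a model, e.g. with a hypothetical helper:
# answer = ask_llm(build_exploration_prompt("my-app", "Where is auth handled?"))
```

The point is only that "explore before you generate" is cheap to try: the model answers questions about the code, and nothing gets written.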
[00:33:31] And now this AI can act as an SME or a senior engineer in the code base just to help me explore. So we, like, turn off agent mode. And we get people comfortable. Especially, like you said, pair programming gave me the comfort to talk about my code; I get people comfortable with talking about their code and talking about what they want to do with AI. And then as they start to get a little bit more comfortable, we turn on agent mode and we say, hey, how about we ask it to write something? What do we need it to write first?
[00:33:59] Okay, so we want to add, you know, whatever, OTP to auth. Simple feature, right? Let's describe those requirements and ask it what it would do. Don't let it write any code yet. Okay, here's what it said it would do. Is that what you would do? Actually, it's pretty darn close. Okay, what would you do differently? Okay, let's tell it that. Cool. So now it's really close to where I want to go. Let her rip. And so then we say, all right, go write the code. But the same way I would never
[00:34:30] just, like, expect a developer to blind-approve a pull request, the developer who's piloting that AI needs to understand how to click accept or reject on each one of these code suggestions the AI is making. And their job is to be that pilot. And the more autonomy I give the AI over my code base to write the code, the more skilled the pilot needs to be.
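That plan-first, accept-or-reject-each-change workflow can be sketched as a small loop. Everything here is a hypothetical stand-in: the `Change` type, and the `propose_plan`, `propose_changes`, and `review` callbacks represent the LLM calls and the human pilot clicking accept/reject in a tool like Cursor or Windsurf:

```python
# Hypothetical sketch of the "pilot" workflow described above: the model
# proposes a plan first, the human approves or refines it, and then each
# generated change goes through an explicit accept/reject decision.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Change:
    file: str
    diff: str

def pilot_session(requirements: str,
                  propose_plan: Callable[[str], str],
                  approve_plan: Callable[[str], bool],
                  propose_changes: Callable[[str], List[Change]],
                  review: Callable[[Change], bool]) -> List[Change]:
    """Return only the changes the human pilot accepted."""
    plan = propose_plan(requirements)        # "ask it what it would do"
    if not approve_plan(plan):               # "is that what you would do?"
        return []                            # refine the prompt and try again
    changes = propose_changes(plan)          # "let her rip"
    return [c for c in changes if review(c)] # accept/reject each suggestion
```

The structure makes the mindset concrete: nothing lands without the pilot's explicit decision, no matter how good the plan looked.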
[00:34:57] And what does the eval framework look like for pull requests, do you think? So we have kind of a WIP product. It's not really a product, it's just, like, a little thing to help ourselves move faster. So, like, if our CI goes red, we'll have AI notice the CI is red and then make a PR to make it green.
[00:35:18] And we like that because, you know, if CI goes red, all development is supposed to stop. It's like the andon cord at the Toyota factory: we pull the cord, everybody stops, we fix CI, and then we keep going. And so if I can have AI fix that for me before a developer even notices there was a problem, like, fuck yeah. Maybe one day pull requests can get auto-approved.
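In outline, that kind of CI bot might look something like the sketch below. Focused hasn't shared their implementation, so the function name and the `suggest_fix`/`open_pr` hooks are invented stand-ins (a real version would wire in your CI provider's API and something like the GitHub CLI); the one load-bearing idea is that a human still reviews the resulting PR:

```python
# Illustrative outline of a "CI went red, open a fix PR" bot. The CI
# status string, the LLM call (suggest_fix), and the PR creation
# (open_pr) are all stand-ins, not a real integration.
from typing import Callable, Optional

def make_fix_pr(ci_status: str, failure_log: str,
                suggest_fix: Callable[[str], str],
                open_pr: Callable[..., None]) -> Optional[str]:
    """If CI is red, ask a model for a patch and open a PR. Returns PR title."""
    if ci_status != "red":
        return None                      # nothing to do while CI is green
    patch = suggest_fix(failure_log)     # LLM proposes a patch from the log
    title = "ci-bot: attempt to fix red build"
    open_pr(title=title, body=failure_log[:500], patch=patch)
    return title                         # a human still reviews this PR
```

The bot only opens the PR; approval stays with a person, which matches the "no full autonomy yet" stance in the next answer.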
[00:35:41] I think, at least in my head, there is no place right now where I would give AI full autonomy to write production code without human eyes ever seeing it. But just like I think pull requests are less efficient than pair programming, I think an AI writing and creating a pull request is less efficient than me having a conversation with it and accepting or rejecting each individual change. And then I submit that pull request, and then someone else reviews it, and then it goes into prod.
[00:36:11] It closes that feedback loop. So that makes a ton of sense in terms of the process. I've got a half-baked thought, and I'd love to get your feedback, or, like, have you help me collaborate on it. You talked about culture a lot, and people. There was something I've always noticed, where some folks do better with non-determinism. Where, hey, you know what? Maybe your pair is not giving you exactly what you need. Maybe your people manager didn't give you the response you need.
[00:36:40] There are some folks who are really good, in my opinion, who, like, understand the situation and are able to try again, right? Like, say, you know, hey, let me repeat what my concern really was; I didn't get the answer from you. And there are other folks who get super frustrated about it. And we obviously want to push for the first bucket of folks. And I kind of have this sense that that first bucket of folks is also going to be better at prompt engineering, given the non-deterministic nature.
[00:37:09] And then maybe, like, optimistically, I think the use of AI is also going to help folks be more comfortable with non-determinism, compared to, like, other apps that we've used before, or expecting humans to work like more rules-based apps. And I'm curious if you've seen, like, a people-management kind of observation, or anything along those lines. The advocates are definitely the most patient, and they're the most forward-pushing type of, like, personality. And so they're okay.
[00:37:39] You know, they know they're at the bleeding edge of a technology. And when you sit on the bleeding edge, there are like cuts that happen. There's imperfection. And they enjoy the challenge of that imperfection. Where a skeptic might see that imperfection as a reason not to engage. Oh, I will wait until it is perfect.
[00:37:59] But the truth is like this technology is moving so fast and even with its imperfections, it is so powerful and so, I don't know, just so powerful that it's irresponsible not to engage with it. We've gotten to talk a lot about building apps, architecting apps, and you've mentioned a few tools already. What are some of the standout tools?
[00:38:25] Like, if you could buy secondary shares in one of these startups, what are the ones where you'd be like, take all my money? I mean, I love Grok. We are large advocates for LangChain; it is the go-to agentic orchestration framework. Claude and ChatGPT, those things are great, but I think those are obvious.
[00:38:51] On the other tools, Cursor and Windsurf are the best programmer assistants that exist right now. Copilot, you know, Microsoft kind of dropped the ball with Copilot. I think it's getting better, and we'll engage with it as we see good reports. But, like, I would happily invest in Cursor or Windsurf. There's a related question that I tend to have for most of our guests here, or two, rather.
[00:39:21] And the first one is around a recent product you unboxed or a service you got to experience that just totally blew your socks off. It doesn't have to be AI-related. It doesn't have to be work-related; it could be work or home. Yeah. What comes to mind, something that just blew you away? So this season I switched all of my ski gear to AT ski gear, which is, like, a lighter, softer plastic. And holy shit, I rip on that stuff. And you live in Denver.
[00:39:48] So that's like 60 ski days a year at least, I imagine. Well, I have three kids, so it's like six ski days a year. But still, those six are so much better with lighter, faster gear. It's the first time I've bought gear since I was 16 years old, and it's just so good. But on the tech side, those vibe coding tools, like v0, Lovable, Bolt. v0 is the one I really prefer.
[00:40:15] I was, like, working with my kid's daycare teacher, and we were trying to generate a website for her for some local events she's trying to do, some, like, cool local stuff. And what it felt like is we were able to generate software at the speed of thought. Like, we were sitting there together, working on building a website, and she was watching it materialize in front of her eyes. Versus, I mean, I'm a fast programmer, and I can't do that. Like, it's just not possible.
[00:40:43] I can't put together all the React components that fast. And even a fast designer in Figma can't do that. You can't iterate at that speed. Wow. And what a magic trick you did with the daycare teacher. Your kids going to her class probably have, like, a different archetype now of what their dad is. Yep. That's right. The software magician. The software magician. Exactly. All right.
[00:41:10] Second question that I like to ask everyone is around teams. What's a team, and this could be fictional or real, and it doesn't have to be a team that you've been on yourself, that you really aspire towards and that gives you a lot of, sort of, teaming inspiration? Highly collaborative, in-person teams just give me a ton of nostalgia. And I think they're just so good.
[00:41:38] And the way people move, the way technology is moving right now, like high trust teams are amazing. I think the, you know, that Toby memo makes me really think about how progressive that company is and how honest he is with his teams and the clear expectations that he's set. And I think that's just really, it's a high watermark for what I would like to achieve.
[00:42:02] For folks who enjoyed this and want to follow more of the stuff you're putting out, and also for getting in touch with Focused if they want help with getting AI into their teams or building an AI app, what are the best ways to get ahold of you and your team? Yeah. So my company's website is focused.io, and then I am Austin, B as in boy, V as in Victor, or AustinBV, on, like, every social platform. So find me on X, LinkedIn, you know, Threads, all that kind of stuff.
[00:42:31] I post a lot about, like, agentic architectures, how teams can be writing better, scaled software with AI, that kind of thing. Yeah. Awesome. And the same for your GitHub? AustinBV. Sweet. We'll make sure to have all that in the show notes. Thanks a lot for making the time today, Austin. Oh, this was super fun. Thanks for having me.
[00:42:58] I hope you enjoyed those conversations with Austin. If you missed the first one, with advice on what apps to build first and how to architect them, make sure to check that out. We'll have a link in the show notes. If you enjoyed hearing Austin's breakdown of the leaked Shopify memo, also check out the episode we put out where we talk about each of the six points in their CEO Tobi Lutke's
[00:43:23] email to the entire company about how important it is for them to be driving AI adoption. And we'll make sure to have a link in the show notes to that as well. We'll be back next week, as always, with another episode on how to foster more engaged product teams who ship more delightful products. Until then, I hope you have a great week and we'll see you then.
[00:43:51] Thank you for joining me on the Convergence podcast today. Subscribe to the Convergence podcast on Apple Podcasts, Spotify, YouTube, or wherever you get your content. If you're listening and found this helpful, please give us a five-star review. And if you're watching on YouTube, hit that like button and tell me what you think about what you heard today.
