Experimenting with AI to Ship More Valuable Products with Mike Gehard
Convergence.fm · February 25, 2025 · 01:41:00 · 92.7 MB


Artificial intelligence is radically transforming software development. AI-assisted coding tools are generating billions in investment, promising faster development cycles, and shifting engineering roles from code authors to code editors. But how does this impact software quality, security, and team dynamics? How can product teams embrace AI without falling into the hype?

In this episode, AI-assisted Agile expert Mike Gehard shares his hands-on experiments with AI in software development. From his deep background at Pivotal Labs to his current work pushing the boundaries of AI-assisted coding, Mike reveals how AI tools can amplify quality practices, speed up prototyping, and even challenge the way we think about source code. He discusses the future of pair programming, the evolving role of test-driven development, and how engineers can better focus on delivering user value.

Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.

Inside the episode...

  • Mike's background at Pivotal Labs and why he kept returning

  • How AI is changing the way we think about source code as a liability

  • Why test-driven development still matters in an AI-assisted world

  • The future of pair programming with AI copilots

  • The importance of designing better software in an AI-driven development process

  • Using AI to prototype faster and build user-facing value sooner

  • Lessons learned from real-world experiments with AI-driven development

  • The risks of AI-assisted software, from hallucinations to security

Mentioned in this episode

Subscribe to the Convergence podcast wherever you get podcasts, including video episodes on YouTube at youtube.com/@convergencefmpodcast

Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow.

[00:00:00] Welcome to the Convergence Podcast. I'm your host, Ashok Sivanand. Technologies change, documentation changes. I might be asked to work in a language that I'm not an expert in because I'm a consultant. I don't have to write code, like physically type into the computer. That's a win for me. On this show, we'll deconstruct the best practices, principles, and the underlying philosophies behind the most engaged product teams who ship the most successful products.

[00:00:36] This is what teams are made of. Welcome back to the Convergence FM podcast, folks. And welcome to our season premiere of our second season. Now, there's no shortage of hype around artificial intelligence-assisted software development today. The investment in AI coding tools has gotten into the billions in 2024 alone.

[00:01:00] And that number is not even counting how much investment GitHub has put into Copilot, or Amazon's put into their Q Developer or CodeWhisperer tools. Now, beyond the buzz, what's the real impact on how we build software? Generative AI promises accelerated development cycles and transformed engineering roles, yet raises some critical questions,

[00:01:24] like, how do we maintain quality and security, and keep our IP lawyers composed through all this? What new skills will engineers, as well as engineering and product executives, need? And how will this reshape the way that we structure our product teams? Today, we get to go really deep into this space with someone who I believe brings a lot more substance than speculation. Our guest is Mike Gehard.

[00:01:53] I specifically chose Mike for this conversation because of the really unique perspective that he brings. He's got over 25 years of software engineering experience. A lot of that was dedicated to studying software engineering processes, organizing high-performance teams, and maximizing the business value they bring, during his time at Pivotal Labs.

[00:02:15] And I think what sets Mike apart here is that he's moved beyond the theory of how we might use generative AI, and he spent hundreds of hours doing structured experimentation, implementing AI into real development workflows. I also find him to be super thoughtful about the way that he shares the synthesis and his opinions.

[00:02:37] In the conversation today, we'll get to explore the impact of AI on extreme programming, pair programming, test-driven development, red-green refactoring, and what it looks like going forward. How generative AI is shifting the role of software engineers from code authors to code editors, and further closes that gap between engineers and the users of the applications they're building.

[00:03:04] The delicate balance between automation and human judgment in building scalable, high-quality software, and a pets versus cattle analogy is something we get into. And the question that's on every technology leader's mind, how do we embrace this to genuinely help our teams ship better products, rather than just chasing the buzzwords and the hype?

[00:03:30] Mike's journey spans from tinkering with a Commodore 64 as a kid, like many of us have, to 25 years of professional software development. He's got a unique background combining a chemical engineering undergrad with a master's in software engineering, and he's weathered every major tech downturn so far, from the dot-com bubble to the 2008 crisis, and the transformations going on in the industry today.

[00:03:57] After working at VMware, which he joined via VMware's acquisition of Pivotal, he took 18 months off to do some non-profit work, and returned to find a tech landscape that's been revolutionized by AI. And I found his perspective to be extremely thoughtful. He bridges multiple technology eras in his perspective, and also brings forth a lot of actionable tips that you can put to work today. Let's get into it with Mike.

[00:04:25] Subscribe to the podcast to get future episodes as soon as they're published. If you find this helpful, give the podcast a five-star rating on your podcast app, or hit that like button on YouTube. Hey, Mike. Thanks a lot for making time here for us. And so you've been at Pivotal Labs for quite a long time amongst the folks that I've gotten to spend time with, maybe the most tenured,

[00:04:54] with a couple of different stints there. So tell us about your time there. And I think more importantly, for the folks who are not as familiar with Pivotal, what are some of those things that kept you coming back or kept you staying in those roles? Yeah. So I started at Labs in 2010 when the Boulder office opened. So at that point, it was in San Francisco. I forget, probably 80 people in San Francisco. There were four of us in Boulder. I think New York, the New York office was like four dudes in a co-working space or something like that.

[00:05:24] And left in June of 2023, finally. I actually, in between there, I left twice and came back twice. So I've done everything from individual contributor, working with clients. I was in engineering leadership for a while. We had this thing called the Labs Practices Council. So I was the engineering representative for that, which was an effort to try to, I guess as much as you can use the word standardize these days, standardize what our engineers were doing and the practices we were doing.

[00:05:53] And, you know, standardize might be a bad word, but try to, you know, codify, I guess, is a better word for that. I did education for a while. So I led a team that both wrote and taught curriculum. So Spring curriculum, things like that. Also kind of how to build 12-factor apps for Cloud Foundry. Cloud Foundry was a big thing, so we were teaching 12-factor application development. So I wrote and taught some material there.

[00:06:20] I was a product engineer on both Cloud Foundry and Spring Cloud Services. So both on the platform side, working on logging frameworks for Cloud Foundry, and then also on the app side. So how do you build API gateways for the platform? So lots of experience doing lots of different things. I said I left twice. So one was a stint at Living Social. So I went to run an engineering group at Living Social, which was interesting.

[00:06:49] So your question of what kept me coming back is like, I love labs. The labs practices were great, but they tended to be dogmatic, and they were dogmatic for a reason. Leaving and trying to take those practices and principles to another company where they didn't sign up to pair eight hours a day, didn't sign up to test drive. It was a really interesting way to remind myself of why we did those things,

[00:07:18] not necessarily the things we were doing. So I'll tell a story. I had an engineer at Living Social, and he refused to test drive code. And I was like, okay, we're going to have a discussion around this. It forced me to tell him and try to convince him of why I value test-driven development. But test driving is great, but not everybody sees the value. We finally got to the point where we agreed that he didn't have to test drive code, but if production broke at midnight, he was going to get up and do production support.

[00:07:47] So when you leave a company that is, I guess, dogmatic is the word I will use, like Pivotal, you have to remind yourself of why you do those things. The things are important, but why do you do those things, such that you can break the rules when you need to break the rules? The other one was a small startup. And again, you get to go out into this world where not everybody holds the same values, and it forces you to remind yourself and dig deep as to why you do those things. The what is less important than the why.

[00:08:17] So it was an interesting stint. I mean, like I said, I left 13 years later, so I guess, probably, it's a long time in the tech world. One of the things that you mentioned in there was 12-factor apps. I think it's something that we almost took for granted, and I've also found that not everyone knows the history or the nature or the benefits of that pattern. And so to a layperson, how might you describe that? Yeah. So 12-factor apps, also kind of known as cloud native development,

[00:08:47] it was popularized, I think it was Heroku that popularized this. And, you know, I loved the git push heroku master. Like, that was an amazing experience. Like, here is my source code, just go do this. And 12-factor apps refer to the 12 factors that Heroku came up with. And they are patterns that your app has to implement to be able to plug into something like Cloud Foundry or plug into something like Heroku. One of them being, the configuration is injected via environment variables.
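As a minimal sketch of that factor, the app reads its settings from the environment instead of hard-coding them. The variable names and values below are illustrative, not from the episode:

```python
import os

def load_config():
    """Read settings from the environment, 12-factor style.

    Nothing is baked into the build artifact; the platform (Heroku,
    Cloud Foundry, Kubernetes, etc.) injects these variables at deploy time.
    """
    return {
        "database_url": os.environ["DATABASE_URL"],       # required: fail fast if missing
        "port": int(os.environ.get("PORT", "8080")),      # optional, with a default
        "log_level": os.environ.get("LOG_LEVEL", "INFO"), # optional, with a default
    }

# Simulate the platform injecting configuration before the app starts:
os.environ.pop("PORT", None)  # ensure the default path is exercised here
os.environ["DATABASE_URL"] = "postgres://example.internal/appdb"
config = load_config()
```

The same artifact can then run unchanged in staging and production; only the injected environment differs.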

[00:09:17] So you don't want to hard code your configuration settings in your artifact. You want to inject those via the environment. And there's 11 others that I can never remember because I have to look them up. But it is a way of building software such that it runs efficiently and well on cloud native platforms like Heroku, like Cloud Foundry, like Kubernetes, like, you know, App Engine, anything there. And certainly a good kind of self-check for a lot of teams, especially if you're new to it,

[00:09:45] in terms of what to maybe prioritize as part of the big change management process and changing the way that we write code as we were changing paradigms. Today, of course, we're going to talk about a different paradigm around artificial intelligence coming into software engineering. And, you know, you talked about sort of a practice council. I've heard about sound similar but very different centers of excellence around artificial intelligence,

[00:10:14] even within software teams. And I think what's maybe more interesting for me, to get started on this conversation, was this thing that you mentioned around the why of the different practices we followed, and how understanding them at a more foundational level allows us to change the rules in different environments. And given that the environments are changing here, what are some of the, you know, you've spent a lot of time doing artificial intelligence-assisted development. So maybe let's start with

[00:10:44] what you've been working on first, and then we can get into some of the other details. So I've been running a bunch of experiments. So we are very early in the hype cycle on artificial intelligence, and there are two ways that we can apply artificial intelligence to software. The first one being apps that leverage artificial intelligence as part of their functionality. So ChatGPT, perfect example, you know, all of this movement coming around browser use and how do I leverage LLMs

[00:11:13] to, like, get stuff done? That's one part, you know, that's a whole, you know, whole way we go down that. But the other one that I've been really pushing all in on is AI-assisted development: how do we leverage these tools that are being presented to us, in the form of large language models, to help us write software, regardless of what the software is, better, for some definition of better. And I know better is a loaded term, but for me,

[00:11:42] it's all about focusing on the product. So, I've found typically you have two types of engineers. You have the product engineers; so folks at Pivotal always focused on what we were building and why we were building it and what problems we were solving with technology. There's, you know, there can be other engineer stereotypes that you see where, like, the tech is the cool thing. Like, oh, we want to use Kubernetes because, and you ask why, and they're like, because Kubernetes. And you're like, okay, great. But, like, why Kubernetes? So I fall into that first camp.

[00:12:12] I am very much of the belief that I am a software engineer and the only reason that I write code is to add user-facing product value. And if I can do that lean startup style without writing code, then that is a net win to everybody. So, you know, Eric Ries in the Lean Startup talked about doing, like running your startup on quote unquote paper before you actually built software because software, building software is expensive. So I view all of this through that lens. And for me, it's like, okay,

[00:12:42] well, if I don't have to write code, like physically type into the computer, that's a win for me because, you know, technologies change, documentation changes. I might be asked to work in a language that I'm not an expert in because I'm a consultant. So I walk into a company and, oh, here's a Python application. I've written very little Python in production, but AI can help me. In 25 years of experience, I know what languages look like. I just don't know what Python looks like. So it helps me do that. So for me, I have been running

[00:13:12] a ton of experiments of what does it look like to take an engineer that worked at Pivotal Labs for 13 years that understands software, has been writing software for 25 years, and inject that as just another tool in the tool belt to allow me to add user-facing value as quickly as possible. And with the emergence and accessibility of the AI assistants, what's the biggest thing that you're really optimistic about? Oh, man, there's a lot.

[00:13:42] I mean, AI autocomplete's kind of cool. I mean, we've had autocomplete for, you know, decades, where it's like, oh, I start typing and the machine, at some point, because programming languages are deterministic, knows what I'm going to do. It's like, okay, well, I'm starting a for loop. I need a semicolon here. Okay, cool. I really, I think that's the minimum barrier to entry. Like, it is a better autocomplete. It's just not autocompleting a line. It can autocomplete a whole block of code for me. So that's really cool

[00:14:12] because I'm typing less. I am not a great typist. I am not the fastest typist under the sun. That is not the value that I add. The value I add is encoding the business into it. So that's the first thing. Like, yep, give me that. Give me that. I've even started using it when I'm writing. So I turn on Copilot while I'm writing prose, because all we're doing when we're writing prose is determining what the next word is that I want to put on the page. An LLM does the same thing: what is the next token that I'm supposed to output here

[00:14:41] based on the probability in the English language that that is the next word that you're going to want to use? So I've even found, like, just writing. It's like, yep, yep, that's about where I want to go. Bam, bam, bam. Okay, cool. I'm done with six sentences. So it's another place I've been using it is just writing blog posts or articles and things like that because I suffer from blank page syndrome where if I'm staring at a page, how do I get started? I can just start typing, let the machine kind of guide me as like a co-author. And then if I don't like it, I'll just go back and edit it.

[00:15:11] I do that all the time. I'll, like, write paragraphs and then stop and then go back and, like, tweak stuff. So those are the two places I've been using it heavily that I'm like, if you're not doing this, like, time to start doing this. I love that you brought the prose in. It's similar for me. Essentially, as a post-technical person, I wrote a lot of code, and then I didn't write a lot of code for a long time. And then a lot of things changed along the way. The copilots have allowed me

[00:15:40] to roll up my sleeves again and feel a lot more empowered. I had the benefit of some of the engineers that I worked with at Integral pair programming with me and Copilot, and getting me through the initial friction, to where I feel pretty comfortable now using GitHub Copilot. And then prose, very similarly, I think, for articles I'm writing, or preparing for this even, the AI really helps me with finishing some of my thoughts.

[00:16:09] What it doesn't help me with is figuring out what I wanted to do, and why we had this conversation, and what I wanted to bring to the audience, right? So I think something else that we learned at Pivotal: in the earlier days, or the earlier phases of the product development, maybe closer to discovery, when things are a little messy, are kind of the more unstructured work. Just get all your thoughts out, on whether it's sticky notes or whatever else, and then start synthesizing from there, versus starting from the first line

[00:16:39] in the prose and the introduction of what you're starting with. And it sort of helps me with the first part, but I think a lot of the value comes downstream, once you've already formed an opinion around what kind of takeaway you want to have from the prose you're writing. And then, probably a similar analogy to the code, in terms of what you want the code to output. It doesn't help as much with the initial part. I'm curious if you have a reaction to that analogy. Totally. Totally.

[00:17:08] So I've actually, like I said, I've pushed all in on this. Like, at Pivotal Labs, you know, I talked about being dogmatic. So at Labs, we paired eight hours a day, we test drove everything. We were XP to the extreme, so extreme extreme programming. The reason we did that is because when you go into a client and they're at, let's say, zero, they've brought us in because they have a new dev team, or they want some sort of transformation, they want to learn how to do XP. You kind of have to take them to a hundred,

[00:17:37] knowing that they will backslide to somewhere between zero and a hundred, typically 50%. So if you only take them to 50, then they're going to backslide to, let's say, 30%. So I've done that with AI. I'm like, where can I apply AI? And to your point, I need to be critical of that: is this helping me, or am I doing this because it's cool? And I really liked your part about the generation piece.

[00:18:06] So I've started a Substack. I have a GitHub repo where I'm literally brain dumping all of these experiments into. I've actually started to use AI to help me generate playbooks. And to your point, I generate the outline, or I might have AI generate the outline. Like, go do me a bunch of research; research, you know, AI security for me. And Deep Research from Google, and unfortunately also Deep Research from OpenAI, like, this is a thing. They can go scour the internet,

[00:18:36] come back, and give me an outline. Cool, I'm going to edit the outline. So I'm going to go through and be like, hey, that works, that works, that's not what I was thinking, but oh wow, that's really cool, I never thought about that. Go through and edit the outline, and then hand that outline to an AI to go generate the prose for me, and then I go back and edit the prose. So I've elevated myself out of the writing, and I'm now, like, an editor. So newspapers have editors. There's a reason they have editors. And it allows me to move myself

[00:19:05] up the value chain, to not generating the code but being more of an editor. And I think that's a perfect example of this. And the same thing in code: I'm, you know, I'm now reviewing code bases, not necessarily typing them. So, for better or for worse, I mean, some people say it generates slop, but there is always a human in the loop. I am not just setting these things loose and having them publish Substack articles for me. They're generating ideas. I am then refining those ideas, maybe fact-checking some stuff, and then, you know,

[00:19:36] eventually putting my seal of approval on it and being like, okay, this is work that I have paired with an AI on, just like I would pair with a human.

Fostering an engaged product organization, and aligning them with the principles around lean, human-centered design and agile, will more than likely lead to successful business outcomes for your organization. But getting started, or getting unblocked, can be hard. This podcast is brought to you by the player coaches

[00:20:05] over at Integral. They help ambitious companies like you build amazing product teams and ship products in artificial intelligence, cloud, web, and mobile. Listeners to the podcast can head on over to integral.io/convergence and get a free Product Success Lab. During this session, the Integral team will facilitate a problem-solving exercise that gives you clarity and confidence to solve

[00:20:35] a product design or engineering problem. That's integral.io/convergence. Now, back to the show. One of the points specifically that I come across, you mentioned it as that "that's really cool, I never thought about that" moment. And I start to wonder at that point, like, does this continue to be something that I'm generating? Can I put my name on it or not? In part because I think there's an authenticity

[00:21:05] element. The other part, maybe more concerning for me at specific times, is if the AI is hallucinating. This is not something that is a product of my experience or instincts; this is something that's additive or augmentative. And I found myself ending up sort of doing more fact-checking or gut-checking around those specific points. And I'm curious if you've come across similar sort of friction points, or anything else, as you're writing and as you're creating content.

[00:21:36] This is one of the things I love with the movement of where Google is headed with Deep Research, and where OpenAI is headed with Deep Research. Unfortunately, I don't know why they picked the same name, but whatever. Perplexity is the same way. Like, Perplexity, they will give you references. So it is still on me to go read that paper at some point, to fact-check that. And, you know, in my, like, burgeoning studies of LLMs: LLMs will only, I won't say only, LLMs tend to hallucinate when the context

[00:22:06] window gets too big, when you give them too much information, or you ask them to generate from memory. You say, I'm not going to give you anything, I want you to generate based on training data. Well, guess what? Humans do the same thing. I make stuff up all the time. So I think it is still on us to fact-check this stuff. I am not turning my whole life over to AI. But it just eliminates some of the, you know, some will say the creative parts of this.

[00:22:35] Like, if I were writing fiction, I wouldn't be doing this. But I am writing technical playbooks that tend to be short, are very, like, do step one, do step two, do step three, are very verifiable. You know, another problem we have with AI is people say it hallucinates and it makes things up. But in my world, it's deterministic: the tests either pass or they don't. The, you know, the thing that it wrote either works or it doesn't. And I'm going through those steps. So I think it's a both-and. I use this analogy with people where AI

[00:23:05] is just a tool. It's a screwdriver. You can fix things with a screwdriver, and you can kill people with a screwdriver. Like, it is, you know, it is just another tool that we can use, and it's how we use those tools to better society, you know, for some definition of better. But that doesn't mean we can just turn it over to the machines and let them go wild. I mean, AI slop is the new term these days, where these things are generating what they call slop, and it is kind of low quality.

[00:23:35] And, you know, but humans can still be in the loop and can still add value. It's just on the human to take the time to do that. Now, the big question that comes up is, is it saving me time? You know, a lot of people say, well, does this save you time? I was like, well, I don't measure it to begin with, so I don't know what it takes me to write an article. It takes me X period of time to write an article; now with AI, did it save me time? I don't know, because I wasn't measuring to begin with. Do I feel more productive? Yes. Am I getting stuff out that I

[00:24:05] probably. There are three things that I haven't fully formed into an opinion on, but I feel very optimistic about personally, and you hit on one of them, which I think is iteration. And maybe you're putting out more articles on your Substack than you would have otherwise, if you had to write them from scratch. But I think that

[00:24:35] article is a final product of something that, it's increased the number of iterations without confrontation. If you and I were pair programming, or if we were pair writing an

[00:25:05] article, and there wasn't something I particularly liked or agreed with, there's some level of investment now in putting into communicating effectively, additional effort into making sure the communication is also kind, and received in a way that will get the most collaborative response out of you. And an AI doesn't necessarily have the same kind of feelings I have to worry about. I can use shorthand a lot of the time in terms

[00:25:35] of providing feedback, which would be considered rude in most countries. And then the other thing, I think, is the non-determinism. Growing up as engineers, we are unfortunately sort of brainwashed into thinking that this is a very deterministic world. And then one of the things I really loved at Pivotal is it really changed the mindset around that: actually, it's very non-deterministic, and we're going to build in

[00:26:05] resiliency, either from a product standpoint, through understanding your customers better, or from a systems and engineering standpoint of, hey, if we're wrong, the cost of change to get it closer to right, based on new information, is going to be minimal, because we've architected our teams and our systems in a way that, I can again go back to iterating, but the underlying assumption is things are non-deterministic. And I think talking to an AI that's non-deterministic has helped me also be

[00:26:35] more articulate in my communication to other humans. And then the third one is, I think it forces us to design better, and a little bit maybe more upfront design than before, because there are fewer assumptions that you think you can make, that the other person, who typically as a pair programmer would be a similarly smart person who understands a lot of the content and has the same morality, that now you no longer have. So,

[00:27:05] between the iterations, non-determinism, and forcing me to design better, I think it helps me write better code, write better content. I'm curious what your experience might be on any of those. Yeah, I mean, it's a skill. Prompting an LLM is a skill, but prompting a human is a skill too. And I think, you know, I've had this thought recently: I paired in person next to a human for many, many years, starting in 2010. Was I good at it

[00:27:35] to begin with? No, but I got constant feedback on how I could get better. Prompting an LLM is the same thing. It's just, it's a bit different. And I think, for me, I'm struggling with the mental dissonance where I'm not sitting next to a human. I'm talking to, literally, something that is predicting the next token that's supposed to come out of my mouth based on a probability of human language. So yes, I agree with you. I have also felt, as much as I love pair programming, there are times when my pair and I

[00:28:04] disagree and we're both right. You know, there's no one right way to write code; as long as the tests pass, we're good to go. So it reduces that friction a little bit. Sometimes that friction is good; sometimes that friction gets in the way. So as someone who's been doing code for long, I'm building such that I know that I have to add my own friction, or I have to, like, consciously ask myself, why are we doing this, or ask the LLM why we're doing

[00:28:34] this. So I really like that. I think the iteration is great, but there is friction in the iteration. The non-determinism: humans are non-deterministic. You know, you and I might have the same thought in our heads. My partner and I do this all the time: with the same thought in our heads, I express it, she would have used different language, and we have to come together and say, okay, you said this, I heard this, like, did you mean this? Like, okay, cool. So there's no right way to write the English language. So there's this non-determinism:

[00:29:04] is it both a pro and a con? Sometimes, you know, it manifests as creativity; sometimes it manifests as harshness. It depends. And I think this is the big thing I've been thinking about lately, and pushing back a little bit on LinkedIn. I've kind of gotten into some LinkedIn wars with some folks. I posted a couple weeks ago: everybody that talks negatively about LLMs, I really wish they would say, this is what humans do, just like that,

[00:29:35] and note that the LLM is simply a reflection of humanity that has been digitized, to probabilistically tell us how we would react as a species. So, you know, I think we should think critically about LLMs, but we should also ask ourselves, like, will LLMs generate the wrong code 40% of the time? How often do humans generate the wrong code? And as long as the LLM is doing a better job than I'm doing, on a very base level, a very non-humanistic,

[00:30:05] non-emotional level, it is doing better, and I should adopt that tool. So I think it's both a feature and a bug, and I think we have to start asking ourselves, like, critically looking at humans: could a human do this better? If not, why not let the machines do it? And that opens up all kinds of other societal problems, with what do we do with all the humans. And then, I think, what was the last one? I've already lost track of what your third one was, and

[00:30:58] it. And over here, I feel a lot more risky around those assumptions being made. And so I find myself writing one or two more layers of context into the requirements that I'm putting in, or the input, or the prompt, than I might have if you and I were pairing on an article or some code. And then that one or two more lines, it's easier said than done, because it takes me, like, a Pareto principle, 80-20 amount of thought, of things that I hadn't done before.

[00:31:28] So designing better was the third one. I think you're right. Agile processes: we want to get to working code as quickly as possible, and we'll talk about code for a second. So one of the reasons we test drive code is that I want to get to working code as quickly as possible; then I want to make changes. But in that case, when a human is generating code, the generation of the code sometimes can be the rate-limiting process, or there's a cost to generating that code.
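The red-green loop Mike is describing can be sketched in miniature; the slugify function and its behavior below are invented purely for illustration:

```python
# Step 1 (red): encode the desired behavior as a test before any implementation
# exists. The test, not the typing, is where the design thinking happens.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2 (green): the smallest implementation that passes. Whether a human or
# an LLM produced this body, the test above defines what "working" means.
def slugify(title):
    # Lowercase, trim surrounding whitespace, join words with hyphens.
    return "-".join(title.strip().lower().split())

test_slugify()  # red has turned green; now we are free to refactor or regenerate
```

The test also makes regeneration cheap: if a model produces a different body, the same test decides instantly whether the new attempt works.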

[00:31:58] so I it but let's say the machine can generate it in one I can now take five to think about this I can think more deeply I can think about edge cases I might write a test or two I might design some things up front and feed that to the LLM to give

[00:32:28] it more context to better read my mind this is another thing that happens with humans I have it in those two words could output what you wanted but you talk to another human that doesn't have that same context in their head or share those same principles those two words get you nothing they stare at you blankly and they're like what are you talking about so now you've got to

[00:32:58] go into that iterative loop again of becomes very interesting and I'm not talking about one-shotting like Tetris apps like the videos most of the videos that are going out like watch me one-shot snake again for the 55th time using a different model I'm talking about formal methods I

[00:34:09] reading it it it allows us to take some more time to design up front and it's not big up front design it's still agile like help it move but we are thinking critically about the problems we're solving not backloading that stuff

[00:34:40] the LLMs oftentimes will help you with defining the problem too and in my experience I've taken it takes a little bit more of that iteration or back and forth on that side and once you get it into a

[00:35:11] code code generation only takes me one I could try it five times in the time it would take me to type it so the cost of generating encoding that idea into code goes down so I find myself doing this a lot a lot of people will complain of like oh the LLM doesn't get it right the first time great do a git reset and just ask it again and see what happens because that non-determinism kicks in is this the time it's going to get it right oh is there feedback of why it didn't get it right that I can refine the prompt and

[00:35:41] give it a second time and maybe it gets it right then oh it didn't get it right great git reset okay let's start over I mean because that cost of the actual encoding in the software I think is going down that's my hypothesis that it's going down you know I
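The generate, test, git reset loop Mike describes could be sketched as a small harness. This is a hypothetical illustration, not any particular tool's API: `generate`, `run_tests`, and `reset` are stand-in callables for your LLM assistant, your test runner, and a `git reset --hard`.

```python
# A sketch of the "git reset and ask again" loop: keep regenerating until
# the tests go green, discarding the working tree between attempts and
# folding test feedback back into the prompt.
from typing import Callable, Optional

def retry_generation(
    generate: Callable[[str], None],            # ask the LLM to produce code for `prompt`
    run_tests: Callable[[], tuple[bool, str]],  # returns (passed, failure feedback)
    reset: Callable[[], None],                  # e.g. git reset --hard to discard the attempt
    prompt: str,
    max_attempts: int = 5,
) -> Optional[int]:
    """Return the 1-based attempt number that went green, or None if all failed."""
    for attempt in range(1, max_attempts + 1):
        generate(prompt)
        passed, feedback = run_tests()
        if passed:
            return attempt
        # refine the prompt with the failure, then throw the code away
        prompt = f"{prompt}\n\nPrevious attempt failed:\n{feedback}"
        reset()
    return None
```

Because generation is non-deterministic, each retry is a fresh draw; the only state carried forward is the enriched prompt.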

[00:36:17] forget I think it's called Tessl they took a bunch of money venture money and I watched a talk from their CEO and he said what would a world look like where we didn't have source code where all we had was specifications for the software that we want to see and we could regenerate that code whenever we wanted whenever a security vulnerability came up whenever a library got updated so just like

[00:36:47] infrastructure no longer do we log into machines fiddle with a bunch of bits to get like SSH working when it breaks we just destroy the machine rerun the scripts we have which build our specifications for that machine and we go about our business what would a world look like and this is probably way down the road unless you're Mark Zuckerberg and then it's like in six weeks what would

[00:37:17] I mean that's a pretty cool place I mean granted again societal problems it eliminates a bunch of jobs like there's whole you know I'm not saying we're there yet but it's pretty interesting because again it it focuses on why we're here writing software we're here to add value and if we

[00:37:47] exist to get the idea that's in my head encoded in a mechanism that the machine can understand one of my co-workers at Rdm took offense to the term waste product which sure it's words but let's think about that let's not throw the baby out with the bath water why do we write source code to get the machine to do what we want I can just move myself up

[00:38:17] I got a chance to look at that article as I was prepping for this call and loved it so much I've pissed off a few folks in the past before the AI version of this conversation but did tell folks that hey like in my world the perfect software has no code because there's some magical way in which the customer gets value without this liability that's introduced along the way and you

[00:38:47] if we look at it from a lean definition of waste it's anything that's being done along the way that the customer doesn't get value from and putting software into the sort of waste or liability bucket versus the asset bucket I think even before AI has just a

[00:39:17] creates something that as soon as you drive off the lot or put into production loses 20% of its value and so you start to be attached to it a lot less I think and then the cost of change also I think as a system tends to go down if you think about it

[00:39:47] the same way when we go in we have to clean it up libraries have to be updated security vulnerabilities have to be patched bugs have to be patched so there's I love that because it is a liability it's a liability first but it takes a mental shift a lot of people see it as an asset if you're looking at it as an asset then it's hard to say I want to minimize this but once you look at it as a liability you want to minimize it so you want to be more efficient you want to have less of it

[00:40:16] and there is a cost to your organization whether or not you want to justify it or not so it's been always in the back of my head but this AI thing amplifies that because now I have this other way of generating value it's not me sitting typing at a computer anymore and I think it also frames a lot of the whether it's 12 factor or even looking at XP in a lot of the practices a lot of the things we're doing is essentially

[00:40:46] one of two I think how do we increase the lifespan or the shelf life of this line of code for as long as possible because we've thought about what problem it's solving and we've thought about it in the most first principle way or the most foundational way in the lens of the customer and the business and the second thing is when that eventually no longer holds true how do we minimize the cost of change when we need to update it to the next one right those are probably two things that I

[00:41:16] think I would always look back at in terms of whether the value of these practices and disciplines that did carry its own cost were in service of or taking away from more XP practices things and want to hear about your thoughts on how that might or may not change here we talked about pair programming a lot so let's talk

[00:41:46] about pairing has changed a lot since when we were doing it in person and since 2020 a lot of us are not in person anymore let alone the fact that we've got co-pilots that arguably could be like a pair so let's hear where you've come up with through experimentation on the new pairing paradigm yeah so this is so I took 18 months off of work I quit my job at VMware in 2023

[00:42:15] took 18 months off to focus on some non-profit work here in Chattanooga I was just burnt out I was done I needed to take some time I'm a huge mountain biker so I pushed all in on that took 18 months off came back late started looking for work June of 2024 finally started landing some stuff late 2024

[00:42:46] it's a different world so you know as you pointed out we are no longer pairing in person I loved pair programming in person you can't I don't think you can get higher bandwidth feedback than in-person pairing did some remote pairing interviews the tech is getting better but there's just this lag that bugs me and it drives my brain nuts because it picks up on it and it's been really hard so I've gone through some interviews so I'm struggling with remote pair programming I still

[00:43:16] do it but it doesn't give me the same feeling that I was doing it in person so this is what kind of prompted this push all in on AI like how close can I get how asymptotically can I approach this in-person thing that I've done in the past that unless I go back to in person probably at pivotal will never exist again because pairing is hard humans are different so like what does it look like okay so what do we get from pairing so

[00:43:46] we talked earlier about the practices is pairing but why do we pair fast feedback loops you can't get a faster feedback loop than humans sitting next to you so how can I get fast feedback loops and I'm having to go back to first principles to remind myself of why I do this and because I'm experimenting and I've made the assumption that AI is the solution because I have to make an assumption somewhere I got to push all in can I reproduce that and how

[00:44:16] close can I get I'm an engineer by trade so everything has a cost everything has a benefit if the cost outweighs the benefit then you shouldn't do it I wish more people would adopt that mindset sometimes and that's hard after 25 years it forces me to be my own pair I mean rubber

[00:44:46] ducking is a thing we have a whole term in our industry called rubber ducking where you set a rubber duck on your desk and you talk to it as your pair well why not just add something that's actually going to speak back to you so it's getting there it's not perfect do I screw stuff up all the time yes do I still like talking to humans yes am I more selective when I talk to humans do I seek out human feedback more selectively now yes so yeah pairing I mean I think there is something there

[00:45:16] I'm not sure a pair will an AI pair will ever get to the point where it's like hey we've been going down this rabbit hole for 60 minutes I don't know I can overcome those limitations by setting myself a timer so I can fill in some of that stuff something that I gave up relatively early on and you may have had a little bit more experience with was around using some of the

[00:45:46] accessibility features and I just did the MacBook one which compared to an app like Speechify isn't really the I haven't

[00:46:15] there's because I haven't gotten to that point there's so many other things that I'm working on where I haven't done speech to text there's a tool I use called Aider so it's one of the AI assistants that I really like it's all command line based they do have a speech to text that I haven't tried out but no I haven't really dove into that but why not like the models will do that some people I think it's going to be individual some people like to hear that like some people like talking to Siri all the time you know some people don't

[00:46:45] like talking to Siri some people process speech better some people process text better so I think the upside is it gives us the ability to it's a both and or both or like you know everybody can figure that out for themselves folks who are more natural or think about software or think about even products

[00:47:14] from a testing perspective will likely be more adept or enjoy working with an LLM more and I know that's a rabbit hole loaded question because there's different styles of testing different styles of one of the things I have been doing back to this first principles idea is going deep on test

[00:47:44] driving like going back and finding Kent Beck videos and trying to figure out what is test driving again because there's an article video I'm consuming so much content at this point I don't remember where I even read it but there is this movement and I'm guilty of this as well is when we're test driving code we write a test we make it green you know red green refactor I make it green when I'm all green I

[00:48:14] refactor and keep the test green test driving sometimes gets a bad rap when I do a refactoring and I extract out something like a function or another class I go and write tests for that what does that do that solidifies my architecture so one class one test is an anti-pattern in testing and that I've literally found a bunch of videos I think it was Ian Cooper I watched one where he was talking about this so I've gone back to that like what is good

[00:48:44] test driving which is a whole other thing god we could spend hours talking about that and probably arguing about half of it LLMs like feedback they are trained with reinforcement learning so per another article TDD and testing in general is a huge return on investment in this
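The "one class, one test" anti-pattern Mike calls out can be shown in a few lines. This Python sketch is a hypothetical example, not code from the episode: the public behavior gets a test, while the helper extracted during refactoring deliberately does not.

```python
# Test the behavior at the public seam; the extracted helper stays an
# implementation detail that can be inlined or renamed without breaking tests.

def _normalize(name: str) -> str:
    # extracted during refactoring -- no dedicated test, free to change
    return name.strip().lower()

def greet(name: str) -> str:
    # the public behavior we test-drove
    return f"hello, {_normalize(name)}"

def test_greet_ignores_case_and_whitespace():
    assert greet("  Alice ") == "hello, alice"
    # note: no test_normalize() -- pinning the helper with its own test
    # would solidify the architecture, which is the anti-pattern
```

Writing a `test_normalize` here would make the next refactoring harder for no extra confidence in behavior.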

[00:49:13] world because what would it look like if I could write an acceptance test some basic types let's say I'm writing in Kotlin or I'm writing in TypeScript back to our discussion earlier about some upfront design I might know what function I'm writing I might know what the return type is I might know what the parameters are I can write out that skeleton of a method in types it's called type driven development it's actually really cool I might write some unit tests but I'm

[00:49:43] giving the LLM feedback and I'm like just go make this all green and the LLM is really good at that and back to our conversation earlier if it goes and tries five times to get the tests green fine we might have the LLM write the tests but if we go back to why we test drive test driven development is small steps to allow the human brain to iteratively create a solution for a problem that
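The skeleton-in-types workflow Mike describes might look like this. He mentions Kotlin and TypeScript; this Python sketch uses a hypothetical `median` function, where the human writes the typed signature and the tests, and the body marked below is what the model would be asked to fill in.

```python
# Up-front design as types plus tests: the human writes the signature and
# the feedback signal, then hands "make this all green" to the LLM.

def median(xs: list[float]) -> float:
    """Return the median of a non-empty list."""
    # --- body below is what the LLM would generate against the tests ---
    ordered = sorted(xs)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# The human-authored tests the model must satisfy:
def test_median():
    assert median([3.0, 1.0, 2.0]) == 2.0
    assert median([4.0, 1.0, 2.0, 3.0]) == 2.5
```

The types and tests together are the context that lets the model "read your mind" instead of guessing.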

[00:50:13] might be too complex to one shot and when I say one shot just spit out of my head fully formed so we baby step our way to this with LLMs I still do think it has value I think outside in TDD where you write the user facing tests adds more value because now I'm operating back to that immutable software structure if I know what the system should do from the user

[00:50:43] standpoint the rest of that is immaterial even software architecture is immaterial at that point if a human never has to change that code why does software architecture exist so we can fit enough context in our brains which can hold on to what is it like seven plus or minus two pieces of information architecture exists so we can fit it into our brains so we can reason about it but Gemini has a 2 million token context window I don't know what the human brain context window is in tokens

[00:51:13] but I context it needs and back to your comment about design what design can I do up front writing types as a design writing tests as a design I think will change and I've been liking the outside in stuff just because it gets me closer to that I don't care how this is written I I've been making the shift I want to define

[00:51:44] what are we building why is it important and I want to turn over the how to the machine I don't care how it's built I don't care if it's JavaScript I don't care if it's Python I don't care if it's TypeScript I don't care if it's Ruby people will say well what about performance requirements great write a test for it say it should perform 6000 loops in
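Mike's "write a test for it" answer to performance requirements can be made executable. The 6000-loops figure comes from the conversation; the workload and the one-second budget here are invented placeholders.

```python
# Encoding a performance requirement as a test: if the machine's "how"
# regresses below the bar, this fails, regardless of what language or
# implementation the LLM chose. The 1.0s budget is a made-up threshold.
import time

def process(n: int) -> int:
    # stand-in for whatever the machine generated
    return sum(i * i for i in range(n))

def test_performs_6000_loops_fast_enough():
    start = time.perf_counter()
    for _ in range(6000):
        process(100)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"took {elapsed:.3f}s, budget is 1.0s"
```

The requirement lives with the other tests, so regenerating the code from scratch re-checks the "what" and the performance bar in one run.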

[00:52:18] how and maybe some of the why the machine is filling in the details on the how I think I've been biased towards outside in testing I rationalized it is because I've transitioned from working in software engineering to working in product management and then went into sales and entrepreneurship and everything else and articulating the business problem that we're trying to solve in return for an investment whether or not there was tech even involved in the form of

[00:52:47] a hypothesis or in the form of trying to establish the success criteria as a communication mode as a design forcing function and everything else kind of just fit really well into the way my brain worked and then having that dovetail into how what the engineering teams should prioritize and how they should architect as a function of business tests just worked really well there's risks and trade-offs there and I sometimes wondered if that bias told me that pushed me

[00:53:17] closer towards thinking that folks who kind of bias like I do towards outside in testing might do better with AI assisted software development and better is a loaded term but if we just kind of put an asterisk on it for a second that sound was that sort of line the lines of what you said around outside in may play a better part of using AI I mean I'll own my

[00:54:33] getting into the guts of Kubernetes and fiddling the bits you think a different way are you wrong no I don't know I mean I tend to bias that way my favorite haiku was one Onsi from Cloud Foundry came up with here is my source code run it on the cloud for me I do not care how I heard that and I was like we're done here like I don't care what this thing is doing just like get this thing running for me I think we will be forced to go that way

[00:55:03] as engineers of the future let's play this thought experiment out let's say the do we do with all those people that don't focus on the product

[00:55:33] that are going to become more obsolete more quickly because if you look at the world we are operating in you know Pivotal Labs started in a world where money was very cheap interest rates were almost zero venture capitalists were doling out money like it was going out of style we took advantage of that interest rates are 4.5% right now many economists are saying we will never go back to the world of free

[00:56:02] money so money is expensive now which is trickling up the chain of how do I do more with less of that people are going to have less patience for they're going to want to see a return on their investment and that investment if we make that

[00:56:38] they're not seeing the return on investment and I think that's a lens we're going to see a lot of so that has entered into my thought process as I start to think about this it's all about return on investment and it sucks because it's non-humanistic but we exist in a capitalist society this is the world we live in we can't just stick our head in the sand and pretend it doesn't exist so I think it will get interesting I think those that embrace this product mentality will hopefully be better off than those

[00:57:08] that aren't maybe that's just selection bias maybe I'm just wishing this into existence but I think that's where we'll end up I tend to agree with you and I have a little bit of an addendum I think and you gave me a great segue at the end around the product thinking and I certainly agree with you it's a good separation of archetypes amongst engineers the folks who are working back from the customer value and then the folks who get a little bit closer to solving the technical

[00:57:38] problems without the outside context right and while there's always going to be folks that are left behind I think the separation for me is going to be there's folks who are going to use the LLMs to create customer value faster create more customer value help more customers and then those folks are also customers those engineers that need these LLMs to continue iterating and getting better and making them more usable so they can do

[00:58:08] that and similar to the folks who would build Cloud Foundry I think that's maybe the separation where there's going to be another set of engineers who built better LLM tools and better enablers and assistants so that the folks who are closer to the customer can do this and that's where I arrived and I wonder if that's butting up against or augmentative to what your vision of the future might be now I think we're going to see a collapse of the balanced team

[00:58:37] I read an article and everybody's talking about this now so there's a lot of ideas being generated so I've been trying to consume all of them just to you know there's a lot going on here this whole idea of personal software has come up a lot in the threads that I'm reading where you know I want something that solves my needs as a software engineer I can do that now what does it look like when we have other people who are non software engineers being able to do that you know we talk

[00:59:07] about balanced teams at pivotal so product design and engineering I suck at design if I design you a so I think we'll see a collapse of that and engineers that have that product

[00:59:37] mentality are going to start to move up the chain product owners are either going to need to move down the chain towards the tech or kind of be encroached upon I think there's a lot here and I think there's a lot of people talking about it and a lot of people doing it I think the biggest thing is what works for you if you're in a balanced team are you giving your product owners more AI tools to do this stuff so I don't know there's a lot of

[01:00:07] what ifs here and especially the higher we get into those abstractions there's a lot more what ifs because it's not concrete I certainly feel lucky I agree that the balance team collapse or at least compress I me designing your application or architecting it I've been close enough to understanding the problems that the designers had that the engineers had and

[01:00:36] the iterations that we went through and the really good balance teams that we'd be debating arguing going and conducting additional research whether it's a technical spike or finding more things about a business model or customer needs and getting to be in that iteration where we were fully autonomous albeit as a team and not as myself I think has given me a much better mental model in terms of the different pieces that come together to ultimately surface customer value through

[01:01:06] a technical product and while that looks different I think folks who've gotten to understand all the three or how many ever legs of the stool are going to be able to harness the LLMs to compensate for where they may not be particularly passionate or skilled and then really shine a light on their skills and passions a lot more I think it's going to be interesting too because I think time to working prototype has gone way down if we

[01:01:49] been a foreign world to me I really appreciate those folks and the work they do but I like your idea of if you've been exposed to this you have enough understanding of it that all you need is some help getting across the finish line so I think the v0s and the Bolts of the world are giving me the opportunity to be like okay just design me a UI I don't want to write JavaScript I don't want to write HTML I don't want CSS

[01:02:30] so I fast feedback loop into running software so we're compressing that timeline from idea to running software and I think it's going to squeeze a lot of things in really different ways and this is why I love experimenting it's like what if I try this so there's a lot going on

[01:03:07] I product because product just got in the way but those of us that have been in those environments where it's a healthy environment see like wow there's value there and I can figure out how to squeeze the value out of that because I break down problems into smaller problems and I solve those problems so it's going to be interesting a lot of people are going to start feeling squeezed 100% and I think minimum time to prototype we're going to transition from sort of the InVision clickable prototypes that gave us a lot of feedback that helped us with maybe

[01:03:36] fundraising in the VC world and everything else to now having like working software where the front end is hitting a real database and you performing real actions on this website or whatever at what point do you think that starts to push against the boundaries of what we're capable of today in early 2025 help me understand when you be what you mean by push against the boundaries maybe this is leading the jury I imagine as we start to

[01:04:07] integrate with more legacy back end systems like ultimately like larger companies have something sitting behind a mainframe and some middleware as we start to go beyond maybe an app that can be loaded on one or two phones to something that's going into more mass production those are things where I find that really well architected apps tend to fail less and also can be more resilient when we hit up against the boundary and we need to push an update to production and we

[01:04:37] can do it relatively quickly right those are the areas where my hunches tell me generated applications are going to find a boundary and I'm wondering if you agree with those or forget about my examples where do you think the humans will play a bigger part when we

[01:05:17] it will force us to slow down at certain points it will force us to slow down at certain points so again back to the it takes 10 units to go from idea to running software let's say not just running software but production ready running software well

[01:05:47] money based on this prototype and it only took me one unit where it might have taken me four units to go from that with Figma diagrams or a designer who wasn't fluent in generating code running code with a tool well now I can take two and productionize it so it's going to force us which we're not doing now we're not building in time to take the prototypes that we're

[01:06:17] working and it's going to take hopefully it frees up those two units of work to say okay now is the time we stop we've gotten our money now we have three sprints on productionizing this thing and you know there are pivots in the out there listening to this they're going to throw up their hands and be like well we don't do that stuff you know but we need to and

[01:06:48] we're not just evolving user facing software so I think it will change that whether or not we stop and intentionally build in the time to do that will be the turning point for companies that do and companies that don't the companies that do will succeed the companies that don't will fall over in production and is no different than right now so I think there was just an article a paper that came out today that I saw

[01:07:19] I think it's called GitClear they've been doing some state of the code generation report for the past couple years mostly tracking AI generated code and I haven't read it more so churn in code is when a line of code changes and how often it changes the theory behind churn says if I

[01:07:49] write an abstraction that is stable I never have to change that code again unless there's a change in requirements if I see a piece of code that is high churn I stuff I have to change is off in the drawer somewhere where I can deal with it in isolation from the rest of my code base

[01:08:19] this is the one thing that does concern me and is the refactoring step more important and that might be refactoring for throughput it might be refactoring for human readability

[01:08:49] does that step become more important those that skip that run the risk of falling off the cliff because yeah they've used AI to generate a bunch of code but it's still a big pile of spaghetti and they've got more spaghetti now and they're literally choking on it the upside of that being seeing that pattern rearranging that pattern and creating a new pattern my hypothesis is LLMs are really good at pattern

[01:09:19] matching what would it look like for me to teach a model to refactor code for me because I have tests I know it's a true refactoring I know it doesn't break functionality but I can go to sleep at night have this thing rip through my code base go match all the patterns from Martin Fowler's amazing book and just go do that work in the background for me and because I have tests and it's a pure refactoring it didn't change any behavior I wake up in the morning I'm like yep those six pull

[01:09:49] requests look good off we
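The safety net Mike relies on for overnight refactoring — "I have tests, I know it's a true refactoring" — is just a behavior-preservation check. Both versions below and the test are hypothetical illustrations of one entry from Fowler's catalog, Extract Function.

```python
# A refactoring is safe to automate when behavior is pinned by tests:
# the extracted-function version must agree with the original everywhere
# the tests look.

def total_before(prices: list[float]) -> float:
    return sum(p * 1.07 for p in prices)  # tax rate inlined

def _with_tax(price: float, rate: float = 0.07) -> float:
    return price * (1 + rate)

def total_after(prices: list[float]) -> float:
    # Extract Function applied; behavior unchanged
    return sum(_with_tax(p) for p in prices)

def test_refactoring_preserved_behavior():
    for prices in ([], [10.0], [1.0, 2.0, 3.5]):
        assert abs(total_before(prices) - total_after(prices)) < 1e-9
```

If an agent ran this transformation overnight, a green test suite is what lets you glance at the pull request in the morning and merge.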

[01:10:36] and be more successful because they're automating these rudimentary steps GitHub Dependabot is a perfect example of this it used to be that we used to have to go find dependency upgrades manually in our code bases well guess what GitHub wrote Dependabot and now once a week there's something that goes out to the internet tells me what libraries have been upgraded and does the pull request for me what does it look like to wake up in the morning and look at six

[01:11:20] take away for me is it amplifies the practices that I believe develop high quality software if you are an organization that doesn't have those practices tread carefully because I believe they will amplify all of the things that may be causing organizational strife and just make it worse for folks who are leading pretty big teams maybe big enterprises that are extra worried about some of the risks of AI

[01:11:49] I have talked to some folks that it seems like there's for lack of a better term some navel gazing happening where folks are hemming and hawing or postulating versus doing and acting is there one or two pieces of advice you'd want to yell from a loudspeaker to them about AI assisted code I think one is bias for action I mean navel gazing is an inability to pick an experiment to run and run the experiment and

[01:12:19] learn from it so bias towards action come up with a hypothesis this is scientific method stuff this is Newtonian physics stuff none of this is new I have a hypothesis that if I do this this is the outcome okay cool now go run it things are changing so quickly models are coming out at an alarming rate the DeepSeek drop shook everybody

[01:12:48] up because OpenAI thought they were way ahead in reasoning models and DeepSeek drops R1 and everybody was like whoa okay now we've got o1-like models for free so the big one is just run the experiments find folks who hold the principles that your company holds dear that you can trust give them six weeks give them a

[01:13:18] very small like go run this experiment very well formed here's my hypothesis here's what I assume it will do and then as soon as it doesn't do that start over again security is a thing this is another area that I'm still poking at a little bit I haven't really gotten too deep in it so I run I currently run every time I'm using an LLM to do development I run it inside of a dev container

[01:13:50] my hypothesis is that I can just let it generate code and if I'm letting it run commands on my machine that I run the risk of it generating a Bitcoin miner like if I'm not paying close attention so I just run dev containers so cool like if that machine gets infected I have read about prompt injection

[01:14:19] there is this thing in LLMs where you can craft a malicious prompt and have it actually do data extraction where it will generate an image tag in the output that contains encoded information on the back end that you can send somewhere so I don't know about security you definitely want to run it to be insecure go talk to somebody
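The image-tag exfiltration trick Mike describes — injected instructions make the model emit a markdown image whose URL smuggles encoded data to an attacker when rendered — suggests one simple mitigation: strip or allow-list image URLs in LLM output before rendering. This is a minimal sketch under assumptions (the regex and the allow-list host are invented), not a vetted defense.

```python
# Strip markdown image tags whose URL points outside an allow-list,
# defusing the "render an image, leak the data in its query string" channel.
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"example.com"}  # hypothetical allow-list

_IMG = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    def repl(m: re.Match) -> str:
        host = urlparse(m.group(1)).hostname or ""
        return m.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return _IMG.sub(repl, markdown)
```

As Mike says, this is one narrow channel; for anything that matters, talk to someone who does security for a living rather than relying on a filter like this.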

[01:14:49] who knows what they're talking about and get an answer don't just fear-uncertainty-and-doubt yourself into the corner the last one I have been reading about is legality so depending on who you talk to these LLMs were trained on source code the licenses of that source code are questionable at best sometimes we do not have legal precedent that says if that

[01:15:19] if statement if that for loop showed up in licensed code somewhere am I liable if that same for loop shows up in my code A I don't know how they'd figure that out B what's the legal precedent of but I think

[01:15:49] the big one is just go try it and don't that is not open ended come back in six months this is like here's three weeks here's a small project go figure this out your goal is to inform the direction iteratively get feedback and inform the direction of the organization which is no different than any other change you'd make organizationally I love it as we wrap up here get to know you

[01:16:30] Spanish road motorcycles in the Himalayas and a bunch of cool things and I always came back well energized and I'm curious if you had advice for someone that was considering taking some time off what might that be first I want to acknowledge the privilege to have been able to have done that so I am 52 years old I have been in tech for 25 years I do not have kids my

[01:17:00] partner and I have been together for five years now I have some privilege like I want to acknowledge that so I was able to save some money and do this not everybody has that privilege I think the thing for me was to to

[01:17:31] not have to schedule anything I didn't have to get on his calendar it's a very different company than what I joined 13 years previously so for me it was a mental health thing I just needed to do this yeah so I took 18 months off and saved up a bunch of money and the thing I did was I knew what my goal was for the sabbatical so my goal for the sabbatical was to in my time away before my

[01:18:01] funding expired, to hire an executive director for our local mountain biking organization down here in Chattanooga. In 20 years, this organization has either advocated for or built 160 miles of trail within 30 minutes of downtown Chattanooga. We are, humbly, one of the premier riding destinations in the Southeast. The name of the organization is SORBA Chattanooga. Cool. So I'm biased because I moved down here, but it was time for us to take the next step.

[01:18:31] We needed a human who was getting paid to do this work. You know, we estimate 10 million dollars in economic impact to our county here in Tennessee. But I had a job, and I was like, I have to get this done. It wasn't about the money; it was about getting this done for me and my community. Selfishly, I like good trails, and I wanted someone

[01:19:01] to take care of those trails, but it also benefits my community. So that's the key: having the privilege, saving up for it, and having a reason for it, knowing what am I hoping to get out of this. A question we ask our guests here is: is there a favorite team out there that you have? This could be a fictional team or a real team; it could be one that you've been on or one that you've heard about. Team dynamics and organizational behavior

[01:19:30] have been key parts of how we've been really successful at developing software, and I'm curious if there's anything that you look to, remember, or aspire around. I'm going to show my age here: it's got to be The A-Team. I mean, there's a bunch of dudes driving around in a sweet van helping people, but a bunch of crackpots. You're like, really? They're going to get this done? And I say that tongue in cheek.

[01:20:01] They were helping people, getting them where they needed to be. Hannibal was the mastermind behind it all, B.A. was the muscle. So I think that's one of them, just because that's part of my childhood. I've been on a lot of teams, and each one

[01:20:38] is, we still delivered software, we were still doing things, and it forced us to work together. I think that teamwork piece is key: how do we all work together toward a common goal? Because we're all just trying to do that. So yeah, The A-Team, and the many software teams that I've worked on that had good teamwork. Awesome. We talked about the shift towards customer value and building better products, and

[01:21:10] whether it's a product, a service, or an experience that just totally blew your socks off, whether for work or at home? That's a great question. I tend to be a minimalist. I am also of the buy-once, cry-once mentality, so as a mountain biker, I do like expensive bicycles. I'm going to have to say it's my latest bicycle. So I ride

[01:21:40] not the newest generation but the previous generation of Ripmos. It's my second Ripmo. They just work; I just really like them. It's a great bike, and I do like the lines on it.

[01:22:10] And my partner is like, we need to get you some new slippers. So I found these Austrian slippers on the internet; I used ChatGPT or Claude to find them for me. They're the bomb. Working from home, it's cool down here in the Southeast in the winter, so I'm going to go with my slippers. I don't remember who makes them, but they're kind of the bomb, and they'll

[01:22:40] last me probably 10 to 15 years. So, my slippers. We'll get a link to the slippers from you and put it in the show notes for folks. Lastly, it seems like you're doing so much work and so much experimentation, and also just reading so much, and I know you put out a lot of synthesis of your opinions. What's the best way for folks to follow your work, and how do you like to be gotten a hold of? LinkedIn is the best way to get a hold of me.

[01:23:11] I am trying to post once a week there. I'm also writing once a week on a Substack, which we can put the link to in the show notes, to try to synthesize some of this stuff. So yeah, if you want to reach me, it's LinkedIn, and we'll put the Substack in the notes. I also have a GitHub repo that I'm kind of tracking all of this in. I think this morning I looked and I have 80 issues in the repo now, where I'm like, I read something, analyze this paper. So I'm trying

[01:23:37] to somehow drink from the fire hose of this thing. So it's kind of cool because you can kind of go back and see all the things I'm doing. I'm also publishing playbooks there, these bite-size, actionable pieces of here's what you can do; the dev containers one is one of those. I'm also doing some longer-form thought process, and I've got some prompts in there. So I'm trying to refine my AI process.

[01:24:01] So it's kind of a hodgepodge of what I'm doing that may not make sense to anybody but myself, but you just never know. Maybe there's room for an LLM wrapper for someone coming in from the outside to find stuff on the inside. We'll definitely have links to all of that. This was super fascinating for me. It validated and invalidated a bunch of my hunches and gave me a whole bunch

[01:24:28] of new ones to go chase after. Thanks so much for making the time for us here, Mike. Yeah. We'll hopefully have you on soon as you learn more, and maybe you can host me down in Chattanooga and I can bring my bike down there. Yeah, there you go. And to everybody out there: definitely challenge this. None of this is set in stone, and I think that's an important thing; we should not assume the future is preordained. But thanks for having me. This has been super fun.

[01:24:54] And we could probably talk forever on this stuff, but it's changing so quickly. It might change tomorrow, so we just never know. Hey folks. I hope you enjoyed that episode with Mike Gehard as much as I enjoyed recording the chat with him. Something that he mentioned in there that I agree with a thousand percent is that there's going to be a collapse of the balanced team into smaller units.

[01:25:24] For context, when Mike and I were talking about a balanced team, it's a term that we used at Pivotal a lot. And a balanced team is essentially a small team, definitely fewer than 10 people, that has all the skills needed to ship software from start to finish. So this small team is not slowed down by coordinating with other teams or distracted by competing priorities, and they can apply hyper focus and ship software and iterate on that software super quickly. This kind of team usually comprises a

[01:25:54] product designer, a product manager, engineers, maybe some data scientists and so on. And at Pivotal, where Mike and I worked together, as well as at Integral, the company that I founded, we had some really strong playbooks on who best fit these roles and how these roles came together to advocate for the customer that we're serving and the problem we're solving for them. The business that's making the investment into this product and the priorities for that business, as well as the

[01:26:23] technology and the data components of this swirling equation. Now today with AI assistants being able to take on some of the roles played by many of these folks, I agree with Mike that a fully balanced team is likely going to be a luxury in the upcoming future. And we're going to have to change. For us engineers, it's important that we look up the value chain and use AI to do some of the work of the

[01:26:49] other roles around us. For the product and business people, I think it's about using AI coding tools to develop a deeper understanding of the technology that we're building, and even making early versions of the product yourself before that heavy investment in engineers is warranted. I think a lot of money is going to be raised from VCs, as well as internally, using prototypes with working software built by AI

[01:27:14] in the upcoming months, if not already. Now things are uncertain for sure in terms of what our jobs look like in an AI world. And in order to traverse that uncertainty and de-risk at least the next few cycles of your career, I've got some thoughts for both my engineer friends as well as my product and business friends on what you can do to help yourselves get closer to the value creation.

[01:27:40] For my engineers in the audience, you should know that I grew up writing code and it was when I worked at a startup that I transitioned into product and then pre-sales, sales, and then ultimately channel development. And I got a lot of these skills before I worked at Pivotal, before I started my own company. And here's some tips that I would go back and tell myself to do or to double down on things that I was doing if I got to do it over. And the first one, I hate to say it because it does sound

[01:28:10] pretty cliche. It is to be curious. This sounds really generic, so here's what I specifically mean. Ask questions about the value the product that you're working on is bringing to the customer, to the business, and to the entire ecosystem. What kind of inputs and decisions went into prioritizing the product roadmap as it stands? And why are you working on what you're working on today?

[01:28:36] Ask about who the customer is. What do we know about them? What did we learn about them in the early days of discovery? And if you already have an app in production, what are the analytics telling you about the customer? The next customer question to ask is around the problem that you're solving for the customer. If you were to meet one of these customers, how might they describe the problem? If you were to talk to them, why is this such a big problem for them? Amongst this problem,

[01:29:03] where are the migraines and what are the headaches? How did they do things before they had a product to help them? Today, how much of this problem is solved by technology, and how much of it still continues to be old school or manual, and why? The next thing is thinking about the company you work for. Understand why your company is building this. How is it helping your business, and what does it mean in the next three years for your company? Why is it worthwhile for your company to invest in what

[01:29:33] you're working on, and what you're working on this week? Now, be mindful: depending on the organization you're working at, these questions may be met with some defensiveness. And as much as I hope this isn't the case for you, I also want to acknowledge that I have worked with clients where, at Integral, we've had to educate some of the folks on the value of this open and collaborative culture

[01:29:59] and the value of high-bandwidth communication with the entire product team. Before that, there was maybe a shut-up-and-dribble equivalent from the National Basketball Association, where business folks would say things like, hey, stop asking these business questions and build me a database that does the thing. I'll handle the business, I'll handle the customer. I don't know why my business person has a Southern accent, but he does. Speaking of which: as an engineer, if you're looking to earn trust with these

[01:30:28] business folks and chip away from that mindset to one where they're more open to sharing and collaborating, here are some things that were super helpful for me, and that I've seen other engineers do, that have really helped them build trust with the business. One of the things is, ultimately they're looking to close deals and make sales. So help them: go on sales calls with them, help them prep for them. As you learn more, do some research, go back and ask them thoughtful follow-up questions,

[01:30:54] maybe some ideas. You will need to be really kind with these ideas and non-confrontational so they're not defensive. Maybe think of ways in which they can communicate with the customer better or showcase the value that they articulate better. And I think what this will do is, number one, it will show them that you're someone who's wanting to help them. It'll also hopefully make their jobs easier in the short term and the long term. And most importantly, I think for you, it'll give you some practice.

[01:31:24] It'll give you some practice of thinking more like the business. Another business-related topic is understanding the business model or unit economics for your product. How does the profit come in? Our Integral team would oftentimes get pulled into helping answer these questions about sizing the problem, or build versus buy when it came to implementation. Is it better to license off-the-shelf SaaS, or does this warrant building something custom? We've been asked to compare

[01:31:53] cloud providers and on-premise solutions, balancing the business goals and vision with costs and other risks posed by these decisions. And I think a lot of those questions are going to get amplified and swirl around a lot more as AI is incorporated into our products. The cost of inference in AI is becoming a huge factor in building viable businesses in the space

[01:32:17] that are profitable. And I think you can be really helpful by helping your company choose maybe things like AI models by understanding the different paradigms there. Where is it best to be hosted? And you can really augment this conversation by not just thinking about the best technical solution, but even better, the best overall solution for the business goals and where things currently sit in

[01:32:42] the roadmap and how easily we can change that in the future if we need to. This is already a super hard conversation today. And it's an area that you can be super helpful with your technical depth if you're willing to add the context about your customers and the business at your company into the problem solving and ideation that you bring to the table. Now let's switch over to my product and business folks.

[01:33:07] Again, I tend to agree with Mike's prediction here: it's going to be really important that y'all get closer to the code and the technology. Some of y'all, I'll concede, are born business folks. You're great at relationship building, really big relationships; you're stone-cold assassins at marketing, sales, business development. You might be okay, at least in the near future, because that's a skill that doesn't

[01:33:34] seem as obvious to me how AI is going to replace, at least compared to assisting you with writing code. That being said, that's a severe minority of us, and for the rest of us, being able to build technology using AI can be a tremendous unlock. You can have working software today instead of clickable prototypes in InVision or something, or God forbid, a PowerPoint deck, when you're trying

[01:34:00] to articulate the value of an idea. Understanding things like information architecture is going to be paramount. It already is, and way more so when you're building AI applications, as you solve worthwhile problems for your customers, make business decisions around things like AI models and data sources, and ultimately help your team prioritize your product roadmap.

[01:34:24] While a lot of the best product folks that I've worked with have come from a technical background, there are certainly exceptions, obviously. And here are some tips from observing those really great non-technical PMs. Now, of course, similar to the advice that I gave to the engineers earlier,

[01:34:46] I will begrudgingly start with the cliche of be curious. And I think this is a route that has a lot of tentacles, as you'll notice. Being friendly and curious with the engineers around you and building relationships there is a good place to start. When I worked at a startup, Shop Logics, as an engineer, I still remember today the business and salespeople who would slow down to explain things

[01:35:13] to me. These ultimately became people that I became friends with, went out and got drinks with, and had a lot more patience for. And I tended to favor them when it came to the help I was providing or the ideas I was putting forward. So when the engineers on your team are doing estimations for things like time and cost, ask to join in as a listener. Maybe bring donuts to the meeting, or whatever, and just listen. If you're going to ask questions, ask really non-judgmental, curious questions

[01:35:41] to understand the risks and challenges that your engineering team goes through when converting your business ideas into a product backlog to ship software. You'll probably earn some trust with the donuts, and more importantly, I think, by helping them short-circuit some of the assumptions about what this is going to look like in the future for the business and for the customers.

[01:36:05] And trust me when I say that helps them tremendously with future-proofing their architecture and minimizing their technical debt, if the uncertainty in the architecture can marry the uncertainty in the business model. To get familiar with information architecture, something I would often do with a non-technical person looking to get into product, even during an interview, is build a spreadsheet together with them. I'd import multiple data sources, and we'd work together to apply logic on that,

[01:36:35] to ultimately present the insights using a graph in Google Sheets or Excel or maybe a table. And what I really liked about this was that it created a metaphor for some really key aspects of technical architecture. Where do we get the data from? How usable is this data that's coming in? What logic do we need to apply to this data to get the outcome? And how easy is it to apply that logic?

[01:37:01] How much harder is that when the data isn't structured or isn't as predictable? How do we make output reports (the UI, in this metaphor) that are most useful and super intuitive for your audience? Which of our steps in building this spreadsheet make it easier to change things in the future, and what things did we do that take a lot of work to go back and change?
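The spreadsheet exercise above maps directly onto code, too. Here's a minimal Python sketch of the same idea, where the two "data sources" (a sales sheet and a price sheet) and the "logic" (joining them and totaling revenue per region) are hypothetical, invented just to illustrate the import-data, apply-logic, output-report pipeline:

```python
import csv
import io

# Two "data sources", standing in for the imported sheets.
sales_csv = """region,month,units
North,Jan,120
South,Jan,80
North,Feb,95
South,Feb,140
"""
prices_csv = """region,unit_price
North,10
South,12
"""

sales = list(csv.DictReader(io.StringIO(sales_csv)))
price = {row["region"]: float(row["unit_price"])
         for row in csv.DictReader(io.StringIO(prices_csv))}

# The "logic" layer: join the two sources and total revenue per region.
revenue = {}
for row in sales:
    region = row["region"]
    revenue[region] = revenue.get(region, 0) + int(row["units"]) * price[region]

# The "output report": a small table, like the chart tab in Sheets.
for region, total in sorted(revenue.items()):
    print(f"{region:<6} {total:>8.2f}")
```

The same questions from the spreadsheet version apply here: what happens when a region in the sales data is missing from the price sheet, or when `units` arrives as free text instead of a number? That's exactly the structured-versus-messy-data conversation from the paragraph above.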

[01:37:26] Sometimes we do some pretty complex logic, and today, ChatGPT makes it so much easier to spit out formulas that even I, a self-proclaimed Excel nerd, wouldn't have necessarily thought of. And very similarly, it's way easier for you as a non-technical person to ship code than it's ever been before. Go on Claude.ai or ChatGPT and ask it to write an application for you. It's going to spit out source code,

[01:37:54] and sure, you don't know what to do with that. Guess what? Ask ChatGPT. It will tell you how to compile this code, how to build it, how to deploy it. And you can make a pretty basic but useful web application that's hosted on your own computer in less than two hours. Maybe start with something fun and simple, like a calculator app that helps you decide whether you should book a flight using points

[01:38:20] or cash, and go from there. Now, I know this is super hard and probably feels scary, but I promise you that the risk is super low. When I went from all the business work that I did back into wanting to be more technical, I had the benefit of one of our engineers, Joe Colburn at Integral, pairing with me and showing me carefully how to do this. And it wasn't as scary as I thought,

[01:38:46] and it often never is. But if you're looking for a similar experience, I highly recommend going on a website like upwork.com where you can hire freelance engineers and find someone who's willing to pair program with you on building that first application. And if you want to be really efficient, do your homework before. Wrestle with the AI and try really hard to do it yourself, maybe do it with a

[01:39:11] friend, before calling in for that help. And I bet it would be the best $250 to $300 that you could invest in your career today. The last thing that I'll recommend for everyone listening is to watch a video that Andrej Karpathy recently put out on his YouTube channel. The video is called Deep Dive into LLMs like ChatGPT. It is a barely technical video; I think he has done really well to explain LLMs,

[01:39:40] or generative AI, to the layperson. We'll have a link in the show notes to that too, along with all the other goodies that our guest Mike Gehard gave us. Hopefully this helps you folks. And if you've got follow-up questions to this, hit us up on socials; our team looks at that pretty often. Or head to our site, convergence.fm, and use the contact page to ask a question, and we will be sure to address it

[01:40:05] in a future episode. Until then folks, thank you so much for listening. I hope you found this helpful and entertaining. We will see you next week with another episode about enabling engaged teams who ship delightful products. We'll see you then. Thank you for joining me on the Convergence podcast today.

[01:40:32] Subscribe to the Convergence podcast on Apple Podcasts, Spotify, YouTube, or wherever you get your content. If you're listening and found this helpful, please give us a five-star review. And if you're watching on YouTube, hit that like button and tell me what you think about what you heard today.

software engineering,tech innovation,future of tech,agile software,product management,software quality,AI development,lean startup,pair programming,AI research,prototyping,cloud native,test-driven development,software architecture,continuous improvement,