Derek Ferguson from The Fitch Group returns to share how his team of 600+ developers leverages generative AI tools like Amazon's CodeWhisperer and implements DORA metrics to boost productivity and team health. In this second part of the conversation, he delves into the transformative impact of these tools and the innovative strategies driving adoption and success at scale.
Listen to Derek's experiences in introducing cutting-edge tools to a large organization, his lessons in fostering experimentation, and the surprising parallels between today's AI adoption and the internet boom. From the role of community practices versus centers of excellence to pragmatic advice on technology adoption, this episode is packed with actionable insights for leaders and developers alike. Stick around for Derek's perspective on the evolving role of technologists in an AI-driven world and how music creation intersects with his tech expertise.
Inside the episode…
• Exploring generative AI for software development and its transformative potential.
• Implementing DORA metrics to boost productivity and enhance team alignment.
• Lessons learned from scaling technology practices across large organizations.
• The balance between prescriptive guidance and fostering creativity in teams.
• Insights into creating impactful developer communities of practice.
Mentioned in this episode
• Generative AI tools (e.g., Amazon's CodeWhisperer)
• DORA metrics (DevOps Research and Assessment)
• Tools for music and tech crossover (e.g., RipX, Replicate)
Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.
Subscribe to the Convergence podcast wherever you get podcasts including video episodes to get updated on the other crucial conversations that we'll post on YouTube at youtube.com/@convergencefmpodcast
Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow.
Follow the Pod
Linkedin: https://www.linkedin.com/company/convergence-podcast/
X: https://twitter.com/podconvergence
Instagram: @podconvergence
[00:00:00] Welcome to the Convergence Podcast. I'm your host, Ashok Sivanand.
[00:00:07] There is so much demand for technologists, for good technologists, who are able to understand a requirement that might not be perfect.
[00:00:19] On this show, we'll deconstruct the best practices, principles, and the underlying philosophies behind the most engaged product teams who ship the most successful products.
[00:00:36] This is what teams are made of. Welcome back to the Convergence Podcast, folks.
[00:00:41] Today, our guest, Derek Ferguson, the Chief Software Engineer from Fitch Group, joins us again for part two of our chat.
[00:00:50] Now, we come across great ideas, new systems, methods that maybe we've had success with that we want to drive across our team.
[00:01:00] A major challenge for companies as big as Fitch that have hundreds or thousands of developers is driving the adoption of these new systems and methods, and especially letting go of the old.
[00:01:14] On today's episode, Derek talks to us about how he fostered the adoption of two very relevant topics for product engineering teams today.
[00:01:23] The first is the use of generative AI to augment your team's software development.
[00:01:29] And the second is the use of the DORA set of metrics to measure the health of your product teams.
[00:01:35] On the episode, Derek shares the efficiencies that Fitch is seeing across their team from using AI-powered coding assistants like Q Developer from Amazon Web Services, which is very similar to GitHub's Copilot, I would say.
[00:01:50] And we also talk about why they chose the DORA set of metrics to measure the team's health and the gains that they're seeing from it.
[00:01:58] More importantly, though, for myself, I really love Derek's stories on how they went about fostering the experimentation and ultimately the adoption of these two paradigms across the team of over 600 software developers.
[00:02:13] In part one last week, we got to hear from Derek about a disciplined approach to creativity, as well as ways in which technology and product leadership teams can foster deep, trusted relationships with their business counterparts in order to ultimately collaborate better and ship better products faster for your customers.
[00:02:34] So make sure to check that episode out in the archives if you haven't had a chance to listen to it.
[00:02:41] At the end of today's episode, I will also share some of our past experience from Integral on the disciplines that your software team can adopt in order to gain better control and improvement on your DORA metrics, if that's something that you're looking to do.
[00:02:57] So here is Derek in part two of our conversation.
[00:03:02] Let's tune in.
[00:03:03] Subscribe to the podcast to get future episodes as soon as they're published.
[00:03:07] If you find this helpful, give the podcast a five star rating on your podcast app or hit that like button on YouTube.
[00:03:14] Something that I think you mentioned, having written many books in your history, going way back to that adoption wave, moving away from dial-up internet to DSL and being part of the Microsoft venture to make the internet available to the masses.
[00:03:38] I'm curious if you feel like there are any parallels you see now with generative AI, with Microsoft coincidentally in the hot seat of wanting to bring this technology and capability to the masses, and any parallels that you can draw having seen this happen before.
[00:03:55] I think one parallel is, although everyone feared missing out, in the long run it didn't turn out to be that problematic for most organizations to be left out, because a whole ecosystem sprang into existence around what was initially a very high bar that required a certain level of technology knowledge in order to get online.
[00:04:24] A complete ecosystem got built out beneath that, from America Online, for folks who remember back that far, for folks who had no technology knowledge, all the way up to very advanced, close-to-the-metal internet for folks who wanted to do something more custom.
[00:04:45] I see the same thing happening now. You're seeing brands getting built that specialize in a very opinionated way of doing things that's very easy, while at the same time you have more customization available.
[00:05:04] I'm also fascinated by the other parallel I see: so many organizations are trying to centralize their AI smarts and their expertise and their subject matter gurus at this point. It reminds me very much of what happened with the internet early on.
[00:05:21] I mean, early on, being a cutting-edge organization meant that you spun up an internet department, but less than two years after that, if you still had an internet department, you were in the dark ages, right?
[00:05:35] Because it went from the sort of thing where "this is so important, we need to have some experts" to "this is so important, we can't rely on just a small set of experts; everybody needs to know it and use it on a daily basis."
[00:05:49] I think that's the parallel I see with generative AI, no doubt.
[00:05:54] I think we're in those early days, what Gartner may call the hype cycle, and there's potentially overinvestment, or people trying to make efficient something that they haven't learned enough about and made effective in the first place.
[00:06:11] Centers of excellence, I think, are a great example of something where it's hard to know how someone might have any level of excellence so early, without having really strong business wins.
[00:06:22] That all being said, I think one area where we've certainly experimented a lot and seen a ton of wins is around enabling software development teams to write better code faster, or test their code better, using generative AI.
[00:06:37] And so tell us a little bit about your team at Fitch and how y'all are using generative AI.
[00:06:43] Yeah.
[00:06:44] So on the software development side, I started the year with a mandate that by the end of the year, all of our software developers, about 600, would be empowered and using generative AI tools.
[00:07:00] We weren't exactly sure what to expect from it in terms of productivity boosts, but here's what we did as a kickoff.
[00:07:16] We took four or five of our recent university graduates, and at our big annual senior leadership meeting at the end of last year, we brought these fresh-out-of-university employees up on stage, sat them at a table, and got from the audience a mandate for an application that they'd like to see built.
[00:07:49] And myself and my peers and our boss, Fitch's CIO, did our 15-minute-apiece presentations, leading up to an hour.
[00:07:59] And by the end of the hour, we were able to say, developers, have you finished the application? And they had built it.
[00:08:11] It's a little bit of circus, which never hurts, right? Showmanship is an important thing in business, as anywhere.
[00:08:19] It was a real danger too, by the way; we had no backup plan.
[00:08:23] The backup plan, if it hadn't worked, would probably have been me running off the stage crying.
[00:08:31] Burn the ships, I think they call that.
[00:08:33] Burn the ships.
[00:08:34] Exactly.
[00:08:35] Yeah.
[00:08:35] Yeah.
[00:08:35] If you've got no alternative, you're going to make it.
[00:08:38] I guess we would have said that we failed fast, right?
[00:08:41] Yeah. This is a fail-fast culture, and we just failed.
[00:08:45] And so they built this application, which, just in that one hour, did stuff that would have taken a squad probably a week prior to that.
[00:08:54] So that set our expectations fairly high.
[00:08:59] And we have been following the process of putting our AI code-generation tools of choice in front of all of our developers.
[00:09:09] This year, I sort of perceive as leading the horse to water.
[00:09:13] It's been a very prescriptive, supportive approach to say, here are the tools.
[00:09:19] Next year, I want to switch footing.
[00:09:22] And this is perhaps a piece of advice I would give.
[00:09:25] At some point in the inflection cycle, you have to switch from being prescriptive about the approach to, and this is true of all management, being prescriptive about the desired results.
[00:09:40] From what we have seen, and from what the industry also reports, the productivity boosts associated with these tools can be anywhere from 20 to 50%, depending upon the nature of the developer.
[00:09:53] Um, we see more productivity gains for the newer, less experienced developers.
[00:10:00] And it sort of tails off a little bit more as you get more towards experienced developers.
[00:10:05] But it never shrinks to zero, even with the most experienced of developers.
[00:10:13] And as a result, we're going to set our productivity targets in the double digits for next year, just based on the emergence of these tools.
[00:10:24] But it is a change in approach, from saying, here are the tools, please use them, to saying, here's the expectation around productivity boost. Achieve that through whatever means you think is best, but we're saying we think these AI coding tools are the way to go.
[00:10:44] I like that a lot. I think another adage that we hear more than we implement is to try, as leaders, to be more descriptive around where we want to get to, versus prescriptive around how we're going to get there.
[00:10:58] And in this case, I also like that.
[00:11:00] Not only are you saying, Hey, this is the level of efficiency improvement we're looking for through generative AI.
[00:11:07] You're saying, here's also an example of the team that's done it.
[00:11:10] So if you're having trouble getting started, then use them.
[00:11:12] Um, but I imagine it also enables a lot of creativity in your team and allows them to also get to know the tool in their own way versus limiting it to how that initial team got to know it.
[00:11:23] And I have a hunch that maybe you've been surprised with teams that come back with more efficiencies or better outcomes than you described in the first place because you're taking multiple approaches.
[00:11:34] And I'm wondering if you have any examples of that or what you think of that hunch.
[00:11:39] When I was starting to look for the thing I wanted to do next, after the big bank I worked at previously, one of the things, and this was five years ago, that really caught my interest and imagination about Fitch was that Fitch was already completely on the cloud.
[00:12:02] And in many cases, it wasn't a lift and shift.
[00:12:05] It was, Hey, wait, this is cloud native stuff.
[00:12:09] Wow.
[00:12:10] Coming from the big bank I was at, that was like this cutting edge stuff.
[00:12:15] The piece that is still a core focus for us is around that whole DevOps cycle and trying to figure out how to push what are known in the industry as the DORA metrics.
[00:12:34] I'm sure many of your listeners are familiar with them, but for those who aren't.
[00:12:38] Maybe break it down for the few who may not have heard about the DORA metrics yet.
[00:12:42] And this will be my pseudo-layman's description of it, but you had Dr. Nicole Forsgren, who a number of years ago, at Google, I believe it was, did a survey of thousands,
[00:12:59] and now it's tens of thousands, of software engineering groups at different firms, to find out how successful they were and what the different aspects of the way in which they build software are.
[00:13:14] And this is all documented in a book called Accelerate, which, when I joined Fitch, I actually bought a copy of for every single developer on my team.
[00:13:26] That's how important I thought it was, because what she and her team proved was that if you chase four metrics,
[00:13:36] and if you are among the top performers in those four metrics, you will measurably have happier customers, better commercial outcomes, you name it.
[00:13:52] Goodness seems to proceed from these four causative measures.
[00:13:55] And it's things like, how frequently do you release?
[00:14:00] How long does it take the customer to get what they ask for?
[00:14:04] How often do you have major incidents?
[00:14:08] And when you have some sort of an incident, how long does it take you to correct it?
[00:14:13] So we've been super focused on pushing those four things of late.
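The four DORA metrics Derek lists, how often you release, how long customers wait for what they asked for, how often changes cause incidents, and how quickly you recover, can be sketched as a small calculation over release records. The field names and data below are purely illustrative, not any particular tool's schema:

```python
from datetime import datetime, timedelta

# Hypothetical deployment and incident records, invented for illustration.
deployments = [
    {"shipped": datetime(2024, 6, 3), "requested": datetime(2024, 5, 30)},
    {"shipped": datetime(2024, 6, 4), "requested": datetime(2024, 6, 3)},
    {"shipped": datetime(2024, 6, 5), "requested": datetime(2024, 6, 4),
     "caused_incident": True},
]
incidents = [
    {"opened": datetime(2024, 6, 5, 9), "resolved": datetime(2024, 6, 5, 10)},
]
days_observed = 7

# 1. Deployment frequency: how often you release.
deploy_frequency = len(deployments) / days_observed

# 2. Lead time for changes: how long the customer waits for what they asked for.
lead_times = [d["shipped"] - d["requested"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change failure rate: how often a change causes an incident.
change_failure_rate = (
    sum(d.get("caused_incident", False) for d in deployments) / len(deployments)
)

# 4. Mean time to restore: how long it takes to correct an incident.
mttr = sum((i["resolved"] - i["opened"] for i in incidents), timedelta()) / len(incidents)
```

As the conversation notes later, the four numbers balance each other: pushing release frequency up without discipline tends to show up immediately in the failure-rate and recovery-time figures.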
[00:14:20] And in answer to the question that you asked about, you know, have I been surprised by anything?
[00:14:25] The groups that have gotten big productivity boosts using things other than, or in addition to, generative AI tools
[00:14:35] have been the groups that had gotten their stuff onto the cloud, but more as a lift and shift.
[00:14:43] As a result, they weren't getting the real bang for the buck that we often think about with cloud deliveries,
[00:14:51] and they have been able to go back and revisit what they built and realize, wow, you know,
[00:14:57] if we made this, that, and the other change, there's no reason why we couldn't be doing 20 releases a day.
[00:15:03] Going back to our earlier conversation about iterative features, you know, this whole idea that,
[00:15:07] hey, the customer asked us to move a button.
[00:15:09] We're not going to take that button move and put it into our next release a month from now.
[00:15:15] We're going to do that right now.
[00:15:17] Those sorts of things, what's known in the industry as continuous delivery, seem to be the sweet spot.
[00:15:27] If I had to name one thing other than generative AI coding,
[00:15:31] it's getting that whole DevOps acumen around things like trunk-based development
[00:15:37] and feature flagging and all that sort of stuff, infrastructure as code,
[00:15:42] really empowering the development teams such that as soon as they finish a feature,
[00:15:47] they can immediately put it out there in production and do so safely.
[00:15:52] That's the other thing I would really flag up as a big area for productivity improvement.
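The trunk-based development and feature-flagging practice described here can be sketched roughly like this. The flag name and in-memory store are invented for illustration; real teams typically use a flag service rather than a dictionary:

```python
# Minimal feature-flag sketch: unfinished work merges to trunk and deploys,
# but stays dark until the flag flips. Illustrative only.
FLAGS = {"new_button_layout": False}

def is_enabled(flag: str) -> bool:
    # Look up the flag's current state; unknown flags default to off.
    return FLAGS.get(flag, False)

def render_toolbar() -> str:
    # Both code paths ship to production; the flag picks which one runs,
    # so "rolling back" the feature is a config change, not a redeploy.
    if is_enabled("new_button_layout"):
        return "toolbar-v2"
    return "toolbar-v1"
```

Because rollback becomes a flag flip rather than a redeploy, shipping a feature to production the moment it's finished, as Derek describes, stops being risky.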
[00:16:01] Fostering an engaged product organization and aligning them with the principles around lean,
[00:16:07] human-centered design, and agile will more than likely lead to successful business outcomes
[00:16:12] for your organization.
[00:16:14] But getting started or getting unblocked can be hard.
[00:16:17] This podcast is brought to you by the player coaches over at Integral.
[00:16:21] They help ambitious companies like you build amazing product teams
[00:16:26] and ship products in artificial intelligence, cloud, web, and mobile.
[00:16:32] Listeners to the podcast can head on over to integral.io slash convergence
[00:16:37] and get a free Product Success Lab workshop.
[00:16:41] During this session, the Integral team will facilitate a problem-solving exercise
[00:16:46] that gives you clarity and confidence to solve a product design or engineering problem.
[00:16:52] That's integral.io slash convergence.
[00:16:56] Now, back to the show.
[00:17:02] I know a few teams that have implemented DORA.
[00:17:06] My team at Integral, we used to help do that with different customers.
[00:17:10] And we've got to experience varying levels of adoption, overcomplications, oversimplification.
[00:17:17] And I do think this is one of those rare cases where you get to hear that DaVinci quote
[00:17:23] of simplicity is the ultimate sophistication.
[00:17:26] And to your point, there was a lot of work and a lot of different candidates of metrics
[00:17:32] that ultimately got prioritized to these four that had the biggest impact.
[00:17:37] And it's also a very holistic approach where you can't overclock on one of them
[00:17:41] because they will usually end up showing in one of the other three
[00:17:45] unless you're doing things in a truly sustainable best practices way
[00:17:49] versus taking sort of short-term Band-Aids.
[00:17:52] So there's certainly something very beautiful around how the metrics have been organized
[00:17:57] and how they sort of have an implicit self-balancing nature to them if they're implemented correctly.
[00:18:02] I'm curious for the executives out there that are looking to implement DORA and get their teams on board,
[00:18:11] other than buying the book and making it a high priority,
[00:18:16] are there any gotchas or lessons learned or stories you can remember
[00:18:20] that can help folks avoid burning their fingers on what you burned yours on?
[00:18:26] You cannot possibly spend too much time explaining why on any of this stuff.
[00:18:34] I mean, besides the fact that I just think it's good and humane and ethical
[00:18:40] for folks to understand the why behind the stuff that they're doing.
[00:18:44] And we always ask the business, you know, could you tell us why we're building this?
[00:18:48] What's, you know, how's this going to benefit the customers?
[00:18:50] It's the same way with this purely technical stuff like the DORA metrics.
[00:18:56] And I remember some of the feedback early on,
[00:19:00] and I was responsible, I should say, for implementing the DORA metrics
[00:19:05] at my previous employer before Fitch also.
[00:19:08] So I'd been through this in two organizations.
[00:19:11] And the same thing came up is the idea phrased as a challenge,
[00:19:17] which is, well, I could just game these metrics, you know.
[00:19:21] You want to see us releasing code more frequently.
[00:19:24] Well, maybe I'll just release code every time I write three lines of code.
[00:19:28] You know, I'll just do a release for that.
[00:19:31] And the funny thing is, and what sort of needs to be explained is,
[00:19:36] that's not gaming the system.
[00:19:38] If you're releasing everything and it's properly able to be rolled back
[00:19:44] and it has good unit tests and you're actually able to get your code to production,
[00:19:48] in the right setting, there's nothing gaming about that.
[00:19:52] That's actually the desired outcome.
[00:19:54] Because I want everything that goes to production to be so small and focused and gradual
[00:20:01] that at any given point in time, if the customer looks at it and says,
[00:20:05] whoa, I don't like that, rather than us having put a big chunk out there every weekend or something
[00:20:12] and now we have to figure out how do we take this piece out and leave the other pieces there.
[00:20:16] If we're constantly putting stuff out there and we have an issue here,
[00:20:22] then we can just take that piece out.
[00:20:24] That's not gaming the system.
[00:20:25] That's actually the desired outcome.
[00:20:29] I would also say, spend some time talking about the metrics around production issues
[00:20:39] and the amount of time that it takes to resolve.
[00:20:43] And I've even fallen into the trap a little bit myself by saying production issues.
[00:20:47] The specific DORA metrics are around issues.
[00:20:52] How often do the changes that you make cause issues and how long does it take you to undo them?
[00:20:59] In our case, we made the decision, starting out at least,
[00:21:03] that we're going to track major incidents that are caused by code changes.
[00:21:11] So, we extract all of the issues that are like, you know, the phone system went dead
[00:21:16] or we lost power or anything like that.
[00:21:18] You know, all the stuff that's not under our control, we've made the decision to pull that out.
[00:21:24] Anything that's not a major incident, like things that our customers don't recognize,
[00:21:28] we don't count that either.
[00:21:30] Is that right?
[00:21:33] For us, it's right right now.
[00:21:36] Maybe it won't be right for us later on.
[00:21:38] Maybe it's not right for other organizations right now,
[00:21:40] but I would have that conversation and at least understand what you're going to do
[00:21:46] and the pros and cons of the approach that you choose
[00:21:49] because those two metrics, I think, are a little bit more nebulous than the other ones.
[00:21:54] And it warrants some reflection around exactly how you're going to define that.
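The scoping decision Derek describes, counting only major incidents that were actually caused by a code change, might look something like the following. The fields and numbers are hypothetical, not Fitch's actual incident schema:

```python
# Illustrative incident records. "caused_by_change" and "major" are
# invented fields standing in for whatever your incident tracker records.
incidents = [
    {"desc": "phone system went dead", "caused_by_change": False, "major": True},
    {"desc": "bad deploy broke login", "caused_by_change": True, "major": True},
    {"desc": "cosmetic glitch, unnoticed by customers",
     "caused_by_change": True, "major": False},
]

def counted(incident: dict) -> bool:
    # Exclude anything outside the team's control (power, phones),
    # and anything customers never recognized as a problem.
    return incident["caused_by_change"] and incident["major"]

counted_incidents = [i for i in incidents if counted(i)]

# Change failure rate over a window of deployments.
deploys_in_window = 50
change_failure_rate = len(counted_incidents) / deploys_in_window
```

As Derek says, the right filter is a policy choice: the pros and cons of what you exclude deserve an explicit conversation before the metric goes on a dashboard.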
[00:22:00] I love that a lot.
[00:22:01] And I'm going to ask a similar question, zooming out, about rolling out generative AI
[00:22:06] of what are some cautionary tales or things that you may do differently the next time around
[00:22:15] or for an executive as they're encouraging their team to get the benefits of generative AI in their product development.
[00:22:21] A community of practice will be tremendously helpful to an organization when they embark on this effort.
[00:22:35] You want to have natural champions and you want to have a forum where people can share their successes
[00:22:44] and also where they can share the things that haven't been successful for them.
[00:22:49] In our case, we work with a vendor.
[00:22:57] I think I can say the name.
[00:22:59] It's Amazon AWS.
[00:23:01] We've chosen Q Developer.
[00:23:04] We have a great relationship with Amazon where we are able to tell them issues that we're having
[00:23:12] and get that sort of support to say,
[00:23:17] well, have you tried doing it this way?
[00:23:19] Have you tried doing it that way?
[00:23:21] And even in some cases, hey, this would be a super useful feature for us
[00:23:26] and get that into the backlog to see it incorporated in the products.
[00:23:31] I think in general, when you evaluate products,
[00:23:36] but particularly in the generative AI space now because it's so rapidly developing,
[00:23:43] don't get hung up comparing the feature check marks between the different products exclusively
[00:23:53] and choosing the one that currently has the most check marks
[00:23:57] because the space is so rapidly developing that, you know,
[00:24:02] we've seen it with the models, right?
[00:24:05] It's like, oh, hey, GPT, OpenAI, they have the best model.
[00:24:09] Oh, no, wait, Claude just leapfrogged them.
[00:24:12] Oh, wait, you know, this week it's back to OpenAI.
[00:24:15] Now it's Llama.
[00:24:18] Every week somebody else is on top.
[00:24:20] So what you really want to do in choosing these AI tools is work with a partner
[00:24:29] or choose a consumer-facing product that you think has the potential to be stable
[00:24:38] and to provide good support in the long term
[00:24:40] because whatever, if you just choose based on the current feature set,
[00:24:45] you are guaranteed to have buyer's remorse
[00:24:48] probably the week after you make that choice
[00:24:51] because things are evolving so quickly.
[00:24:54] But then a few weeks later, you might be glad again that you got that.
[00:24:57] And then, you know, two months later, you might be sad
[00:25:00] because the competition is evolving so quickly here.
[00:25:04] Choose your partnerships more based on the strength of the partner
[00:25:08] and your relationship with them than the current state of affairs
[00:25:12] for any one product, I would say.
[00:25:15] You mentioned a community of practice
[00:25:17] and we talked about centers of excellence earlier.
[00:25:21] They sound similar, but I tend to think they're pretty different.
[00:25:26] And I'd love to hear from your perspective
[00:25:29] how you maybe want to make sure you're fostering a community of practice here
[00:25:34] and maybe it's a little too soon for a center of excellence.
[00:25:38] Yeah, I think both have a definite role.
[00:25:43] Community of practice, to me, some people call it a birds of a feather.
[00:25:48] Some people call it a guild.
[00:25:50] That's probably the more agile parlance for something is a guild.
[00:25:55] But it's really about mutual support; people are coming together because they have a genuine interest in whatever the topic at hand is.
[00:26:08] And they are going to share successes
[00:26:10] so that other people can imitate their successes.
[00:26:13] And they're going to share challenges
[00:26:15] because maybe someone else has encountered the same challenge
[00:26:18] and worked through it.
[00:26:20] And they can help each other and say,
[00:26:22] oh, here's how I solve that on my end.
[00:26:24] On the other hand, a center of excellence I tend to think of as being more a set of SMEs who are tasked with the idea
[00:26:35] of putting forward specific patterns and practices
[00:26:38] and providing more of a sort of second tier of support for things.
[00:26:44] So like a center of excellence in the case of Gen AI coding
[00:26:49] would be the folks who were responsible for the upgrade
[00:26:52] and probably the relationship with the vendor
[00:26:55] that provides us a given set of tools.
[00:26:58] So they'd be the ones who would be saying,
[00:27:02] okay, we're going to switch the model now.
[00:27:05] We're comfortable enough with GPT-4
[00:27:08] that we're going to get off of GPT-3 next week.
[00:27:13] Everyone get ready.
[00:27:14] Here's the guidance in terms of how you would change any of the stuff you've already built on GPT-3, and so on and so forth.
[00:27:20] So I tend to think of centers of excellence as being more top-down, and communities of practice are a little bit more federated.
[00:27:29] Flat, yeah, for sure.
[00:27:31] And peers.
[00:27:34] And I'd never thought of it that way,
[00:27:36] but I'm thinking maybe the community of practice
[00:27:39] tends to have faster-moving innovation
[00:27:44] and maybe is more like playing offense,
[00:27:46] whereas a center of excellence
[00:27:48] makes sure that there's sort of checks and balances
[00:27:51] that, yeah, let's make sure everyone is adopting it
[00:27:54] and the versions are ones that we trust.
[00:27:58] Of course, with generative AI, we have to be fairly careful about where we're putting what, in terms of whether you have a private instance, a public instance, and everything else that goes with that.
[00:28:08] That defensive side is also important in order to grow in a structured way, and it's something I hadn't necessarily thought of in that way, so I'm curious if you tend to agree with that or not.
[00:28:23] Yeah, it's interesting.
[00:28:24] You touched on the idea of generative AI
[00:28:27] and whether you have a private instance
[00:28:29] or a public instance
[00:28:31] and knowing the difference between the two.
[00:28:32] I see a parallel there with the start of the internet also.
[00:28:39] Part of the reason that you had these, you know, internet centers of excellence back in, say, the mid to late 90s, that were passé, say, by the start of the 2000s, is that reverse proxies coming into existence and becoming more widely deployed did a lot of the work that those internet committees did before, right?
[00:29:03] Like initially there was the thought that,
[00:29:06] well, what we'll do is we'll have a central list
[00:29:07] of all of the approved websites.
[00:29:09] You know, we'll have the whitelist or the good list
[00:29:13] that folks will be able to use
[00:29:15] and we'll have the bad list of things
[00:29:16] that are absolutely broken
[00:29:18] or absolutely forbidden.
[00:29:20] And then very quickly,
[00:29:21] there was a technology solution that said,
[00:29:23] well, wait, why don't we just have that as a service
[00:29:26] and you put this here
[00:29:27] and this will make sure that nothing goes out
[00:29:30] that you don't want going out
[00:29:31] and nothing will come in and vice versa.
[00:29:33] I think you're seeing very much
[00:29:35] the same sort of thing happen
[00:29:37] with AI technologies now.
[00:29:38] All of this stuff around building guardrails,
[00:29:44] both for internal users of AI
[00:29:48] and for external users. In the case of Fitch, for example, we sell financial research, amongst other things, through Fitch Solutions.
[00:29:58] And there is the functionality now
[00:30:01] in a couple of our paywall sites
[00:30:02] to go in and ask questions
[00:30:05] and get a response from an agent
[00:30:07] and it'll go and it'll look through
[00:30:08] all of the research and say,
[00:30:10] here's the answer to your question.
[00:30:13] And those guardrails
[00:30:17] that didn't exist as a service
[00:30:19] when we started building that functionality,
[00:30:22] by the time we were done,
[00:30:24] just a few months later,
[00:30:26] had come into existence
[00:30:27] and all the stuff that we were doing
[00:30:29] to say, well, we don't want to support
[00:30:31] our customers asking questions like this
[00:30:33] or this or this.
[00:30:35] That code, sort of, we don't need that anymore.
[00:30:39] Now we just use the guardrail service
[00:30:41] that's provided in the cloud.
[00:30:44] I think that is the way of technologies like this: they just grow, and more and more stuff just becomes table stakes, I guess you'd call it, right?
[00:30:55] Speaking of generative AI,
[00:30:56] going back to your music background,
[00:30:59] you're in a band,
[00:31:00] and there are applications out there
[00:31:05] like Suno AI that act as composers,
[00:31:07] and the jury's a little bit out on that.
[00:31:11] But at the same time,
[00:31:12] maybe we can apply the same analogy
[00:31:16] of how your software engineering team
[00:31:18] is way more efficient as a result of it.
[00:31:21] I'm curious if you've dabbled
[00:31:23] in bringing generative AI in
[00:31:24] when you wear your music
[00:31:27] and band hat.
[00:31:29] I like to use
[00:31:33] ChatGPT
[00:31:35] as a sort of
[00:31:38] very sophisticated thesaurus.
[00:31:40] Like I can ask it,
[00:31:42] tell me
[00:31:42] all of the
[00:31:44] word rhymes
[00:31:45] that are associated
[00:31:46] with the following topic.
[00:31:48] And it can come back with
[00:31:50] a list of rhymes,
[00:31:51] which you couldn't really do
[00:31:52] with a rhyming dictionary, right?
[00:31:53] Because a rhyming dictionary
[00:31:54] can give you rhymes,
[00:31:56] but it doesn't know themes
[00:31:57] or anything like that.
[00:32:00] So to me,
[00:32:02] that's a great tool,
[00:32:03] but it's not
[00:32:05] turning over the creative keys
[00:32:10] to essentially an automaton, right?
[00:32:11] It still gives me
[00:32:13] the flexibility
[00:32:13] as a human being
[00:32:14] to say,
[00:32:15] you've given me
[00:32:15] the raw materials,
[00:32:16] here's how I want to use them.
[00:32:17] I'll tell you one other thing.
[00:32:20] There's a product
[00:32:21] called RipX,
[00:32:23] which
[00:32:24] uses
[00:32:25] an AI algorithm
[00:32:26] to
[00:32:27] take
[00:32:28] recorded songs
[00:32:29] and
[00:32:30] break them back apart
[00:32:31] into their
[00:32:32] constituent pieces.
[00:32:34] So
[00:32:35] if you've got
[00:32:36] keyboards,
[00:32:37] drums,
[00:32:37] guitar,
[00:32:37] bass,
[00:32:38] vocals,
[00:32:39] say,
[00:32:39] you can feed a song
[00:32:41] through RipX
[00:32:42] and it will give you
[00:32:44] just the bass
[00:32:44] or just the guitar
[00:32:45] or just the vocals
[00:32:47] And in my case,
[00:32:49] as you can see from behind me,
[00:32:51] drums is sort of my primary instrument.
[00:32:53] I love being able
[00:32:55] to feed songs into that
[00:32:56] and take the drums out,
[00:32:57] and then, since it's an electronic drum set,
[00:32:59] I put it through my headphones
[00:33:01] and I'm able to play drums along
[00:33:04] with pretty much any song I love.
[00:33:06] I love that experience.
[00:33:08] I've never used it
[00:33:09] to actually create music though.
[00:33:10] That to me is
[00:33:11] just one bridge too far.
[00:33:13] Well, while we're on the topic:
[00:33:15] while you've avoided
[00:33:17] composing music
[00:33:18] or making songs using AI,
[00:33:21] I sense there might be a fear
[00:33:24] around what might happen
[00:33:27] with the industry,
[00:33:29] and I'm curious
[00:33:30] if there's anything
[00:33:33] you would hope
[00:33:34] doesn't come to reality.
[00:33:35] I'm actually super optimistic
[00:33:39] in the area of technology
[00:33:41] and AI.
[00:33:44] I hear from time to time
[00:33:46] from software engineers,
[00:33:48] have I chosen the wrong profession,
[00:33:50] because the computer
[00:33:51] is going to be doing
[00:33:52] all the coding going forward?
[00:33:57] And I am constantly reminded
[00:33:59] of a lunch that I had
[00:34:01] with the man who was
[00:34:05] Microsoft's head of developer tooling
[00:34:07] for many years.
[00:34:08] Soma was his name.
[00:34:10] There is so much demand
[00:34:14] for technologists,
[00:34:16] for good technologists,
[00:34:17] who are able to
[00:34:21] understand a requirement
[00:34:22] that might not be perfect,
[00:34:25] have the clarifying conversations
[00:34:27] that you and I
[00:34:28] have talked about today,
[00:34:30] get it to a point
[00:34:31] where it can actually be built,
[00:34:33] and then built in a good way
[00:34:35] that's secure
[00:34:37] and easily expandable,
[00:34:39] and so on.
[00:34:41] And what I think will happen
[00:34:43] with this profusion of tools is,
[00:34:48] maybe initially you do see
[00:34:51] a little bit of a dip in demand,
[00:34:54] because now you've enabled non-developers
[00:34:58] to come into the fray
[00:34:59] and put together these version 0.1s
[00:35:01] of their applications.
[00:35:03] But if history is any guide,
[00:35:05] and I think it is,
[00:35:07] five years from now, six years from now,
[00:35:10] however long, I don't think it's long term,
[00:35:12] it's more medium term,
[00:35:13] you're going to see an explosion of need
[00:35:17] for software engineers
[00:35:18] unlike anything we've ever seen before,
[00:35:20] because what winds up happening is
[00:35:23] supply drives demand.
[00:35:26] You will
[00:35:26] get a lot
[00:35:27] of folks
[00:35:28] who come in
[00:35:28] and get
[00:35:29] their ideas
[00:35:30] 90%
[00:35:31] done
[00:35:32] but that
[00:35:33] last 10%
[00:35:34] really needs
[00:35:35] somebody who
[00:35:36] has a software
[00:35:37] engineer's
[00:35:38] knowledge and
[00:35:39] discipline
[00:35:40] And then the thing is,
[00:35:42] I think that these ideas
[00:35:43] that are 90% done
[00:35:46] will sadly probably be
[00:35:47] 50% in the wrong direction,
[00:35:49] so there's going to be that rework
[00:35:51] that needs to be done,
[00:35:52] to say, boy, you know,
[00:35:53] if you had brought us in earlier,
[00:35:54] we would have done
[00:35:55] this, that, and the other thing.
[00:36:00] Besides that,
[00:36:04] we're going to have people
[00:36:06] thinking about putting software
[00:36:07] in places software
[00:36:09] has never existed before,
[00:36:10] because there will be
[00:36:13] this profusion of ability.
[00:36:16] It might initially bring down
[00:36:18] the cost of software development,
[00:36:19] so people will say,
[00:36:20] hey, let's put software
[00:36:21] in this kind of device,
[00:36:22] let's put software
[00:36:23] in that kind of device,
[00:36:24] what if we had software
[00:36:25] that did this, this,
[00:36:26] this, and this?
[00:36:27] Then, because you have
[00:36:29] all these places
[00:36:31] that are running software
[00:36:32] that never ran it before,
[00:36:33] that contributes
[00:36:35] to the demand
[00:36:36] for experienced software engineers
[00:36:37] as well.
[00:36:38] So,
[00:36:39] boy,
[00:36:40] I tend to
[00:36:41] be an
[00:36:41] optimist in
[00:36:42] life in
[00:36:42] general but
[00:36:43] in this
[00:36:44] case I
[00:36:44] am insanely
[00:36:46] optimistic.
[00:36:48] Yes,
[00:36:48] it is going
[00:36:49] to drive
[00:36:49] the supply
[00:36:50] of software
[00:36:50] but that
[00:36:51] supply is
[00:36:51] going to
[00:36:51] drive the
[00:36:52] demand like
[00:36:52] I think
[00:36:53] nothing we've
[00:36:53] seen before.
[00:36:54] So,
[00:36:54] if you've
[00:36:55] chosen to
[00:36:55] go into
[00:36:56] software as
[00:36:56] a career,
[00:36:56] you've
[00:36:57] made the
[00:36:57] right choice.
[00:36:57] This is
[00:36:58] the place
[00:36:58] to be.
[00:36:59] I tend
[00:37:00] to agree
[00:37:00] with you
[00:37:01] a lot
[00:37:01] and I
[00:37:01] think
[00:37:02] the advice
[00:37:03] I might
[00:37:03] give that
[00:37:04] engineer
[00:37:04] is also
[00:37:05] to think
[00:37:06] about what
[00:37:06] you said
[00:37:07] earlier on
[00:37:07] around
[00:37:09] understanding
[00:37:09] core problems
[00:37:10] that you're
[00:37:11] solving for
[00:37:11] a customer,
[00:37:12] understanding
[00:37:13] some of
[00:37:13] the business
[00:37:15] model or
[00:37:16] the viability
[00:37:17] aspects
[00:37:17] of it,
[00:37:18] things that
[00:37:19] maybe software
[00:37:19] engineers
[00:37:20] felt like
[00:37:20] we were shielded
[00:37:22] from or
[00:37:22] not interested
[00:37:23] in.
[00:37:23] Those are
[00:37:24] things that
[00:37:24] I think
[00:37:24] we can
[00:37:25] get way
[00:37:26] closer to
[00:37:26] now and
[00:37:28] we're going to have
[00:37:30] much higher-fidelity
[00:37:32] clickable prototypes,
[00:37:33] if you will,
[00:37:35] that do more than prototypes do,
[00:37:37] in that they're functioning software.
[00:37:38] But once we have to get beyond that,
[00:37:41] I think we're going to need real horsepower,
[00:37:44] beyond the AI, to scale
[00:37:45] and integrate
[00:37:46] and everything else.
[00:37:47] But I think the number of ideas
[00:37:48] that can be tested
[00:37:49] is so much greater,
[00:37:51] far beyond what a clickable prototype could do,
[00:37:53] now that it's actual working software.
[00:37:55] Even if what's behind the surface
[00:37:57] maybe doesn't scale
[00:37:59] or isn't necessarily as reliable,
[00:38:00] it proves out that idea:
[00:38:02] does this customer workflow actually work?
[00:38:05] Can we build a business out of this?
[00:38:07] At which point
[00:38:08] there's a much better gate
[00:38:09] on where we expend
[00:38:11] the human engineering prowess.
[00:38:13] But to your point,
[00:38:14] there are just so many more ideas
[00:38:16] as a result of the supply
[00:38:17] that better ideas are going
[00:38:19] to float up to the top,
[00:38:21] the cream of the crop.
[00:38:24] Appreciate you
[00:38:25] spending time
[00:38:25] with us here
[00:38:26] today.
[00:38:27] One of the questions
[00:38:28] that I love to ask all our guests
[00:38:30] is what recent product or service,
[00:38:33] either at work or at home,
[00:38:36] has blown your socks off,
[00:38:38] that you've been totally delighted by?
[00:38:40] So Flux AI,
[00:38:41] as a competitor to
[00:38:46] any of the image generators,
[00:38:47] I mentioned DALL-E earlier,
[00:38:49] compared to that,
[00:38:51] or Stable Diffusion,
[00:38:54] I've had super good results with it.
[00:38:58] And in
[00:38:58] combination
[00:38:59] with a
[00:39:00] cloud capacity
[00:39:01] service called
[00:39:02] Replicate,
[00:39:04] which...
[00:39:05] What does
[00:39:06] Replicate do?
[00:39:07] Well, so I made the investment
[00:39:09] in a very high-end MacBook personally,
[00:39:17] and it's not the easiest thing in the world
[00:39:20] to use with GPUs.
[00:39:23] At the time I bought it,
[00:39:25] for whatever reason,
[00:39:27] I didn't think about GPUs,
[00:39:28] and Apple MacBooks are
[00:39:30] notoriously difficult to upgrade, too.
[00:39:32] It's sort of like, you buy it
[00:39:34] with the capacity that you're going to get,
[00:39:35] and then if you need more capacity,
[00:39:37] get something different, right?
[00:39:40] Replicate is a really, really
[00:39:45] super easy to use
[00:39:50] fractional GPU capacity service,
[00:39:54] where you can write a Python script
[00:39:59] on your desktop,
[00:40:02] and then there are some APIs you can call
[00:40:05] that will run it remotely,
[00:40:06] but you only get charged
[00:40:07] for the GPU capacity
[00:40:09] that you actually use,
[00:40:10] rather than having to pay for it
[00:40:12] an hour at a time or whatever.
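The pay-for-what-you-use model Derek describes can be illustrated with a little arithmetic. Both rates below are hypothetical, chosen only to show the shape of the saving, not Replicate's actual pricing:

```python
import math

# Rough cost comparison: per-second GPU billing vs. renting by the hour.
# Both rates are hypothetical, for illustration only.
PER_SECOND_RATE = 0.000975   # $/s for a fractional GPU
HOURLY_RATE = 3.60           # $/h for a dedicated GPU instance

def fractional_cost(gpu_seconds: float) -> float:
    """Pay only for the seconds of GPU time actually consumed."""
    return gpu_seconds * PER_SECOND_RATE

def hourly_cost(gpu_seconds: float) -> float:
    """Dedicated instances bill whole hours, even for a short run."""
    hours = math.ceil(gpu_seconds / 3600)
    return hours * HOURLY_RATE

run = 90  # a 90-second image-generation run
print(fractional_cost(run))  # under a dime for the seconds actually used
print(hourly_cost(run))      # a full hour billed, mostly sitting idle
```

For occasional runs, the idle capacity dominates the hourly bill, which is the "standing cycles that I probably wouldn't be using" Derek mentions avoiding.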
[00:40:15] You could do the same thing
[00:40:16] with pretty much any cloud service,
[00:40:18] but it's just the simplicity of it.
[00:40:28] With so many things,
[00:40:29] particularly in the area
[00:40:31] of generative AI,
[00:40:32] because it's evolving so quickly,
[00:40:34] the technology evolves quicker
[00:40:37] than the documentation,
[00:40:38] and so you spend a lot of time
[00:40:40] doing trial and error
[00:40:42] to get something to work.
[00:40:44] What I've
[00:40:44] noticed about
[00:40:47] Replicate is
[00:40:47] they do a
[00:40:48] really good
[00:40:48] job of
[00:40:49] staying on
[00:40:50] top of
[00:40:51] their documentation
[00:40:52] so that when you try to do something,
[00:40:55] it works the way
[00:40:56] the documentation says.
[00:40:57] You don't spend a bunch of time
[00:40:58] trying to figure out,
[00:40:58] should I be on version 1.5.6?
[00:41:01] Oh no, I should be
[00:41:02] on version 1.5.5.
[00:41:03] I need to
[00:41:04] roll back
[00:41:05] because something
[00:41:06] changed but the
[00:41:07] documentation isn't
[00:41:08] up to date yet.
[00:41:10] The relevance
[00:41:10] to me having a
[00:41:11] MacBook is rather
[00:41:12] than having to
[00:41:13] go out and
[00:41:13] buy a very
[00:41:14] expensive Windows
[00:41:17] tower with a
[00:41:18] bunch of GPU
[00:41:18] capacity and all
[00:41:19] that sort of
[00:41:19] stuff, I just
[00:41:20] made the
[00:41:21] decision, hey,
[00:41:21] I'm just going
[00:41:22] to run my
[00:41:22] stuff on
[00:41:22] Replicate and
[00:41:24] pay that
[00:41:25] fractional cloud
[00:41:26] percentage for
[00:41:27] GPU rather
[00:41:28] than having
[00:41:29] standing cycles
[00:41:29] that I probably
[00:41:30] wouldn't be
[00:41:30] using.
[00:41:33] It's been super impressive to me,
[00:41:35] and it's probably saved me a boatload
[00:41:38] in terms of processing power
[00:41:40] and what I'd have to pay for it,
[00:41:41] so that would be my choice.
[00:41:43] So what is
[00:41:44] the best way
[00:41:45] for folks to
[00:41:46] get a hold of
[00:41:47] you?
[00:41:47] You're welcome
[00:41:48] to contact me
[00:41:49] on LinkedIn.
[00:41:50] Always happy
[00:41:51] to chat with
[00:41:52] fellow professionals
[00:41:53] there, that's
[00:41:54] usually my
[00:41:55] window to the
[00:41:56] world.
[00:41:58] I do a
[00:41:59] fair bit of
[00:41:59] conference
[00:42:00] speaking, so
[00:42:01] if you happen
[00:42:01] to be at a
[00:42:01] conference where
[00:42:02] I'm speaking,
[00:42:02] please do come
[00:42:03] up and
[00:42:03] introduce yourself.
[00:42:05] We'll make
[00:42:05] sure to have
[00:42:06] a link to
[00:42:08] those tools as
[00:42:09] well as your
[00:42:09] LinkedIn in the
[00:42:10] show notes.
[00:42:11] And then the
[00:42:12] last thing I'd
[00:42:12] like to put in
[00:42:13] the show notes
[00:42:13] is how can we
[00:42:15] listen to your
[00:42:16] music or catch
[00:42:17] a show of your
[00:42:17] band?
[00:42:21] So, Rief
[00:42:22] Estramus on
[00:42:24] Spotify, YouTube
[00:42:25] Music, Apple
[00:42:27] Music, so on
[00:42:28] and so forth.
[00:42:29] Our next gig is actually next November,
[00:42:32] so maybe we'll get something before then,
[00:42:36] but we've got a slot
[00:42:38] opening for Asia.
[00:42:40] Folks may remember that band
[00:42:41] from the 80s in the UK.
[00:42:44] It's the HRH Prague Festival 15,
[00:42:50] so yeah, you're welcome to look it up.
[00:42:53] Thanks for
[00:42:54] asking, that's
[00:42:54] cool.
[00:42:55] Hey Derek, thank
[00:42:56] you so much for
[00:42:56] making the time.
[00:42:57] It was really wonderful
[00:42:58] to chat with you.
[00:42:59] I'm sure there are a lot of tidbits
[00:43:01] for the audience there;
[00:43:02] I know I learned a lot.
[00:43:03] Thank you, it
[00:43:04] was super great
[00:43:05] talking to you.
[00:43:11] I thought that
[00:43:11] was a really
[00:43:12] great share from
[00:43:13] Derek at
[00:43:14] Fitch.
[00:43:15] I really wish
[00:43:16] we'd had more time
[00:43:18] to dive further
[00:43:18] into DORA metrics specifically,
[00:43:19] as I think they can be
[00:43:21] a really powerful way
[00:43:23] to measure progress
[00:43:25] on your teams.
[00:43:26] Our team at
[00:43:27] Integral used
[00:43:28] to see new
[00:43:29] leadership,
[00:43:30] especially at
[00:43:30] some of our
[00:43:31] larger customers,
[00:43:32] often come in
[00:43:33] and introduce
[00:43:34] something like
[00:43:34] Dora without
[00:43:35] offering a ton
[00:43:37] of guidance to
[00:43:37] the teams.
[00:43:38] I appreciate Derek's approach personally,
[00:43:40] where there's an initial step
[00:43:43] of starting out being prescriptive
[00:43:45] about the approach and the process,
[00:43:47] and then shifting to being descriptive
[00:43:49] about the desired outcomes,
[00:43:51] to let the teams
[00:43:52] discover the best path.
[00:43:54] I think that initial step
[00:43:55] is sometimes overlooked
[00:43:57] and is especially important
[00:43:58] for new leaders
[00:44:01] on large teams,
[00:44:02] where the value of that initial guidance
[00:44:04] can outweigh the risk we all feel
[00:44:06] of coming across as
[00:44:08] micromanagers.
[00:44:10] Coming back to DORA:
[00:44:13] the acronym stems from
[00:44:15] the DevOps Research and Assessment team
[00:44:18] at Google, where
[00:44:20] the research originated.
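For context, the four DORA metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service. Two of them can be computed from nothing more than a deployment log; here's a small sketch, with a record format invented for the illustration:

```python
# Sketch: computing two of the four DORA metrics from a simple
# deployment log. The record format is made up for this illustration.
from datetime import date

deployments = [
    {"day": date(2024, 6, 3), "failed": False},
    {"day": date(2024, 6, 4), "failed": True},
    {"day": date(2024, 6, 5), "failed": False},
    {"day": date(2024, 6, 7), "failed": False},
]

def deployment_frequency(deps, days_in_window: int) -> float:
    """Deployments per day over the observation window."""
    return len(deps) / days_in_window

def change_failure_rate(deps) -> float:
    """Share of deployments that caused a failure in production."""
    return sum(d["failed"] for d in deps) / len(deps)

print(deployment_frequency(deployments, 5))  # 0.8 deploys per day
print(change_failure_rate(deployments))      # 0.25
```

The other two metrics need richer data (commit and incident timestamps), but the point stands: the metrics are simple to compute; the hard part is the team practices that move them.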
[00:44:21] And I think there are four things
[00:44:24] that we have seen
[00:44:25] time and time again
[00:44:26] can really have a disproportionate impact
[00:44:28] on improving your DORA metrics.
[00:44:30] The first one's obvious
[00:44:31] in the DevOps context:
[00:44:33] CI/CD.
[00:44:34] I think many teams excel at
[00:44:36] the continuous integration part,
[00:44:38] but hesitate at full CD,
[00:44:40] continuous deployment,
[00:44:42] stopping short of production.
[00:44:43] I really like
[00:44:45] encouraging the
[00:44:45] teams to take
[00:44:46] ownership of
[00:44:47] eliminating bugs
[00:44:48] early without
[00:44:49] relying on the
[00:44:50] safety net of a
[00:44:51] manual QA team
[00:44:52] in that it builds
[00:44:53] really strong
[00:44:54] responsibility on
[00:44:55] the team and
[00:44:56] also broadens the
[00:44:57] focus of your
[00:44:58] teammates beyond
[00:44:59] the individual
[00:44:59] tasks to the
[00:45:00] entire system's
[00:45:01] value to your
[00:45:02] customers.
[00:45:03] The second one
[00:45:04] is test
[00:45:05] automation.
[00:45:07] Comprehensive
[00:45:08] automated testing
[00:45:09] is crucial, but
[00:45:10] I still find it to
[00:45:11] be overlooked or
[00:45:12] optional on
[00:45:13] product teams.
[00:45:14] I personally love
[00:45:15] test-driven
[00:45:16] development to
[00:45:17] help teams write
[00:45:18] better tests,
[00:45:19] of course, and
[00:45:20] ensure high test
[00:45:21] coverage, but
[00:45:22] more importantly,
[00:45:23] align technical
[00:45:24] and business
[00:45:25] goals.
[00:45:26] I think TDD fosters
[00:45:28] richer conversations between
[00:45:30] teams, like business
[00:45:32] and technology, and
[00:45:33] bridges the gap so
[00:45:35] that you can
[00:45:35] ultimately build
[00:45:36] better products
[00:45:37] faster.
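TDD in miniature might look like this, with a business rule invented for the sketch: the tests state the requirement in executable form before the implementation exists, and writing them is where the business-and-technology conversation happens.

```python
# Sketch of test-driven development. The discount rule is invented
# purely for this illustration.

def order_total(subtotal: float) -> float:
    """Orders of $100 or more get a 10% discount (hypothetical rule)."""
    if subtotal >= 100:
        return round(subtotal * 0.9, 2)
    return subtotal

# In TDD these tests are written *before* order_total exists; they
# encode the agreement with the business in executable form.
def test_small_order_pays_full_price():
    assert order_total(40.0) == 40.0

def test_large_order_gets_discount():
    assert order_total(150.0) == 135.0

test_small_order_pays_full_price()
test_large_order_gets_discount()
```

When a test like this fails, the question "is the code wrong or is the rule wrong?" goes straight back to the business, which is exactly the richer conversation described above.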
[00:45:37] The third one, a good segue,
[00:45:41] is that close relationship
[00:45:42] between business and
[00:45:43] technology.
[00:45:44] Strong collaboration
[00:45:46] and deep trust
[00:45:47] allows for things
[00:45:48] like thin slicing
[00:45:48] requirements, which
[00:45:50] turns large
[00:45:50] deployments into
[00:45:51] smaller frequent
[00:45:52] releases, and from
[00:45:54] a DORA standpoint, it
[00:45:55] obviously reduces
[00:45:56] lead times, minimizes
[00:45:58] change failure impacts,
[00:46:00] and improves things
[00:46:01] like mean time to
[00:46:02] restore when
[00:46:04] issues inevitably
[00:46:06] arise.
[00:46:08] The last and final one
[00:46:09] is context sharing on
[00:46:10] your development team.
[00:46:12] Ensuring that more
[00:46:13] team members
[00:46:13] understand the
[00:46:14] code base helps
[00:46:15] mitigate risks,
[00:46:17] prioritize technical
[00:46:18] debt the right way,
[00:46:19] and foster a better
[00:46:21] emergent architecture
[00:46:22] as your system and
[00:46:23] your customer base grow.
[00:46:25] More informed teams
[00:46:26] can restore systems
[00:46:27] faster and make
[00:46:29] smarter design
[00:46:30] choices.
[00:46:31] And personally, we
[00:46:32] had a lot of
[00:46:33] benefit from using
[00:46:34] pair programming or
[00:46:35] mob programming in a
[00:46:37] pragmatic way to
[00:46:39] help create a safe
[00:46:40] environment for
[00:46:41] everyone to get
[00:46:42] familiar with the
[00:46:42] system in an
[00:46:44] efficient and
[00:46:44] sustainable way.
[00:46:47] Given the
[00:46:48] importance of these
[00:46:49] topics, we're going
[00:46:50] to definitely explore
[00:46:51] these further in a
[00:46:52] future episode.
[00:46:54] And for now, you
[00:46:55] can check out a
[00:46:56] blog post on
[00:46:58] Dora Metrics that
[00:46:59] we did at
[00:47:00] Integral that is
[00:47:01] linked in the show
[00:47:02] notes.
[00:47:03] Thanks a lot, as
[00:47:04] always, folks, for
[00:47:05] listening to this
[00:47:06] part two with
[00:47:08] Derek Ferguson from
[00:47:09] Fitch.
[00:47:10] We will be back
[00:47:11] next Tuesday, as
[00:47:12] always, with
[00:47:13] another episode on
[00:47:15] fostering the most
[00:47:16] engaged product teams
[00:47:17] who ship the
[00:47:19] most delightful
[00:47:20] products.
[00:47:21] In the meantime, I
[00:47:22] hope you have a
[00:47:23] wonderful week.
[00:47:30] Thank you for
[00:47:31] joining me on the
[00:47:31] Convergence podcast
[00:47:32] today.
[00:47:33] Subscribe to the
[00:47:34] Convergence podcast
[00:47:35] on Apple Podcasts,
[00:47:37] Spotify, YouTube,
[00:47:39] or wherever you
[00:47:40] get your content.
[00:47:42] If you're listening
[00:47:43] and found this
[00:47:43] helpful, please give
[00:47:44] us a five-star
[00:47:45] review.
[00:47:46] And if you're
[00:47:46] watching on YouTube,
[00:47:47] hit that like button
[00:47:48] and tell me what you
[00:47:49] think about what you
[00:47:50] heard today.
