Kat Gaines: Welcome to Page It to the Limit, a podcast where we explore what it takes to run software in production successfully. We cover leading practices used in the software industry to improve both system reliability and the lives of the people supporting those systems. I’m your host, Kat Gaines, and you can find me on Twitter @StrawberryF1eld using the number one for the letter “I”. Okay folks, welcome back to Page It to the Limit. Again, I’m Kat Gaines, and we’re really excited to talk about the topic that we have on the podcast today, I guess you would say the hot topic of the moment. So with us today to talk about all things AI and the software development lifecycle, we have James Governor from RedMonk. James, if you want to say hello and introduce yourself.
James Governor: Yeah, great. I am James Governor, Co-founder of a research company called RedMonk. We are basically research and advice, helping people to understand the choices that software developers, engineering teams and practitioners are making. Historically, so much of IT was a purchasing-led phenomenon: you needed to buy all the infrastructure before you could do anything. Part of this self-service world is that with the cloud, with open source, with open cloud-based repos like GitHub, that need to ask permission has become far less. So we try and help people understand how to empower developers, why developers are making the choices they do, and kind of where the industry is headed.
Kat Gaines: Yeah, and I think that’s great framing for this conversation today. So just to kind of get us started James, do you want to talk about where you see things in the AI conversation right now? And just for anyone who maybe isn’t caught up, hasn’t been following the many, many, many headlines out there, just kind of where we’re sitting and what things look like.
James Governor: Okay, well I’m not in team “this is the end of the world”.
Kat Gaines: Me either.
James Governor: I think there are more pressing concerns than the robots coming to Skynet us out of here. So for me, there’s one key story that really brought home to me how much things have changed in the last six months. We’ve probably been a little bit AI skeptic. Yes, machine learning is a thing, has been a thing for a while. But what’s different? Certainly, with the launch of ChatGPT at the end of last November, there was a step change in the culture around what you might be able to do with generative AI, and just general excitement about that. The story I like to tell came from some friends and clients, actually, at a company called Deepset. They’re a German company working in the transformers space, basically building a natural language processing engine based on transformers. They began just after Google published its famous paper back in 2017, so they’ve been at it for a couple of years, trying to persuade large organizations, and succeeding with government orgs, some financial and legal firms, of the value of natural language processing. Anyway, they were just winding down for Christmas in Germany, mid to late December 2022, and they get a call from an 80-year-old German gentleman. And he’s like, “Well, I run one of Germany’s biggest legal information firms and I’ve heard about this ChatGPT thing. I think it’s going to change my business, so I need you in Stuttgart tomorrow.” And the guy’s like, “Well, I just had a baby and Christmas is coming.”
Kat Gaines: Not ideal timing,
James Governor: Not ideal timing. And the guy said, “In that case, we’ll give you two days.” So you’ve got this octogenarian in very traditional legal services, and he was literally saying, this is as big as the internet and we need to get on this now. I think the other story for you is probably my colleague Rachel. Her mother had been looking for the longest time for a microwave that would fit a specific size of cabinet they had. She had done so many searches, every month she’d try Google again, and couldn’t find it. She was like, I’ve heard of this ChatGPT thing. And with that she went and found a microwave that would fit the built-in cabinet. So anyway, the fact that it crossed over into the general culture, and then, by the way, the people that would be skeptics, software developers, you would expect them to say, “This is bullshit.” They’re all like, “Well, I’ve been using Copilot and actually some of the code is pretty good.” So I think the TLDR is, there is a there there. Now we just need to work out how it’s going to make us more effective. But the demand is going to be really substantial, because if 80-year-olds want it, then probably everyone wants it.
Kat Gaines: I think so. And the microwave story is sticking with me right now, because that’s the type of thing … I’ve worked at PagerDuty for almost nine years, and I don’t know how many times I’ve explained to my parents what I do. They generally get it now, but it took probably five or six different explanations and analogies. So if you have everyone relating to a technology in a different way in their life, where it actually impacts something they’re doing day to day, something they understand, that does mean that it’s kind of big, that it’s interesting, that it’s probably here to stay. And I think you’re right, it will probably change things. Maybe it’s not drop-everything-tomorrow, go-cancel-Christmas big. I don’t think anything is, personally. Have your holidays, have your time off; your life is generally going to be a higher priority than your work life. But it is something to pay attention to and think about how it’s going to impact us. We were talking a little bit before we started recording about the impact across the software development lifecycle, and for our listeners, I really want to get into that, because that’s what we’re all thinking about right now. You mentioned Copilot, obviously. There are a number of other ways that we’re all engaging with this and trying to figure out where it fits in. What do you think are the ways we’re going to see this impacting the software development lifecycle, not just now but going forward? And what types of new things are we going to have to manage as human beings around this?
James Governor: Right. I think that’s a great question. And I’ve been thinking about this in terms of productivity: the ability to scratch an itch, the ability to build a tool, to make the business of building tools more effective. There’s a guy called Simon Willison, one of the original creators of the Django framework. I think he has a great radar. I mean, I’m an analyst, so I’m supposed to have a radar, but that means I have to look at practitioners that are doing interesting work. And he wrote a post about how, historically speaking … he builds a platform called Datasette, which is a data journalism platform based on SQLite. His daily decision-making is about productivity, about making himself more effective as, at the moment, a one-person team trying to build a platform. Historically, he would be in a situation where he would ask the question: maybe I could build a native application front end for this using iOS, but I don’t really know iOS, so I’ll do that later. Or: I can think of a transformation I could do with a library that I’ve been meaning to try, but I know it’s going to take me a day to research it, to learn that library, to work out the implications in terms of installing it, where it runs, what language bindings I would need, those sorts of things. And he said that now, instead of asking himself, can I do this? It will probably take me a day, which will prevent me from doing my core job. He generally goes and plays around with Copilot, or asks ChatGPT. Now, he’s a brilliant developer, don’t get me wrong, so he’s able to tell if the machine is hallucinating. He’ll be like, oh wait, this won’t quite work, but if I did this, maybe made the query a little more well-articulated … It’s giving him code that he thinks will run, and he’s now doing those experiments. He’s at the point where it takes him longer to write the blog post about doing something than it does to actually do the thing itself. And I think that productivity advantage … or another example, I was at Microsoft Build just recently, and you’ve got Thomas Dohmke, who runs GitHub. He did a demo where he pointed the AI at some regex and it was able to make it readable. And I don’t know about you, but most people who create regex can’t even remember what they themselves wrote, let alone understand what somebody else wrote. So there’s a really strong productivity advantage there. And if we’re going to be building more tools to create tools, there’s a lot more software. But how do we turn it off? How do we manage that? What are the day-two implications of a world where there are so many flowers blooming, but maybe we need to prune them? What are the software development management questions for observing it, for making sure that we’re not using up a load of additional capacity when we want it to be ephemeral? When do we switch it off? What if we switch it off and a user is annoyed because they were already using it? I think there are all these questions, not about the birth of a service but the death of a service, that are really service management questions, and that’s something I’m fascinated by.
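As a rough illustration of the regex-readability workflow James describes, here is a minimal sketch that asks a chat model to explain a regular expression in plain English. It assumes the OpenAI Python client (v1 or later), an API key in the environment, and an illustrative model name; the GitHub demo James mentions used GitHub’s own tooling, so this is not that implementation, just the general idea.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Example regex to explain; any pattern you have trouble reading would do.
regex = r"^(?P<user>[\w.+-]+)@(?P<domain>[\w-]+\.[\w.-]+)$"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not a recommendation
    messages=[
        {"role": "system", "content": "You explain regular expressions in plain English."},
        {"role": "user", "content": f"Explain what this regex matches, piece by piece:\n{regex}"},
    ],
)

print(response.choices[0].message.content)
```

The same pattern (a short prompt plus the artifact you want explained) applies to the library-exploration use case Simon Willison describes: the model produces a starting point you then verify by running it.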
Kat Gaines: Yeah, and I think that’s the piece where you hear a lot of concern and fear: oh, it’s going to take jobs, it’s going to eliminate the need for human connection in what we’re building, in our services, in whatever we’re doing. I’m embedded in the customer support and technical support community especially, and that’s been a lot of the initial conversation, the fear that you won’t need to hire people anymore. But really, you still need people to manage it, because you can’t trust the AI to be the absolute be-all, end-all. It can’t do those things for you that you’re mentioning: understanding that death-of-a-service state, understanding the pieces that do need a human moment involved in them. You said that the person you were talking about is smart enough to know if the machine is hallucinating. We’ve seen those examples where the AI just gives completely incorrect information, and so you need someone to be able to see that, to recognize it, to mitigate it. And I feel like a bit of a broken record at this point, but what I’ve been saying is: it’s not a human replacement, it’s a human assist. It’s something to help you with exactly those types of things, where you’re putting off work because you don’t have the bandwidth or the knowledge, or something else is standing in your way, and being able to say, “Okay, I’m going to go solve that problem.” It takes that blocker away. You can now get this work done because you basically have an assistant at hand that’s allowing you to do so.
James Governor: Yeah, I think that’s a brilliant way of looking at it, having that assistant with us all the time. Look, I’m half British and half American. So the British side of me is like, it’s going to take all the jobs, everything is screwed, everything’s bad. But the optimistic American side is like, ah, this is going to make us more productive. I think you’re right. There’s a lot of work still to be done in IT, and better assistance helps. And anything that can help us learn is always attractive, anything that makes us better at learning new things: here is an adjacent skill I want to learn, I would like that assistant. One would hope … look, Excel spreadsheets did not get rid of accountants or people who wanted to calculate things. I tend to think there’s going to be a period of adjustment, and I’m sure there’s going to be some pain. But every time we throw productivity enhancements from automation at humans, we absorb it, we become more productive and we move on.
Kat Gaines: Yeah, I agree. I think historically there has been a lot of fear every time there’s new technology. I’m right there with you; I’m a skeptic in my own regard. But I think that’s the point, that’s the idea. Everyone needs a healthy amount of skepticism here, to be paying attention, to be making adjustments as we go along, and to really keep it under our supervision, so to speak. Make sure that you really are involved in what’s going on, that you don’t just implement a tool and say, “Okay, well, I’m going to take this, sit back and let it do everything.” You have to stay involved in your work and what’s happening. And that brings me to another question about where we want to go with this. What types of problems are we going to be expected to solve, looking at any company out there? When I say we, I’m not saying PagerDuty specifically, I’m saying anyone in our space. What types of things are going to be the expectation in terms of, well, where do we need to introduce that efficiency? Where does that need to come in? And what does it need to look like in those offerings?
James Governor: Yeah, I think that’s a really good question. And for me, again … I mean, we’ve been in an era where … it’s like a meme, it’s a joke, and also just the truth, that developers will say, I forgot how to do something, so I went and looked it up. We know that. But there are also people just going to Stack Overflow and literally cutting and pasting code from there. They have no idea where it came from; they’re just like, that’s somebody who answered the question and I’m going to put this into my code. And that’s not really that different from using Copilot or asking ChatGPT to generate some code that would be useful. I do think these engines are much more effective than they have been. But then there’s a really interesting set of questions about how we optimize that. It goes back to Simon Willison being like, well, I think it kind of works, but I want it to have a slightly different kind of output. Or, I need to observe the behavior of this system; I’d like to do it as a feature flag. There’s so much day-two work about understanding the behavior of the system in order to make it more effective that I think there’s a whole world of interesting work to be done in and around that. Because writing the software is not the most important part, I don’t think. It’s super important, but it’s almost managing the software, understanding its behavior, making sure it’s secure. There are all sorts of new security questions and vectors around prompt injection. That’s where I see so much of the interesting work ahead of us. Yeah, there are a lot of potential blockers, as well, in our use of the technology. But I think societally, it’s going to be quite interesting to work out.
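To make the day-two point a little more tangible, here is a hedged sketch of gating an AI-generated feature behind a feature flag and logging its use, so you can observe behavior, fall back gracefully, and decide when to switch it off. The flag store, the flag name "ai_summaries", and the summarizer stub are all made up for illustration; a real system would use a proper feature-flag service and a real model call.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_features")

# Hypothetical in-memory flag store; a real system would query a feature-flag service.
FEATURE_FLAGS = {"ai_summaries": True}


def is_enabled(flag: str) -> bool:
    """Return True if the named feature flag is switched on."""
    return FEATURE_FLAGS.get(flag, False)


def call_model(text: str) -> str:
    """Stand-in for a real model call, kept as a stub so the sketch stays self-contained."""
    return "Summary: " + text[:100]


def summarize_ticket(ticket_text: str) -> str:
    """Serve an AI summary when the flag is on, otherwise fall back to simple truncation."""
    if not is_enabled("ai_summaries"):
        logger.info("ai_summaries disabled; serving fallback")
        return ticket_text[:200]

    summary = call_model(ticket_text)
    # Log enough to observe usage and cost, which informs the switch-it-off decision.
    logger.info("ai_summaries served; input_chars=%d output_chars=%d",
                len(ticket_text), len(summary))
    return summary


if __name__ == "__main__":
    print(summarize_ticket("Customer reports intermittent 502s from the EU load balancer since Tuesday."))
```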
Kat Gaines: Yeah, I agree. I think what you were mentioning a moment ago about Stack Overflow is a good reminder to everyone: vet your Stack Overflow copy and paste, kids. Don’t just blindly dump it into something. It’s really tempting, but it’s a trap that everyone does fall into, and it’s the same thing with AI. The other thing I’m seeing is that a really sweet spot for the technology can be a first draft of something. So you’re writing a first draft of an incident review, or a first draft of … I was trying to kick around ideas for a title for a talk a couple of months ago, and I realized, oh, that’s actually a helpful space for something like this: when I need to dump the contents of my brain and say, “Organize this into something that makes sense.” It’s definitely not a finished product, but it’s a first draft-
James Governor: Yeah, for sure.
Kat Gaines: -on where I can go with it. And just helping the creative process, so to speak along. And being aware that you’re not taking a finished product out of it most of the time.
James Governor: Yeah, I think that’s, again, a really good analogy with creative work. Software is creative work, and it’s the same thing: treat it as a first draft, it’s not the end, it’s not the finished product.
Kat Gaines: Yeah, exactly. And I think that’s where people get a little too caught up. Someone was telling me a story the other day about how they had to talk their CEO down from wanting the company to drop absolutely everything just to work out where they could fit in AI. They had to come back and say, hey, here’s the priority work we’re doing, and here’s where … we don’t have something that fits into this right now. So why don’t we put it on the back burner? It’s great to be aware of, but we need to get this other work done, because this is what our users are expecting of us.
James Governor: I think I saw something today that said something like 30% of job ads in European banking now include the word AI.
Kat Gaines: Oh, wow. Yeah.
James Governor: There’s definitely an over-rotation, which is surprising given that this was on the same day we had this publication suggesting some of the EU’s potential restrictions on AI could make things very, very interesting for the software industry and regulation. Tech was already like, this GDPR thing sucks, why do we have to do this? But what they’re talking about would be a whole other level: for example, that you could not use AI to do emotional sentiment analysis of employees. And if you’re a software company, those are the kinds of things where you’re like, wouldn’t it be interesting if we could do this? Are you going to only sell this in Asia and the States, but not in Europe? How do you manage that? And so-
Kat Gaines: How do you regulate that? Yeah.
James Governor: Yeah. So I think regulation, it’s going to happen pretty quickly. I mean, obviously, we’ve had Sam Altman be like, “Hey, I’m in the castle. Let’s raise the drawbridge. Let’s have regulation now. Of course you could trust OpenAI.”
Kat Gaines: Totally.
James Governor: Totally, yeah. So I think people will be like, oh, Europe are idiots, they’ve made a rod for their own backs. I do think that markets are, in many cases, made by regulation, and that stuff will have to work itself out. Those are software challenges. But I don’t think it’s a done deal, and neither should it be. A degree of caution, I think, is reasonable. The most obvious example would be redlining in the States, where Black people were disadvantaged in property markets because of algorithms. You don’t want to have to wait 20 years to identify that that was a problem. That’s the sort of thing you should be thinking about beforehand. And if you’re like, oh, it doesn’t matter, we’ll work it out later, that always comes from a position of privilege. So I think some caution is probably merited. And if that creates challenges for IT, that’s great, that’s what we’re here for. That’s our job: to solve, work around and deal with the sorts of challenges that we face.
Kat Gaines: Oh, do you mean that we’ll all still have jobs if we have to-
James Governor: We’ll all still have jobs.
Kat Gaines: -solve interesting problems around AI.
James Governor: You see what I did there?
Kat Gaines: Amazing. Really, yeah, I think you’re absolutely right. You don’t want to wait 20 years to find out that those problems are problems. If you can afford to wait, that comes from a place of privilege, and if you do, it’s going to come back to bite you eventually. There’s no way that doesn’t backfire. And it’s a really sticky place; we’re all going to have to figure out where those types of laws and regulations get introduced and how we work around them. If you have a user who says, “Well, why can’t I have that thing?” and you have to say to them, “Well, your country doesn’t allow it,” you also have to figure out your communications around that, how you manage that. There is so much that has to go into it, and I think it’s one of those really interesting problems. We should be excited to solve these problems, because we get to use cool new technology and also make sure we’re doing it responsibly and ethically and in a way that makes sense.
James Governor: Absolutely. And as you say, customer support is the same thing. Like, oh, we should have sentiment analysis while we’re doing the customer support. Well, if you can’t, then those are constraints we’ll have to work with. And if you can, then that’s great; let’s make sure that once we’ve done the analysis, the people on the other end can still deal with humans if that’s what they want. So I don’t think we’re going to see the end of exceptions anytime soon.
Kat Gaines: Yeah, that customer support piece, too … I think there’s a genre where it’s being viewed as just chatbots, and that’s where AI fits in. But you can be doing different things. You can be doing sentiment analysis in all types of ways across a customer interaction. You can be looking at how to predict whether someone might eventually ask to escalate a ticket or a case, whether there’s a risk that they aren’t happy with what’s going on. You can be working on process building; there are a lot of funky integrations between support tools and others that you have to build process around, which can be really manual and tedious for those teams a lot of the time. You can do things like QA, helping you work through and understand where agents are and where they need help, which has been pretty manual to date. There are a bunch of tools for it, but it’s still a really manual process. I don’t know, there are a lot of really interesting tools out there doing these things for those audiences specifically. And I think that’s the thing too: you have an audience or a team that often isn’t prioritized with tooling or resources in a business, to be honest, and there’s a tool saying, we’re going to eliminate some of those problems by making this easier for you.
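As a rough sketch of the sentiment and escalation-risk idea Kat describes, the snippet below scores support messages with the Hugging Face transformers sentiment pipeline and flags strongly negative ones for human review. The pipeline call is real, but relying on the library’s default model and the 0.9 threshold are illustrative assumptions, not a recommendation for production support tooling.

```python
from transformers import pipeline

# Downloads the library's default sentiment model on first run; in practice you would
# pick and evaluate a model suited to support conversations.
classifier = pipeline("sentiment-analysis")

messages = [
    "Thanks, that workaround fixed it!",
    "This is the third time I've reported this and nothing has changed.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    # Illustrative rule: strongly negative messages get flagged for a human to review.
    needs_review = result["label"] == "NEGATIVE" and result["score"] > 0.9
    prefix = "ESCALATION RISK" if needs_review else "OK"
    print(f"{prefix}: {result['label']} ({result['score']:.2f}) -> {text}")
```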
James Governor: Yeah, definitely. Well, you wouldn’t work at PagerDuty if you didn’t see the value of automation, but also the value of humans and the work they do. I think that’s probably baked into the original mission of making on-call suck less.
Kat Gaines: Absolutely. It’s baked into the entire core of what we do and who we are.
James Governor: Exactly. The company mission is about automation and augmenting the work that practitioners and humans do.
Kat Gaines: The people are at the heart of it.
James Governor: I’m sure there’s a bunch of exciting stuff to come from y’all going forward.
Kat Gaines: Yeah, definitely. I’m really excited to see where we go with those things too. I think that if we are to … we’ve kind of covered a lot of ground here in just about 25 minutes or so. If we are to leave our listeners with a couple of things they could check out to either get up to speed or just get more context on some of the things we were discussing, what would you tell them to go look at?
James Governor: Well, from our perspective at RedMonk, one of the things we look at a lot is developer experience, and in particular the developer experience gap. We’ve lived in this golden era where developers can choose all of these tools and build things. They’ve got so much opportunity, and from an organizational perspective, maybe [inaudible 00:22:56] teams, whatever else. But then you’ve got these things that you built, and you end up maintaining them yourself. You wanted to work on day one, and then you’re working on day two, day three and onward. There’s a gap there, because the developer really just wants to be doing the creative work. So the developer experience gap is something we’ve spent time looking at, and what’s really interesting for us from that perspective is: what are the implications of developer experience and AI? That’s some of what we’ve been looking at in terms of publishing. There are a few posts that I’d love for listeners to check out, so we’ll share some links. But developer experience, the developer experience gap and where that meets AI, that’s one of the big research areas for us as a firm for the foreseeable future.
Kat Gaines: Yeah, makes sense. Yeah, folks, check out those links. We have them in the resources section of the text description of the podcast. James, before we finish up here, there are a couple things that we do ask every guest on the show. So I think I’ll twist the questions a little bit. We usually talk about this in relation to running software in production. But I think in relation to our topic, what is one thing you wish you would’ve known sooner?
James Governor: It’s a great thought experiment, but we never have that luxury. For me, it’s more, I’m just so excited by what’s going on right now. I mean, I’ve been in IT, I have watched revolutions happen. And as a consultant and a researcher, I can be like, oh, we have this good idea. I’m just going to continue to sell that for the next five, ten years, whatever. Or maybe I can get through to the end of my career, doing the things I’ve already done. I don’t need to learn a new thing, I don’t need to, but-
Kat Gaines: Time to coast.
James Governor: Yeah, totally coast. But that’s not me. And so is it existential, the impacts of this wave of technology on what we do? Sure. But that’s exciting. Everything changed in November, so, yeah, no, I think you just have to ride that wave. I mean, other than stupid stuff, I mean much though I hate Bitcoin, should I have bought it when it was two pennies or something? I would still be working, even though I’d be extremely rich. Or should I have bought Apple in 1999? I mean, those are just dumb things. I’m excited. I don’t know if there’s anything I would want to know earlier than … we grow into things. I wish when I was younger, I wasn’t as temperamental as I was. But I’m not sure that’s answering your question.
Kat Gaines: Not quite, but I agree with you. I think that it’s sometimes just fun to find out and see what’s happening around these trends. And then is there anything around this conversation, around AI that you’re glad I didn’t ask about, that we didn’t discuss?
James Governor: There’s been so much. Again, there’s so much AI washing at the moment. Everybody that comes to us is like, we want to talk about this AI thing, and very often there’s literally no there there. And in a world where it is easier to build things, it’s more … there are just a lot of conversations at the moment where I’m like, well, just show me, don’t tell me. So I am glad that you didn’t ask me for my bank account details and my mother’s maiden name.
Kat Gaines: I’m glad I didn’t ask that too.
James Governor: There you go.
Kat Gaines: There we are. Yeah. Okay, perfect. So I think with that, we’re going to go ahead and wrap up. James, thank you so much for joining us on the show. This has been a great conversation.
James Governor: Awesome to talk to you. Thanks so much for inviting me.
Kat Gaines: All right, folks, that is it for our episode. And again, this is Kat Gaines wishing you an uneventful day. That does it for another installment of Page It to the Limit. We’d like to thank our sponsor PagerDuty for making the podcast possible. Remember to subscribe in your favorite podcatcher, if you like what you’ve heard. You can find our show notes at PageIttotheLimit.com and you can reach us on Twitter @PageIt2theLimit using the number two. Thank you so much for joining us, and remember, uneventful days are beautiful days.
James Governor is co-founder of RedMonk, the only developer-focused industry analyst firm. Based in London, he advises clients on developer-led technology adoption, cloud, open source, community and technology strategy. He coined the term “Progressive Delivery.”
Kat is a developer advocate at PagerDuty. She enjoys talking and thinking about incident response, customer support, and automating the creation of a delightful end-user and employee experience. She previously ran Global Customer Support at PagerDuty, and as a result it’s hard to get her to stop talking about the potential career paths for tech support professionals. In her spare time, Kat is a mediocre plant parent and a slightly less mediocre pet parent to two rabbits, Lupin and Ginny.