Let’s just use AI to figure out the quality piece! I imagine more than a few developers or engineering leaders have muttered this already this year, and I don’t necessarily blame them. But it remains to be seen whether this is actually going to work.
One of the trends we are seeing in engineering organizations is that quality now often falls to the developers. There are fewer separate departments (my team writes the code, and your team tests it), which greatly impacts quality as a discipline. We see developers turning to AI to try to bridge the gap between writing and testing code.
It’s as if we are running an industry experiment, and the results are still pending. While we all wait at the edge of our seats, I wanted to share some tangible ways that software testers can use and think about AI right now. Spoiler: it’s not writing tests for you.
As with any testing strategy, prioritization should be one of the first things you think about. But what are we prioritizing when we assume AI is going to help write the code and the tests? I’d argue it’s the wrong thing.
The bottleneck isn’t writing tests; it’s maintaining them. If your AI solution isn’t helping you maintain tests, write them in an extensible way, and understand what’s going on with them, then AI solutions for writing tests won’t be a successful long-term play.
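One concrete way to make tests extensible is to keep test cases as data rather than code, so maintaining the suite (whether a human or an AI wrote it) means reviewing and editing rows, not rewriting functions. This is a minimal sketch; `normalize_email` and the cases are hypothetical stand-ins for your own code under test.

```python
def normalize_email(raw: str) -> str:
    """Hypothetical function under test: lowercase and strip an email."""
    return raw.strip().lower()

# Each case is (input, expected). Adding coverage is one new row,
# which keeps the suite easy to review, extend, and maintain.
CASES = [
    ("Alice@Example.COM", "alice@example.com"),
    ("  bob@example.com ", "bob@example.com"),
    ("carol@example.com", "carol@example.com"),
]

def failing_cases() -> list[tuple[str, str, str]]:
    """Return (input, expected, actual) for every case that fails."""
    return [(raw, want, normalize_email(raw))
            for raw, want in CASES
            if normalize_email(raw) != want]

if __name__ == "__main__":
    print("failures:", failing_cases())
```

Frameworks like pytest offer the same pattern natively via parameterized tests, but the underlying design choice is the same: the cases are data, and the assertion logic lives in one place.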
Maintaining AI-generated tests is actually harder because you didn’t write them. You have to take a step back and first understand what it’s saying and why it arrived at its output. I use AI a lot, but I’m constantly asking questions about the results before I move on. Working with AI is still a conversation, not a command where I tell it: "Hey, do this," and it’s done.
A better way to think about AI and software testing is to recognize that moving from manual testing to automated testing is not just a rip-and-replace. You can’t simply run the same manual tests on a computer. Humans need to rethink what a computer is good at and how teams can scale with it in a way that wouldn’t make sense for a human but does for automation.
Splitting tests up and running them in parallel will work the same way with AI: the method has to change, and teams will have to collaborate with AI to make progress. Firing everyone except one person and expecting them to do everything? We’re just not there yet.
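Splitting and parallelizing is a good example of scaling in a way that makes sense for automation but not for a human: independent checks fan out across workers instead of running one at a time. Here is a rough sketch using Python's standard `concurrent.futures`; the three check functions are hypothetical placeholders for real tests.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent checks; in practice each would exercise
# a real piece of the system under test.
def check_login() -> bool:
    return True

def check_search() -> bool:
    return True

def check_checkout() -> bool:
    return True

CHECKS = [check_login, check_search, check_checkout]

def run_parallel() -> dict[str, bool]:
    """Run every check concurrently and collect results by name."""
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        futures = {check.__name__: pool.submit(check) for check in CHECKS}
        return {name: fut.result() for name, fut in futures.items()}

if __name__ == "__main__":
    print(run_parallel())
```

Test runners such as pytest-xdist do this distribution for you, but the point stands either way: the suite has to be designed so tests are independent before any of this, AI-assisted or not, can pay off.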
I’m an advocate for AI, but only when it’s used to improve workflows and quality. While it’s not there yet, it could someday be. Until then, I’m going to continue to use AI to maintain tests, provide context, and guide testing strategies. Learn more about how manual testing still fits within an automated testing strategy.