Harnessing AI for the Insightful Analysis of Automated Test Outcomes
Simplifying Automated Test Analysis with AI
Diving into the nitty-gritty of automated testing can be a daunting task, especially when you're dealing with thousands of test scenarios. Imagine running about 4,000 tests each night and ending up with around 200 failures to sift through daily. It's a massive undertaking, and that's where artificial intelligence steps in to lend a hand.
How AI is Changing the Game
At a recent QA Challenge Accepted event, Maroš Kutschy shared how artificial intelligence is changing his team's approach to analyzing automated test results. The goals: save time, cut down on human errors, and zero in on fresh failures. The solution they settled on was ReportPortal, a tool that uses AI to make sense of automated test results.
The Role of ReportPortal
ReportPortal is an on-premise tool that testers use daily. It helps them quickly identify which test failures need attention. What makes it stand out is its ability to categorize failures. If a failure occurred before and was analyzed, ReportPortal remembers the outcome and classifies it accordingly. This means testers only need to focus on new issues, significantly reducing the workload.
Kutschy explained that the tool provides a clear picture of why tests are failing—be it a product bug, an automation glitch, or an environmental hiccup. This real-time visibility into testing outcomes helps teams decide whether an application is ready for release.
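The auto-analysis itself is built into ReportPortal, but the underlying idea can be illustrated with a small sketch: compare a new failure's stack trace against previously analyzed failures and reuse the verdict of the closest match. The `FailureRecord` type, the `classify_failure` function, and the similarity threshold below are illustrative assumptions for this article, not ReportPortal's actual implementation.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

# Defect categories mirroring the ones mentioned in the talk.
PRODUCT_BUG = "product bug"
AUTOMATION_BUG = "automation bug"
ENVIRONMENT_ISSUE = "environment issue"
TO_INVESTIGATE = "to investigate"


@dataclass
class FailureRecord:
    """A previously analyzed failure: its stack trace and the tester's verdict."""
    stack_trace: str
    defect_type: str
    jira_ticket: str | None = None


def classify_failure(stack_trace: str,
                     history: list[FailureRecord],
                     threshold: float = 0.9) -> FailureRecord | None:
    """Return the most similar previously analyzed failure, or None if nothing
    is similar enough and the failure needs human investigation."""
    best, best_score = None, 0.0
    for record in history:
        score = SequenceMatcher(None, stack_trace, record.stack_trace).ratio()
        if score > best_score:
            best, best_score = record, score
    return best if best_score >= threshold else None


# Example: a recurring failure inherits last night's verdict,
# so only genuinely new failures end up in the "to investigate" pile.
history = [FailureRecord("NullPointerException at CheckoutPage.pay(CheckoutPage.java:42)",
                         PRODUCT_BUG, "PROJ-123")]
match = classify_failure("NullPointerException at CheckoutPage.pay(CheckoutPage.java:42)", history)
print(match.defect_type if match else TO_INVESTIGATE)  # -> "product bug"
```

In this simplified view, only failures that fall below the similarity threshold land in front of a tester, which is what shrinks the daily pile of 200 failures down to the handful that are actually new.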
Learning and Adapting with AI
One of the key takeaways from Kutschy's experience is that AI is only as good as the data it's trained on. If testers make incorrect decisions, the AI will learn those mistakes. However, there's room for correction. If a failure is misclassified, testers can manually adjust the decision, allowing the AI to "unlearn" the error.
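Continuing the illustrative sketch above (again, not ReportPortal's real API), "unlearning" amounts to overwriting the stored verdict: once a tester corrects a misclassified failure, future matches inherit the corrected label instead of the mistaken one.

```python
def reclassify_failure(record: FailureRecord,
                       new_defect_type: str,
                       jira_ticket: str | None = None) -> None:
    """Overwrite a stored verdict so that auto-analysis stops repeating
    the earlier, mistaken classification on future matching failures."""
    record.defect_type = new_defect_type
    if jira_ticket is not None:
        record.jira_ticket = jira_ticket
```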
The introduction of ReportPortal wasn't without its challenges. Initially, testers needed to categorize all existing failures, which involved assigning the correct status and linking them to Jira tickets. After a trial period, the feedback was overwhelmingly positive, and the tool became a staple in their testing process.
Trusting Artificial Intelligence
Kutschy emphasized the importance of verifying AI's accuracy before relying on it completely. They had to ensure that ReportPortal made the right calls, which sometimes required tweaking settings and handling stack traces effectively.
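One practical point behind "handling stack traces effectively" is that raw traces often contain noise such as timestamps, thread IDs, and memory addresses, which can make otherwise identical failures look different to an analyzer. The sketch below shows the kind of normalization a team might apply before matching; the regular expressions are illustrative assumptions, not settings taken from the talk or from ReportPortal.

```python
import re


def normalize_stack_trace(trace: str) -> str:
    """Strip volatile details so recurring failures compare as equal."""
    trace = re.sub(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\S*", "<timestamp>", trace)
    trace = re.sub(r"0x[0-9a-fA-F]+", "<address>", trace)
    trace = re.sub(r"Thread-\d+", "Thread-<n>", trace)
    return trace
```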
The journey taught them that AI isn't just for creating test automation code; it's incredibly useful for analyzing test results too. AI, including generative models, presents a wealth of opportunities in testing.
In summary, using AI for analyzing automated test results can be a game-changer. When implemented correctly, it saves time and reduces errors, allowing teams to focus on what truly matters—developing robust, bug-free software.