This study investigates how teams in the RoboCup Standard Platform League (SPL) manage software quality assurance (QA) and performance measurement, both critical to achieving RoboCup’s ambitious goals. Through qualitative interviews with eight SPL teams, we analyzed their practices, challenges, and strategies for balancing innovation, education, and competition.
The findings reveal that manual QA approaches, such as expert judgment during test games, are widely employed and remain a cornerstone even for the league’s top-performing teams. Only a minority of teams have implemented more structured and automated mechanisms akin to those used in software engineering. Furthermore, almost none of the teams engage in performance measurement beyond manual assessment through observation. We explore potential explanations for these findings and conclude that there is ample room for improvement toward more structured and automated approaches to quality assurance and performance measurement. This study highlights the potential for league-wide initiatives to promote standardized performance metrics and tools that enable teams to adopt more structured QA practices. By fostering automation and consistency, the RoboCup SPL can better support teams in achieving both educational objectives and competitive excellence, paving the way toward the league’s long-term vision.
These results hold practical significance for shaping the roadmap of RoboCup and offer valuable insights for establishing team-based quality assurance and performance measurement practices.
If you have any questions, please contact me using the contact information provided in the paper.
Download: Full text PDF