Designing Human Benchmark Experiments for Testing Software Agents


Background: The use of software agents is becoming increasingly common in the engineering of software systems. We explore the use of people in setting benchmarks by which software agents can be evaluated. In our case studies, we address the domain of instructable software agents (e-students) as proposed by the Bootstrapped Learning project (Oblinger, 2006).

Aim: Our aim is to refine requirements, problem solving strategies, and evaluation methodologies for e-students, paving the way for rigorous experiments comparing e-student performance with human benchmarks.

Method: Because little was known about which factors would be critical, we take an exploratory case-study approach. In two studies covering three distinct groups, we use human subjects to develop an evaluation curriculum for e-students, collecting quantitative data through online quizzes and tests and qualitative data through observation.

Results: Though we collect quantitative data, our most important results are qualitative. We uncover and address several intrinsic challenges in comparing software agents with humans, including the greater semantic understanding of humans, the eidetic memory of e-students, and the importance of various study parameters (including timing issues and lesson complexity) to human performance.

Conclusions: Important future work will be controlled experiments building on the experience of these case studies. These will provide benchmark human-performance results for specific problem domains, against which e-student results can be compared.

R.D. Grant, D. DeAngelis, D. Luu, D.E. Perry, and K. Ryall. Designing Human Benchmark Experiments for Testing Software Agents. Proceedings of the 2011 International Conference on Evaluation and Assessment in Software Engineering (EASE-2011); Durham, U.K.; April 11-12, 2011.
