Stage three of the design brief has gone fairly smoothly, though not without its challenges. The main task has been to perform actual user testing with our target audience, which meant working with a group of children.
The best laid plans
In developing the plan, we decided on a series of tasks that would test how intuitive the navigation structure was. I documented the plan around these tasks, then developed a scorecard that let us quantify some subjective measures of how well each user performed them.
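The post doesn't describe the scorecard's internals, but the idea of quantifying subjective measures per task can be sketched in a few lines. Everything below is illustrative: the task names, the measures, and the 1-5 rating scale are assumptions, not details from the actual study.

```python
# Hypothetical sketch of a per-subject scorecard: each task gets a 1-5
# rating for a few subjective measures, which can be averaged per task.
# Task and measure names are illustrative only.

TASKS = ["find_settings", "start_game", "customise_character"]
MEASURES = ["ease", "speed", "confidence"]

def blank_scorecard():
    """One scorecard per test subject: every task/measure starts unrated."""
    return {task: {m: None for m in MEASURES} for task in TASKS}

def record(card, task, measure, rating):
    """Record a 1-5 rating for one measure of one task."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    card[task][measure] = rating

def task_score(card, task):
    """Average the rated measures for a task (None if nothing rated yet)."""
    rated = [v for v in card[task].values() if v is not None]
    return sum(rated) / len(rated) if rated else None

card = blank_scorecard()
record(card, "find_settings", "ease", 4)
record(card, "find_settings", "speed", 3)
print(task_score(card, "find_settings"))  # 3.5
```

One advantage of a structure like this is that unrated tasks stand out as gaps rather than silently scoring zero, which is exactly the kind of omission that bit us when new tasks were added but the scorecard wasn't updated.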
Later, it became apparent that we really needed to test the gameplay and the customisable character, which would also add to the child's enjoyment of the test process. Prototypes were quickly developed for both, but the scorecard was never updated to cover these new tasks.
The day of reckoning
Scott had managed to rope in a number of family members to help with entertaining the children. In this way, we were able to focus on our testing while the children were kept occupied. It was immediately apparent that the waiting children needed to be kept away from the testing for two reasons:
- they might distract the test subject
- they might see elements of the test, denying us the chance to observe their initial reaction to it
The first surprise came before our test subjects had even arrived: a communications mix-up meant that we had the script for our application, but not the script for testing. We quickly fleshed out a script, then began testing… late.
The first test
Our first test subject was a 10-year-old girl.
Right off the bat we suspected that our list of test tasks was too long. Our first test took over 25 minutes, but taught us a number of things.
- Our test needed to be shorter to maintain the test subject's interest (which meant we were unable to test everything we wanted to).
- Our test script needed to be followed more rigidly.
- The scorecards were missing sections for the recently added tasks.
Subsequent tests and other discoveries
After a quick team meeting, we altered our plan slightly, then continued with a reduced task list. Subsequent tests ran much more smoothly, with 6-9 minute durations.
The order and tasks that we used worked well, and using one of the games as the last test acted as a reward for the children. It was very gratifying to see their enjoyment of the simple game, and surprising how much of our research was borne out by the tests. Navigation through the game seemed to be as intuitive as expected, except that the names on the main menu buttons need to be improved. We also noted that some of the buttons were too small even for children's fingers!
Our organisation, or lack thereof, was the major shortfall of the process. If our team performed testing again, we would be far more efficient and effective.
A dry run might have highlighted many of these problems, but the tight timeframe meant we were unable to organise one.
More subjects would have given us better statistical data and the ability to test more tasks with different groups. But this would have taken more time, and we had a limited supply of both time and children.
Overall, we managed to gain some useful insight into how our app worked in the hands of our target audience. By incorporating the resulting changes, our final design is, without a doubt, better for it.