PowerPAC Usability Testing
Through May 2020, CCS will test the following: (1) an alternative Find It button, (2) single volume vs. multivolume record cataloging of graphic novels, and (3) patrons' use of Full Display. Usability reports for all three topics will be available in early May 2020.
We are excited to include children and teens in our user testing as we research patrons' ability to find and place holds on graphic novels cataloged as single volume records and as multivolume records. Examples of single volume graphic novel records include Amulet and Sandman; examples of multivolume graphic novel records include My Hero Academia and Saga.
In Fall 2019, CCS conducted three rounds of usability testing for the PowerPAC. The User Experience Advisory Group recommended changes based on usability test findings. Recommendations will be implemented throughout February and March 2020. Read the implementation plan here.
User Experience Reports
Linked below are the findings and recommendations from each round of usability testing.
Alternative Find It Button
User Experience Report coming March 2020
Single Volume Record vs. Multivolume Record Cataloging of Graphic Novels
User Experience Report coming May 2020
Patrons' Use of Full Display
User Experience Report coming May 2020
Usability Testing Schedule
Thursday, February 20 - Algonquin Area Public Library (teens)
Monday, March 16 - Fremont Public Library (children, teens, adults)
Monday, March 23 - Fremont Public Library (children, teens, adults)
Tuesday, March 24 - Fremont Public Library (children, teens, adults)
Saturday, March 28 - Niles-Maine District Library (teens)
Saturday, April 4 - Evanston Public Library (children)
Monday, April 6 - Evanston Public Library (children)
Tuesday, April 7 - Lake Villa District Library (teens)
We are still looking for opportunities to test with adults. If you have an adult graphic novel club or similar group, please contact Kathleen Weiss, User Experience Specialist.
Usability Testing FAQ
What is usability testing?
Usability testing is watching people try to use something you’ve created with the intention of (a) making it easier to use or (b) proving that it is easy to use.
Why do usability testing?
Usability testing provides strong research data because it is based on behavior: what people do rather than what they say they do. By observing patrons attempt to use the PowerPAC, we see the interface from the users' perspective. Since average users lack insider knowledge about how the system is "supposed" to work, they often encounter problems that staff didn't foresee. We learn the most about our users by watching them use the PowerPAC.
How many users will be included in the testing?
For quantitative studies, a minimum of 20 participants will be included to ensure a 90% confidence level. Many users are required because of substantial individual differences in user performance. Examples of these studies include surveys, open-ended preference explanations, A/B testing, and multivariate testing.
For qualitative studies, a minimum of 4-5 users will be included. As you add more users, you learn less and less because you will keep seeing the same problems again. The ultimate user experience is improved much more by 3 studies with 5 users each than by a single large study. Examples of this method include usability lab studies.
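The diminishing returns of adding qualitative test users are often modeled with the problem-discovery formula popularized by Nielsen and Landauer: the share of problems found after n users is 1 - (1 - p)^n, where p is the chance that a single user surfaces a given problem (commonly estimated at about 31%). A minimal sketch of that curve (the 31% rate is the commonly cited industry average, not a CCS figure):

```python
# Estimated share of usability problems found after testing n users,
# using the problem-discovery model: found = 1 - (1 - p)^n.
# p = 0.31 is the commonly cited average per-user discovery rate.

def problems_found(n_users: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n_users

for n in range(1, 11):
    print(f"{n:2d} users: {problems_found(n):.0%} of problems found")
```

With these assumptions, 5 users surface roughly 84% of problems, which is why three small rounds of 5 tend to uncover more than one large round.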
What methods will CCS use for usability testing?
Usability Lab Studies: Participants are brought into a lab, one-on-one with a facilitator, and given a set of scenarios that lead to realistic tasks. Participants "think out loud" while the facilitator records observations. The combination of watching participants use the PowerPAC and hearing what they're thinking while they do it allows us to see the catalog through someone else's eyes and mind. This produces design insights you can't get any other way.
A/B Testing: Half of participants see one version of a page (Design A) and the other half see a slightly different version of the page (Design B). The “winner” of an A/B test is the design that best drives the behavior you want.
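Declaring a "winner" in an A/B test typically means checking that the difference between the two designs' success rates is larger than chance would explain. A minimal sketch using a standard two-proportion z-test; the counts below are hypothetical, not results from the CCS study:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z statistic comparing two success rates, using a pooled proportion."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value(z: float) -> float:
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 12 of 20 participants completed the task with
# Design A, 17 of 20 with Design B.
z = two_proportion_z(12, 20, 17, 20)
print(f"z = {z:.2f}, p = {p_value(z):.3f}")
```

At small sample sizes like these, even a sizable difference between designs may not reach significance, which is one reason quantitative studies call for more participants than qualitative ones.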
Multivariate Design Testing: Participants view several possible visual designs to help us identify what they like (or dislike) about each variation.
Open-Ended Preference Explanation: Participants explain why they like (or dislike) a design.
Survey: Participants’ self-report data is used to measure and categorize attitudes that can help track or uncover important issues to address.