PowerPAC Usability Testing Paused

In-person usability testing is temporarily paused due to library closures related to COVID-19.

Future PowerPAC Usability Testing

CCS will test the following: (1) single volume record v. multivolume record cataloging of graphic novels, and (2) patrons' use of Full Display. Usability reports will be available after testing is completed.

We are excited to include children and teens in our user testing as we research patrons' ability to find, locate, and place holds on graphic novels cataloged as single volume records and multivolume records. Examples of single volume graphic novel records include Amulet and Sandman; examples of multivolume graphic novel records include My Hero Academia and Saga.


In Fall 2019, CCS conducted three rounds of usability testing for the PowerPAC. The User Experience Advisory Group recommended changes based on usability test findings. Read the implementation plan here.

User Experience Reports

Linked below are the findings and recommendations from each round of usability testing.

Alternative Find It Button


Single Volume Record v. Multivolume Record Cataloging of Graphic Novels 

  • User Experience Report: release date TBD


Full Display

  • User Experience Report: release date TBD


Round 1

Round 2

Round 3

Usability Testing Schedule

  • TBD

We are still looking for opportunities to test with children, teens, and adults. If you have an adult graphic novel club or similar group, please contact Kathleen Weiss, User Experience Specialist, at kweiss@ccslib.org.

Usability Testing FAQ

What is usability testing? 

Usability testing is watching people try to use something you’ve created with the intention of (a) making it easier to use or (b) proving that it is easy to use.[1]  

Why do usability testing?

Usability testing provides strong research data because it is based on behavior: what people do rather than what they say they do.[2] By observing patrons attempt to use the PowerPAC, we see the interface from the users’ perspective. Since average users lack insider knowledge about how the system is “supposed” to work, they often encounter problems that staff didn’t foresee. We learn the most about our users by watching them use the PowerPAC.

How many users will be included in the testing? 

For quantitative studies, a minimum of 20 participants will be included to ensure a 90% confidence interval.[3] Many users are required because of substantial individual differences in user performance. Examples of these studies include surveys, open-ended preference explanations, A/B testing, and multivariate testing.
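
To make the sample-size point concrete, here is a minimal Python sketch of the margin of error around a task success rate measured with 20 participants. The counts are hypothetical, chosen only for illustration, and the normal approximation is our assumption rather than a method named in the cited article.

    import math

    # Hypothetical result for illustration: 14 of 20 participants complete a task.
    n, successes = 20, 14
    p = successes / n                        # observed success rate: 70%
    z = 1.645                                # z-score for 90% confidence
    margin = z * math.sqrt(p * (1 - p) / n)  # normal-approximation margin of error
    print(f"success rate {p:.0%} +/- {margin:.0%}")  # roughly 70% +/- 17%

Even with 20 participants the interval is still fairly wide, which is why quantitative measurements need far more users than qualitative problem-finding does.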

For qualitative studies, a minimum of 4-5 users will be included.[1] As you add more users, you learn less and less because you keep seeing the same things again. The user experience is improved much more by three studies with 5 users each than by a single large study.[4] Usability lab studies are an example of this method.
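
The diminishing-returns argument can be made concrete with the problem-discovery model from the Nielsen Norman Group research cited above: the share of problems observed by n users is roughly 1 - (1 - L)^n, where L is the probability that a single user encounters a given problem (about 0.31 in their published data). A minimal Python sketch, using that assumed value:

    # Problem-discovery model: proportion found = 1 - (1 - L)^n,
    # with L ~= 0.31 taken from the cited Nielsen Norman Group research.
    L = 0.31
    for n in (1, 3, 5, 10, 15):
        found = 1 - (1 - L) ** n
        print(f"{n:>2} users -> {found:.0%} of problems observed")

Five users already surface roughly 85% of problems, so three small rounds of test-fix-retest tend to improve the design more than one large study that keeps finding the same problems.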

What methods will CCS use for usability testing?

  • Usability Lab Studies: Participants are brought into a lab, one-on-one with a facilitator and given a set of scenarios that lead to tasks and usage. Participants “think out loud” while a facilitator records their observations. The combination of watching participants use the PowerPAC and hearing what they’re thinking while they do it allows us to see the catalog through someone else’s eyes and mind. This produces design insights you can’t get any other way.[1]  

  • A/B Testing: Half of participants see one version of a page (Design A) and the other half see a slightly different version of the page (Design B). The “winner” of an A/B test is the design that best drives the behavior you want (see the sketch after this list).[5]

  • Multivariate Design Testing: Participants view one or more possible visual designs to help them identify what they like (or dislike) about each variation.[5]  

  • Open-Ended Preference Explanation: Participants explain why they like (or dislike) a design.[5]  

  • Survey: Participants’ self-report data is used to measure and categorize attitudes that can help track or uncover important issues to address.[5]  
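
As a hypothetical sketch of how an A/B “winner” might be judged, the Python example below compares hold-placement rates under two designs with a two-proportion z-test. The metric, the counts, and the choice of test are all illustrative assumptions, not a CCS procedure.

    import math

    # Made-up counts for illustration: sessions that placed a hold under each design.
    a_trials, a_hits = 200, 62   # Design A: 62 of 200 sessions placed a hold
    b_trials, b_hits = 200, 84   # Design B: 84 of 200 sessions placed a hold

    p_a, p_b = a_hits / a_trials, b_hits / b_trials
    pooled = (a_hits + b_hits) / (a_trials + b_trials)   # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / a_trials + 1 / b_trials))
    z = (p_b - p_a) / se
    print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}")    # z > 1.96: significant at 95%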

References

[1] Rocket Surgery Made Easy, Steve Krug

[2] You Are Not the User: The False Consensus Effect, Raluca Budiu, Nielsen Norman Group 

[3] Quantitative Studies: How Many Users to Test?, Jakob Nielsen, Nielsen Norman Group 

[4] ‘But You Tested with Only Five Users!’: Responding to Skepticism From Small Studies, Kathryn Whitenton, Nielsen Norman Group 

[5] UX Research Cheat Sheet, Susan Farrell, Nielsen Norman Group