User Experience Research at CCS 

Behavior Analysis for Full Display 

Determining which MARC fields to display in Full Display has been a challenge. Before deciding which fields to include, CCS sought to determine how heavily Full Display is used. Using Google Analytics, CCS conducted a behavior analysis of page views in the Glencoe Public Library PowerPAC and found that Full Display accounts for less than 5% of all page views. The Behavior Analysis for Full Display resulted in a recommendation to pause research and changes to Full Display.

User Experience Report

Linked below is the user experience report for the Behavior Analysis for Full Display:



User Experience Reports

User Testing for the CCS Website Redesign Project

CCS conducted remote user testing with member library staff in April 2020 using the CCS website and Learning Portal in preparation for a website redesign project. User testing resulted in a proposed header design as well as usability recommendations for the new website.

Linked below are the user experience reports for the CCS website redesign project:

Usability Testing with CCS Website and Learning Portal

Card Sort Activity for Proposed Header



User Testing for PowerPAC

CCS conducted usability testing with library patrons in 2019 using the patron-facing catalog, the PowerPAC. User testing resulted in usability recommendations that were implemented in early 2020.

PowerPAC User Experience Reports

Linked below are the findings and recommendations from each round of usability testing.

Alternative Find It Button


Round 1

Round 2

Round 3

Usability Testing FAQ

What is usability testing? 

Usability testing is watching people try to use something you’ve created with the intention of (a) making it easier to use or (b) proving that it is easy to use.[1]  

Why do usability testing?

Usability testing provides strong research data because it is based on behavior -- what people do rather than what they say they do.[2] By observing patrons attempt to use the PowerPAC, we see the interface from the users’ perspective. Since average users lack insider knowledge about how the system is “supposed” to work, they often encounter problems that staff didn’t foresee. We learn the most about our users by watching them use the PowerPAC.  

How many users will be included in the testing? 

For quantitative studies, a minimum of 20 participants will be included to ensure a 90% confidence interval.[3] Many users are required because of substantial individual differences in user performance. Examples of these studies include surveys, open-ended preference explanations, A/B testing, and multivariate testing.
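To make the confidence-interval idea concrete, here is a minimal Python sketch using the standard library. The task-completion times are made-up illustration data, not CCS results, and a normal approximation stands in for a full t-distribution calculation:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical task-completion times (seconds) for 20 participants.
times = [34, 41, 29, 52, 38, 45, 33, 60, 27, 39,
         48, 31, 36, 55, 42, 30, 44, 37, 49, 35]

n = len(times)
m = mean(times)
s = stdev(times)                  # sample standard deviation
z = NormalDist().inv_cdf(0.95)   # two-sided 90% CI -> z ~ 1.645

# Margin of error for the mean under a normal approximation.
margin = z * s / sqrt(n)
print(f"mean = {m:.1f}s, 90% CI = [{m - margin:.1f}, {m + margin:.1f}]")
```

Since the margin of error shrinks with the square root of the sample size, halving the margin requires roughly four times as many participants, which is why quantitative studies need substantially more users than qualitative ones.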

For qualitative studies, a minimum of 4-5 users will be included.[1] As you add more users you learn less and less because you keep seeing the same things again. The overall user experience is improved much more by 3 studies with 5 users each than by a single large study.[4] Examples of these studies include usability lab studies.
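The diminishing returns from adding users can be illustrated with Nielsen's widely cited estimate that a single test user uncovers about 31% of an interface's usability problems; the model below uses that figure as an assumption (it comes from the Nielsen Norman Group research cited above, not from CCS data):

```python
# Nielsen's problem-discovery model: found(n) = 1 - (1 - L)^n,
# where L is the average share of problems one user uncovers
# (assumed here to be 0.31, per Nielsen's published estimate).

def problems_found(n, discovery_rate=0.31):
    """Estimated fraction of usability problems surfaced by n test users."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

Under this model, five users surface roughly 84% of problems, and each additional user adds less than the one before, which is why several small rounds of testing beat one large study.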

What methods does CCS use for usability testing?

  • Usability Lab Studies: Participants are brought into a lab, one-on-one with a facilitator and given a set of scenarios that lead to tasks and usage. Participants “think out loud” while a facilitator records their observations. The combination of watching participants use the PowerPAC and hearing what they’re thinking while they do it allows us to see the catalog through someone else’s eyes and mind. This produces design insights you can’t get any other way.[1]  

  • A/B Testing: Half of participants see one version of a page (Design A) and the other half see a slightly different version of the page (Design B). The “winner” of an A/B test is the design that best drives the behavior you want.[5]  

  • Multivariate Design Testing: Participants view one or more possible visual designs to help them identify what they like (or dislike) about each variation.[5]  

  • Open-Ended Preference Explanation: Participants explain why they like (or dislike) a design.[5]  

  • Survey: Participants’ self-report data is used to measure and categorize attitudes that can help track or uncover important issues to address.[5]  


[1] Rocket Surgery Made Easy, Steve Krug

[2] You Are Not the User: The False Consensus Effect, Raluca Budiu, Nielsen Norman Group 

[3] Quantitative Studies: How Many Users to Test?, Jakob Nielsen, Nielsen Norman Group 

[4] ‘But You Tested with Only Five Users!’: Responding to Skepticism From Small Studies, Kathryn Whitenton, Nielsen Norman Group 

[5] UX Research Cheat Sheet, Susan Farrell, Nielsen Norman Group