Usability Testing Report
We conducted two rounds of usability testing with three participants (Sumeda, Elijah, and Anthony). Participants explored the prototype, completed tasks related to tracking AI usage and setting goals, and shared feedback about clarity, usefulness, and overall interaction with the system.
You can see the script here!
Below are the top issues we identified during testing.
1. Duck’s Purpose is Unclear
Severity: Severe
Multiple participants liked the duck visually but did not understand what it represented or why it was always present on screen. Some interpreted it as a reminder about AI usage, while others felt it was distracting or unnecessary.
Planned fix:
Clarify the duck’s role during onboarding and through initial tooltips. We are also considering showing the duck primarily while the user is actively using AI tools rather than keeping it visible at all times, while always preserving the option to have zero ducks on screen.
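As a rough sketch of the conditional visibility we have in mind (the component, prop names, and React/TypeScript stack are all assumptions, not the current implementation):

```tsx
// Hypothetical sketch: show the duck only during active AI use, and
// never when the user has chosen to have zero ducks on screen.
import React from "react";

interface DuckOverlayProps {
  aiToolActive: boolean; // true while the user is in an AI session
  duckCount: number;     // user preference; 0 means "no ducks, ever"
}

export function DuckOverlay({ aiToolActive, duckCount }: DuckOverlayProps) {
  if (duckCount === 0) return null; // respect the zero-duck preference first
  if (!aiToolActive) return null;   // otherwise, appear only during AI use
  return (
    <div aria-label="AI usage reminder">
      {Array.from({ length: duckCount }, (_, i) => (
        <span key={i} role="img" aria-hidden="true">🦆</span>
      ))}
    </div>
  );
}
```

Checking the zero-duck preference before the activity check keeps the opt-out absolute, which matches the feedback from participants who found the duck distracting.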
2. Onboarding Text and Interface Visibility
Severity: Severe
Participants reported difficulty reading onboarding text because it was too small or not clearly visible. This made the introduction to the product confusing and caused users to miss key information about how the system works.
Planned fix:
Increase text size and improve visual contrast. We will also simplify the onboarding flow so the explanation of key features (duck, stats, privacy) appears more clearly and sequentially.
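One way to make the flow sequential is to drive onboarding from an ordered list of steps, one key feature per screen. This is a minimal sketch; the step titles and copy below are placeholders:

```ts
// Hypothetical sketch: an ordered onboarding flow, one concept per step.
const ONBOARDING_STEPS = [
  { title: "Meet the duck", body: "The duck appears while you're using AI tools, as a gentle reminder." },
  { title: "Your stats", body: "See your prompts and sessions for today at a glance." },
  { title: "Your privacy", body: "We track usage metadata, never the content of your chats." },
] as const;
```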
3. Stats Timeframe is Ambiguous
Severity: Moderate
Participants were unsure whether statistics (prompts, sessions, usage percentage) represented daily usage, weekly usage, or lifetime totals. This caused confusion when interpreting the dashboard.
Planned fix:
Label the metrics clearly (e.g., “Prompts Today” or “Daily Usage”) and provide short explanations for metrics like “sessions.”
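A sketch of how we might encode this, pairing each metric with an explicit timeframe and a one-line explanation (names and copy are illustrative, not final):

```ts
// Hypothetical sketch: every dashboard metric carries its timeframe and a
// short explanation, so "prompts" can never be read as a lifetime total.
type Timeframe = "today" | "this week" | "all time";

interface MetricLabel {
  label: string;        // what the dashboard displays
  timeframe: Timeframe; // the period the number covers
  explanation: string;  // shown as a tooltip or caption
}

const METRICS: Record<string, MetricLabel> = {
  prompts: {
    label: "Prompts Today",
    timeframe: "today",
    explanation: "Messages you sent to AI tools since midnight.",
  },
  sessions: {
    label: "Sessions This Week",
    timeframe: "this week",
    explanation: "Distinct periods of continuous AI use.",
  },
};
```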
4. “Sources” Label is Confusing
Severity: Moderate
Several participants interpreted the “Sources” section as academic references rather than AI tools. This made it difficult to understand that the section represents usage by AI platform.
Planned fix:
Rename the section to something clearer, such as “AI Tools,” “Tool Usage,” or “Chatbots.”
5. Reflection Feature Hard to Find
Severity: Moderate
Participants had difficulty locating the reflection prompts and were unsure when or how to use them. However, once discovered, participants said the feature could be valuable for encouraging pauses in AI usage.
Planned fix:
Make the reflection feature easier to access and provide optional prompts triggered during usage sessions.
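One trigger we are considering is surfacing an optional reflection prompt after every few prompts in a session. A minimal sketch, with the threshold and wording as placeholders we would tune with users:

```ts
// Hypothetical sketch: offer a reflection prompt every N prompts in a
// session. The threshold is a placeholder, not a tested value.
const REFLECTION_THRESHOLD = 5;

function maybeOfferReflection(promptsThisSession: number): string | null {
  if (promptsThisSession > 0 && promptsThisSession % REFLECTION_THRESHOLD === 0) {
    return "Quick pause: what do you want this next prompt to do for you?";
  }
  return null; // stay out of the way the rest of the time
}
```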
6. Environmental Impact Curiosity
Severity: Trivial
Some participants expressed curiosity about the environmental framing of “sips” and suggested showing a tangible estimate of water or environmental impact.
Planned fix:
We may add a lightweight estimate to reinforce the metaphor and increase awareness of resource usage.
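If we do, the estimate would be a simple conversion from prompt counts. The per-prompt figure below is a placeholder, not a sourced number; we would cite a real estimate before shipping:

```ts
// Hypothetical sketch: turn prompt counts into a tangible water estimate.
// ML_PER_PROMPT is a placeholder constant, not a measured figure.
const ML_PER_PROMPT = 10; // milliliters per prompt (placeholder)

function estimateWaterUse(promptCount: number): string {
  const ml = promptCount * ML_PER_PROMPT;
  const glasses = ml / 250; // a drinking glass holds roughly 250 ml
  return `${ml} ml of water (about ${glasses.toFixed(1)} glasses)`;
}
```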
7. Privacy Concerns About Tracking
Severity: Moderate
One participant expressed concern about whether the system tracks actual prompts or conversations, especially when using AI for personal topics.
Planned fix:
Emphasize privacy protections during onboarding and clarify that the tool tracks usage metadata rather than chat content.
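Concretely, a stored record would look something like the shape below: timing and counts only, with no field that could hold chat content (field names are illustrative):

```ts
// Hypothetical sketch of a stored usage event: metadata only.
interface UsageEvent {
  tool: string;        // which AI platform was used, e.g. "ChatGPT"
  startedAt: number;   // session start (Unix ms)
  endedAt: number;     // session end (Unix ms)
  promptCount: number; // how many prompts were sent
  // Deliberately absent: prompt text, responses, or page content.
}
```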
Overall, testers found the concept engaging and appreciated the awareness-building dashboard and goal-setting features! Our next iteration will focus primarily on clarifying the duck’s role, improving onboarding, and making the statistics easier to interpret.
