Top Issues Identified in Testing
1. Confusing Navigation from Homepage → AI Chat
Severity: Severe
Both testers were confused or surprised when logging a planned purchase redirected them to the AI chat. It was unclear whether the action was complete or why the AI was involved. One tester expected the purchase to simply be added to a list rather than opening a new interface.
Plan to Address:
We will clarify the transition by either embedding the chat directly within the homepage or adding a clear confirmation step before the redirect. We will also make the AI’s role explicit during onboarding so the redirect feels intentional rather than surprising.
2. Reviewing Spending with AI Was Not Discoverable
Severity: Severe
When asked to review spending with the AI, both testers navigated to Profile instead of the chat. The AI did not feel like the central interaction point. This indicates weak hierarchy and unclear feature ownership.
Plan to Address:
We will strengthen visual hierarchy on the homepage and make the AI entry point more prominent. Onboarding will also emphasize that reflection happens primarily through the AI.
3. Homepage Hierarchy & Feature Clarity
Severity: Moderate
Testers were unsure what each section of the homepage was meant for; the distinction between logging, planning, insights, and breakdowns was not immediately clear. The highlighted insight was also easy to overlook and was not positioned as the main takeaway.
Plan to Address:
We will improve visual hierarchy by elevating highlighted insights, simplifying sections, and clarifying labels. We may also reorganize the layout so the primary action is the most visually dominant element.
4. Unclear Purpose of Location & AI Role During Onboarding
Severity: Moderate
Testers questioned why location access was needed and were unclear about what the AI actually does. Some saw the app more as a spending tracker than a reflective coach.
Plan to Address:
We will add short explanatory copy during onboarding (e.g., “Location helps identify spending patterns by place” and “Sage helps you reflect on impulse decisions in real time”). The value proposition will be clearer upfront.
5. Lack of Confirmation Feedback After Actions
Severity: Moderate
After submitting a planned purchase, testers were unsure whether the action was complete. One tester expected a visible running list or confirmation message.
Plan to Address:
We will add lightweight confirmation feedback (a confirmation message or a visible update to the planned-purchases list). Clear state changes will reduce uncertainty about whether an action succeeded.
6. Progress Visibility During Onboarding
Severity: Trivial
One tester suggested a clearer progress indicator during onboarding and optional skip buttons.
Plan to Address:
We will add a simple progress bar to improve orientation and evaluate optional skip buttons for non-essential steps.
7. Bank Integration Error Messaging
Severity: Trivial
One tester did not notice an error message when entering incorrect bank credentials and was redirected without clear feedback.
Plan to Address:
We will improve error visibility and ensure messages are clear and persistent until resolved.
Overall, the largest issues relate to AI discoverability, homepage hierarchy, and clarity of system feedback. Our revisions will prioritize strengthening the AI’s central role and improving navigation clarity.
