Final Reflection: Sticki and What I Learned About Designing for Human Behavior

Before this class, I thought behavior change design was mostly about motivation — that if you could just get someone excited enough about a goal, they’d follow through. I assumed flaking on plans was largely a character issue, a matter of someone not caring enough. I also thought design was primarily about aesthetics and usability, not psychology. This class fundamentally challenged all three of those assumptions.

What I Did and What I Experienced

Working on Sticki was one of the most research-intensive design experiences I’ve had. From the baseline study to the intervention, the process was genuinely iterative in a way that felt different from other project-based courses. 

Our baseline study was humbling. Several participants had no idea how often they actually rescheduled or cancelled plans until they had to log it every day. That finding, that self-awareness alone could be a catalyst for behavior change, became one of our most generative insights. It made me realize that one of the most powerful interventions isn't telling people what to do, but giving them an honest mirror.

What I loved most was the ideation phase. Starting from a “dark horse” idea as ethically dubious as a public shaming leaderboard, then tracing how it transformed through reflection, research, and critique into the private FlakeScore, felt like real design thinking in action. The final product was more nuanced and more useful precisely because we had interrogated the uncomfortable version first.

What was harder was the prototyping process. Vibecoding let us move fast, but it created distance from the implementation: some screens felt constrained by early AI-generated decisions, and I sometimes struggled to articulate why a screen wasn't working, only that it wasn't. That gap between vision and execution is something I want to close in future work.

Ethical Considerations

The behavior change mechanisms in Sticki — automated plan detection, risk labeling, and the FlakeScore — occupy a nuanced ethical space. They are nudges in the classical sense: they don’t restrict choice, but they do restructure the decision environment. What makes them acceptable, in my view, is transparency and consent. Users know the app is reading their messages; they opted in; the score is private.

But I can imagine these mechanisms becoming manipulative under different conditions. If the FlakeScore were visible to others, or if score changes triggered notifications designed to provoke anxiety, it could shift from accountability to coercion. For users already prone to social anxiety or perfectionism, even a private score could become a source of shame rather than insight. That’s a real risk we acknowledged but didn’t fully design around.

On privacy: the app’s core value proposition depends on reading private messages, which is a significant ask. We tried to address this through explicit consent flows and a clear privacy policy, but “contextual integrity” — the idea that information should flow in ways that match the norms of the context it came from — is genuinely hard to preserve when a bot enters a private conversation. In a future version with commercial pressures or a data monetization model, those protections could erode quickly. The safeguard isn’t just a policy; it’s building a business model that doesn’t require selling what it promised to protect.

How My Thinking Has Evolved

I came in thinking motivation was the main lever for behavior change. I leave knowing that context, structure, and specificity are often more powerful.

Next Time

Next time I’ll remember that the most ethical design decisions aren’t made at the end of a project. They’re made at every step, including who you recruit, what you measure, and what you choose not to build.

 
