Looking back to Week 1, I feel like I've come a long way. When I walked into the first class, I had only a surface-level understanding of what behavioral design actually meant and how it applied to building things people use. Over the quarter, I picked up a set of techniques and frameworks that I genuinely think extend well beyond product design. The way I think about who I'm building for and the way I run conversations in my startup have both shifted.
Before this class, I thought behavioral design was mostly about cleverly deceiving users into doing what you want, though I suppose you could argue that a small part of it is. I genuinely believed it was a toolkit of psychological hacks that designers could use to steer people in a particular direction. I also assumed there would be a heavy focus on specific UI and UX patterns: the classic patterns, the persuasive flows, the things that make people click "yes" when they mean "no." I expected the class to be very technical in that sense, but to my surprise, it was quite different.
In hindsight, I had already done a fair amount of behavioral design work as a student founder, which honestly shaped how I approach many things. As a founder, you spend a lot of time convincing yourself that you know your user. You build mental models, you run through scenarios in your head, and you move fast. This class forced me to slow down and actually pressure-test those assumptions in ways I wasn't doing before, especially the habit of truly questioning whether I know my user at all.
On a related note, what I loved most was the Thursday ethics discussions. They were genuinely some of the most interesting conversations I had this quarter because the questions were grounded in very specific scenarios rather than abstract "is this good or bad" territory. I also loved how the class made it obvious that the best insights come from paying attention to the most ordinary behaviors. The milkshake story, where the consultant noticed people were buying milkshakes on their morning commutes not because they were hungry but because they were bored, stuck with me; it's something I'll always remember from this class. It's a perfect example of how the answer is right in front of you if you actually stop and look. Most of the time we just don't bother to look, or, more dangerously, we convince ourselves that we know the ground truth when we know absolutely nothing. That's why it's so critical to test our hypotheses in the real world instead of leaving them written on paper.

On the flip side, some of the in-class activities felt repetitive. There were a lot of post-it exercises and charting activities that blurred together after a while. The distinctions between them weren't always made clear enough; maybe I missed something here, but it sometimes felt like we were doing the same exercise under a slightly different label.
In terms of what worked and what didn't, I really valued the group work. Getting perspectives from other people changed the way I was thinking in real time, which is something that just doesn't happen when you're working alone. What didn't work as well was the more generic user research interviews. There were moments where users seemed to give us the answers they thought we wanted to hear rather than what they actually believed, and in many cases they didn't have real clarity about what they themselves believed.
The tools I'll carry forward are definitely user journey mapping and rapid prototyping. Drawing out a journey map rather than just keeping it in my head was eye-opening: you catch things you never would have caught otherwise. The same goes for quick mockups; getting something tangible in front of a real user and watching how they interact with it is irreplaceable.
One specific problem we ran into during the project was our API integration. The original plan for Unrot was to pull a user's existing chat logs from LLM products like ChatGPT and analyze them. We quickly realized that most of these providers' APIs don't expose consumer chat history in any practical way, so we pivoted. We rebuilt the entire ChatGPT-style interface within our own web app, which meant all interactions happened natively on our platform. That solved the data-access issue and actually gave us more control over the experience.
What's still unresolved is whether any aspect of our product works over time. All of our user research was point-in-time: someone used the app for 20 minutes and gave us feedback. That's useful, but it doesn't tell us anything about whether behavioral change actually happens after two weeks or a month of use. That's a big open question we would need to look into in the future.

This work connected to my life outside class in a meaningful way, specifically through my startup. As a founder, you tend to build in your own image and assume your instincts about users are correct, as I mentioned above. This class gave me a more rigorous process for replacing that assumption with actual evidence.
Shifting gears a little to the ethical side of things, Unrot uses two main nudging mechanisms: a visual animated brain that reacts to user behavior, and text-based feedback that either encourages or calls out how the user is engaging with the LLM. These feel like acceptable nudges because they're transparent, not hidden, and don't exploit fear or urgency. The user knows exactly what the app is doing and why. That said, I do think there's a manipulation risk with younger users. Kids and teenagers are more emotionally susceptible to an animated character that "cares" about how they're doing, and the emotional attachment that makes the brain engaging for a college student could become something more coercive for a 12-year-old.
On privacy, none of the user's actual chat content is stored anywhere. It goes into an enterprise-grade LLM, and a score and feedback come back out. We don't see the data, and neither does anyone else. The definition of privacy I'm working with here is essentially that no third party can access your information. That said, if we ever wanted to scale or monetize, the temptation to store logs for training data or personalization would become very real.

As for well-being, Unrot is most aligned with what you'd call objective list theory: the idea that some things are genuinely good for people regardless of whether they feel good in the moment. Our app is designed to push users toward those outcomes, because we truly believe that outsourcing human thinking and reasoning to an AI is dangerous, no matter how seamless and frictionless the process feels in the moment. The risk, though, is that gamification can backfire. If someone becomes anxious about their brain score, or starts optimizing for the score rather than for actual learning, we've just replaced one unhealthy pattern with another.
Now, I think behavioral design is less about manipulation and more about responsibility. The line between a nudge and a manipulation is thinner than I expected, and it shifts depending on who your user is and what they’re vulnerable to. Next time I’m faced with building something that tries to change behavior, I’ll continue to draw things out, literally, because that habit alone changed how I think. Overall, there were many tangible and intangible things I learned in this class. Thank you so much for such an amazing quarter!
