My experience in CS 247B was pretty good overall. I got lucky with my team and thankfully there were no major tensions or disagreements, which made a significant difference in how the quarter felt. Before this class, I thought we would spend a lot of time going in depth on specific UI decisions and how certain flows directly drive behavior change. I also assumed we would be coding and deploying a fully functional app on our own by the end of the quarter.
What worked well for me was the structure of the class. Assignments were released early and we had clear visibility into major deadlines like the baseline and intervention studies. That helped our team divide work effectively, communicate about weekly workloads, and stay ahead instead of constantly reacting. From a project management standpoint, that clarity was extremely helpful.
What I struggled with was seeing the value in the baseline and intervention studies. The baseline diary study in particular felt difficult to execute. Asking participants to log frequently without compensation felt like a big request. Unless you had close friends or family who fit your target user, it was challenging to recruit people who would consistently participate without dropping off. I also questioned whether the baseline was truly a baseline. Several participants shared that simply logging their spending made them more aware of their purchases. In that sense, the logging itself acted as a mild intervention. It made me wonder how accurate the “true” baseline behavior really was.
More broadly, the two studies sometimes felt closer to psychology research than computer science. Our team was studying impulse spending and unpacking the thought processes behind it. I originally expected that we would rely on existing research about what helps impulse spenders and focus primarily on designing and implementing a solution that applies those findings. Instead, we were generating behavioral insights ourselves. While that was valuable, it was different from what I anticipated.
From an ethical standpoint, this class shifted my thinking significantly. Our app, Sage, requires access to bank information and location data. We designed it so that personal information, spending data, and location hotspots are not shared externally. The purpose of collecting this data is to increase user awareness. During our baseline study, we saw that visibility alone helped users recognize how small unplanned purchases accumulate.
However, I can also see how this type of app could become problematic in the future. If Sage were deployed publicly and revenue became a priority, there could be pressure to monetize user data. Selling spending patterns or location-based insights to third parties would directly contradict the app’s mission and could even be used to encourage more spending. That tension between behavioral support and profit incentives is real. It made me think more critically about whether certain products should exist and how easily their original purpose can drift.
In terms of design justice and inclusion, we tried to be intentional when designing Sage. We maintained strong color contrast for accessibility, allowed users to turn off notifications and location tracking, and gave users control over the AI’s personality. Some users may want a strict coach while others may want an affirming guide. Giving users that control reduces the risk of the system feeling judgmental or manipulative. We also considered cognitive load for users who check finances once a month versus others who open the app in moments of guilt or stress. Our goal was to accommodate both patterns without overwhelming either group.
After taking this class, I now think much more critically about the ethical layer of product design. Before, I primarily focused on technical execution. Now, I think about nudging, manipulation, privacy tradeoffs, and whether a behavior change mechanism is genuinely in the user’s interest.
Next time I am designing an app, I will ask earlier whether the product promotes well-being and whether its incentives are aligned with users' long-term interests. Even in engineering roles, where the focus tends to be technical, I want to carry forward ideas like value sensitive design and universal design. Years from now, I think what I will remember most is not a specific flow we designed, but the realization that small interface decisions can shape behavior in powerful ways.

You are really asking the right questions! You should take some psychology classes… after all, B = f(P, E), so understanding P will only help you create better software. Really good questioning of the foundations of the class, and I’ll think about how to make next year even better. Thank you!