Final Reflection

I entered this class interested in behavior change from both a design research and a personal point of view. Before this class, I thought that behavior change required an interest in changing plus willpower. However, this class taught me how important habits and contextual cues are in shaping behavior. I was able to apply these principles not only in the project we created for this class, but also in my everyday life.

Another large takeaway from this class was understanding how AI can be used in design-related workflows. My previous design classes did not use AI, so this was my first time experimenting with how these tools can be useful (or not useful) in this context. Playing around with vibe coding platforms like Balsamiq, Claude, and Replika, I found it genuinely helpful how quickly they could prototype a functional application. However, they would also sometimes produce overly complicated UI that violated UX and accessibility principles; in other words, they created app slop. Our job then became fixing what the AI created, iteratively refining prompts to simplify the UI. This felt different from the design and engineering work I've done in the past: the skill I needed became how well I could prompt. I'm still reflecting on how I feel about this, but I'm glad to have practiced it because it feels like the direction the industry is headed.

One ethical consideration I've thought about throughout this class revolves around privacy. Many tech-based solutions to behavior change involve some form of tracking, whether by collecting data in the moment or relying on the user to input it. For example, my Pikmin Bloom application has all of my walking data from the past six months. These applications store highly identifying and sensitive information; if it is not securely encrypted, it can be used for malicious purposes. Flo, the period-tracking app, for example, faced legal action for sharing sensitive user health data (menstruation and pregnancy information) with third parties. Moreover, this information can be used to nudge users in behavioral directions that might not be in their best interest. After all, how could an application know what's best for a user without full context on that user's life? When I play Pikmin Bloom, it doesn't account for moments when I'm in an area with no sidewalks. My Streaks app doesn't know when life becomes too much and I can't keep up with a habit I input. But I also don't think I would want it to know all of these things. I bring this up because as people explore AI-based, surveillance-driven systems that aim to solve behavior change problems, I hope privacy concerns are explored with the same rigor.

My group’s application combined financial reflection with LLMs to encourage behavior change around impulse spending. This gave our application the benefit of combining quantitative insights about user spending (bank app data, user-set spending limits) with qualitative data (user reflections). While this approach would likely give users more useful feedback on their financial wellness journey, and our application includes terms and conditions describing how we would encrypt their data, if an application like this existed in the real world, I’d be concerned about the technology falling into the wrong hands and the data being used to harm users. For example, could it impact users’ creditworthiness or ability to take out a loan? While these practical concerns are a bit beyond the scope of this class, they are worth thinking about.
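To make the quantitative-plus-qualitative idea concrete, here is a minimal sketch of how an application like ours might combine the two data sources into a single LLM prompt. The data structure, field names, and example values are all hypothetical illustrations, not our actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SpendingSnapshot:
    """Quantitative data: hypothetical fields pulled from a bank app."""
    weekly_limit: float      # user-set spending limit (USD)
    weekly_spent: float      # actual spending this week (USD)
    impulse_purchases: int   # transactions the user flagged as impulsive

def build_reflection_prompt(snapshot: SpendingSnapshot, reflection: str) -> str:
    """Combine quantitative spending data with the user's qualitative
    written reflection into one prompt for an LLM to respond to."""
    over_under = snapshot.weekly_spent - snapshot.weekly_limit
    return (
        f"The user set a weekly spending limit of ${snapshot.weekly_limit:.2f} "
        f"and spent ${snapshot.weekly_spent:.2f} "
        f"({'over' if over_under > 0 else 'under'} by ${abs(over_under):.2f}), "
        f"including {snapshot.impulse_purchases} self-flagged impulse purchases.\n"
        f"Their written reflection: \"{reflection}\"\n"
        "Offer brief, nonjudgmental feedback that connects their reflection "
        "to their spending data and suggests one small habit change."
    )

# Example usage; the resulting prompt would be sent to an LLM of choice.
prompt = build_reflection_prompt(
    SpendingSnapshot(weekly_limit=100.0, weekly_spent=142.50, impulse_purchases=3),
    "I bought things I didn't need because I was stressed about exams.",
)
print(prompt)
```

Grounding the LLM's feedback in both kinds of data is what makes the response feel specific rather than generic, but it is also exactly why the stored data is so sensitive.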

I’m still forming my own understanding of AI, design, and behavior change, and in the future I will continue to iterate on my working sense of when this technology is useful and when it is harmful.
