
When I think of “designing for behavior change,” I think of various tech companies – some that try to encourage users to build good habits, and others that exploit human psychology and manipulate users into building bad habits for the company’s financial gain. Either way, I assumed that designing for behavior change would be more straightforward than it actually is. In reality, everything is interconnected and nuanced – we have to think through the design of a whole system, with its information and its interface, along with the unintended consequences of certain flows of interactions, data, and context, in order to create a design that is useful, appropriate, and robust. A team designing for a dietary behavior change, for example, couldn’t just add a simple rewards system to nudge users toward a certain diet – diets are personal, and pitting users’ diets against one another can become extremely unhealthy and toxic. How, then, do you nudge users?
What worked and didn’t work for you about the approach we followed?
The approach we followed exposed us to many parts of the design process and to what needs to be considered and thought through when building a product. What I found difficult, though, was how everything was squished into a span of 10 weeks (if only we had more time)! Because of the 10-week limit, some processes felt forced or rushed. Were we really able to find the right participants within Stanford alone? Were we able to conduct the best intervention and baseline studies over the course of just a few days? I’m curious what other results would have turned up if we had the time and space to expand beyond classroom constraints.
What underlying issues (personal, interpersonal, societal) surfaced as a result of this experience?
Throughout the weekly ethical-question discussions, I was reminded that designing and building at companies is almost always a wicked problem – a problem that feels impossible to solve because of contradictory or changing requirements. Companies will always come across decisions where they can either do what is best for users and society, or do what will help the company financially. There will always be people arguing for each side of those decisions, and I don’t think it’s possible to stop people from wanting to make money – selfish people will always exist. It might be a sad reality, but projects and discussions like these expose designers to different ways of approaching these kinds of questions and potential solutions, and to even more consequences we should be thinking about.
What mechanisms does your project use to change behavior? What makes them acceptable nudges? Are there users for whom or use-cases of your project for which your mechanisms might become manipulative and why?
As mentioned above, trying to change dietary behavior is really nuanced – there’s a fine line between healthy encouragement and creating a toxic environment. When considering mechanisms for our app, we asked ourselves how we could keep our solution from being manipulative while still prompting the behavior change. Our goal was to get users to increase their vegetable intake. In this case, manipulation looks like dictating what the user eats: telling them to eat a specific kind of vegetable, a specific amount, or vegetables prepared in a specific way. We instead ideated a solution that leaves a lot of room for autonomy in decision-making.
We nudge our users to post their meal approximately once a day, but if they don’t, they are not penalized; instead, they are shown a fun nutritional fact about vegetables. Additionally, users are given autonomy over the time windows during which they receive their randomized notifications, by selecting their approximate mealtime windows during onboarding. When they do post after a randomized notification, there is no judgment on whether the meal meets some vegetable standard. Instead, users are invited to look through their Memories and determine for themselves how much progress they have made while using the app. This is how we aimed to steer away from manipulation and toward autonomy.
How does your project promote well-being according to one of the three theories of well-being? What about your project might pose a risk to well-being of the users?
Our team’s app aligns with the hedonic theory of well-being, which defines well-being in terms of pleasure and satisfaction. As mentioned above, though, designing for dietary behavior change is a slippery slope because of how personal diets are – and that personal nature is where our app could pose a risk to users’ well-being.
Going into another design project, I would aim to find another incredible team whose members are all equally passionate about thinking through design decisions from every angle, and about examining the ethical consequences and current issues that arise from unintended consequences, too.
