This class was awesome! It was refreshing to be in a CS class that didn’t suck up my entire life — and actually gave me really good tips for how to live life in general. I do think I came into this class with a lot of the knowledge it taught — e.g., how habits work, how to sketchnote, how to design. I’d already started sketchnoting because of 247G (and have done it for the past 2 years of my Stanford career), and I’d already done a lot of freelance design. On the habit side, I’m the kind of person to have 2 alarm clocks to resolve my phone-before-sleep issues, permanent blockers for social media on all my devices, a homemade “phone jail”, etc. etc. (I think I read about B. J. Fogg’s stuff somewhere.)
However, I’d never had formal language to describe how I designed or how these habits worked. I was delighted by the B=MAT formula — I find it really useful in how it categorizes certain areas of behavior change, like “coach” or “spark” or whatever. I think it’s going to be useful for things other than self-optimization or self-actualization, too — e.g., mass social change, how to get people to protest, how to get people to reject capitalism… half kidding on that. I really liked the ethics readings, especially the privacy one, because that was stuff I wanted to learn and think more about, and I liked how it was actually built into the class instead of being a thing that came on the side.
I think from this class, I’ll continue to interview people — a skill I really enjoy. It’s useful, and it’s the best part of user research, because I don’t like the part that comes after: taking people’s experiences and turning them into a product. I get that that’s the point, but it’s probably not what I want to do as a job. I think the formal process of sticky-noting and such is useful, but it doesn’t always generate the most interesting ideas — especially because we’re in a class setting.
Indeed, I’d hoped the class would be more radical in its aspirations, which seem limited to self-actualization, self-optimization, making products that make people’s lives marginally easier in order to justify living in a techno-dystopian society where all we do is drive around in cars. And live alone. Behavior change could mean revolution — but in this context it means making something that’s fun to play with. And let’s not kid ourselves that these products would actually do anything in the world. But it was fun to make one, and to think about the principles of behavior change.
I also wish people took these problems more seriously, since most of the discussion posts feel almost too obviously ChatGPTed. I think my personal conviction of not using AI also showed: since we made our product in Figma, it wasn’t as developed and didn’t have as many features — comparatively, it was a worse product, and that was hard to see. It’s just so much easier to use AI, so much faster and more efficient… It feels like this is just how life is going to be from now on. You can’t not use it.
In the project, I kind of wish we hadn’t made what we eventually ended up making: something that justifies AI use, gamifies it. Rather, I wish we’d made an experiential game about the harms that come with the process of building/making AI, something more narrative. That’s obviously not in the scope of this class, but I think it would be better for the world. The whole “AI companion” thing feels like another mediocre, silly attempt to mask the true horror of the systems we have built. It feels like I’ve made another thing that could be easily subsumed into the arm of capitalism. It doesn’t really challenge anything — it only continues the motion of things that are already happening.
So as for the ethics of our product: I guess it tries to add friction to the process of using AI, but it also adds some delight. I think the ethics of a digital companion are kind of fraught because — if this project were developed at scale — it would probably end up trying to be as addictive and as delightful as possible. We’re trying to reduce people’s AI use, but by making it delightful. I also don’t think it addresses the core reason people use AI in the first place — because you need to be faster and better and more efficient etc. etc., because everyone else is. We’re probably not going to solve that with an app.
Now I think that product design is definitely not the path for me, but I do enjoy making experiences — aka games. Especially narrative games that try to say something (and not just make something easier/better). I think I’ll try to implement behavior change in ways that aren’t product-related, but maybe more activism-related. I will try to look at my phone less by buying a watch. I will probably keep sketchnoting. I will opt out of all cookies and make things that help us resist the force of BIG TECH. Maybe make an app for that, even.
I do think this project helped me learn to let go of perfection a bit — something I struggled with in 147. Our team communicated a lot better than last quarter’s team, and overall it was much more chill and awesome. I liked that this class doesn’t push you to your absolute emotional and physical brink, and studio time especially helps (A LOT). This team and this class taught me to be much more CHILL — because I don’t think I’m very CHILL — aka to communicate better. Which was awesome.
Next time, I want to be an even better, more CHILL teammate: communicating better with people, being okay with not everything being perfect, etc. etc. I need to not fix things myself — not take over other parts of the work or add extra work for myself — even if something isn’t THE MOST AWESOME thing I’ve ever seen. And maybe I’ll use B=MAT to help me with that.
