Final Reflection – Emily Yang

Introduction

Before this class, I knew that changing anyone’s behavior would be a challenge, and I was really curious to see what it’s like to design for behavior change in just a short 10-week timeframe. I also knew that this class was known for conducting longitudinal user studies and was interested to see what that entailed. Plus, I took Christina’s CS 247I class last quarter and really enjoyed her teaching, so I was excited for the chance to take another class with her!


My Experience

Looking back now that the 10 weeks are over, I can definitely say that this class gave me a wider perspective on how to approach designing a solution, whether through all the graphs and tools we learned about and created or the ethics discussions we had in class. I really enjoyed that we started the class by tracking a habit of our own choosing for a few days. This helped us experience a portion of what our participants would go through if we didn’t design the study in a way that was considerate of their time. I also really liked creating personas for our solution and using them to home in on a particular user. One thing that didn’t work as well for me was that some concepts we learned in the middle of the quarter felt blurred together (such as which items were the storyboard vs. user story vs. journey map, or any of the graphs covered within that time window). While they still proved immensely helpful in my team’s final solution, I do wish we had a place that clearly and explicitly laid out the diagrams and models, and their purpose in the overall process, to resolve that confusion.

In terms of my team’s project, I think it turned out well overall, though there were areas that could have been improved. One issue was that while we had more than enough participants for our baseline study, our participant count slowly decreased during the intervention study, and even more so during assumption testing. While we were still able to carry out the intervention study and testing, part of me is not sure we truly got good data to work with, which would in turn affect how we approached our solution. Another issue was that when it came to making the Figma prototype, half of our team was unfamiliar or uncomfortable with the platform. Since our solution is meant to be extremely customizable to the user’s preferences and schedule, users can add, delete, or change their bot preferences as they wish. Unfortunately, this also meant that the number of screens and flows we would have to create for the many different permutations of these options would be endless and a huge time sink. In the end, we could only fully implement one potential flow to showcase how the process would look for the user without making half the team lose sleep for many consecutive nights.


Ethical Considerations

For ethical considerations, our project falls heavily within week 3’s nudging and manipulation discussion. A huge part of our solution is a bot that posts on a user’s social media feed or story, heavily reminding them to do something other than scrolling. Our team discussed at length whether this manipulation could be considered ethical. In the end, we thought that since almost everything online could be considered a manipulation, another way to look at the topic is to ask whether there is consent on the user’s end and transparency in the product. We believe that if we are transparent enough with our users and ensure that this is a service they truly want, then we can call this an acceptable nudge. We also tried to make it as easy as possible for a user to disable the bot, or even just unfollow it, if they find it becoming too disruptive on their feed, in order to keep our mechanism from becoming too manipulative.

Another ethical consideration we had was privacy. Originally, we imagined our product being added onto a user’s social media account, almost like an extension. Since our app focuses so much on social media, there was concern that if an attacker were to compromise a user’s SpamMe account, they would gain access to all of that user’s accounts. In addition, a user would be giving SpamMe information such as their schedule. After some discussion about how to address this, we decided that SpamMe would instead be a standalone bot account that follows the user and that the user follows back. We chose this route because, compared to extending the bot onto the user’s social media account, following an account puts more of a barrier between our product and the user’s private information. We hope this was a reasonable approach to protecting a user’s privacy as best as possible, though I still wonder what else we could do to protect it in a more robust manner.


Conclusion

Despite a crazy, hectic quarter, I’ve had a lot of fun in this class and learned so much from the teaching team, my teammates, and everyone else in the class. From the ethics discussions to interacting with classmates of so many different backgrounds, my approach to design has become more open, inclusive, and conscientious of others’ needs. In the future, when I’m faced with a similar situation, I will strive to work in a diverse team and continue to bring up ethics topics like the ones we covered, in order to design and build products that serve the greater good.


Thank you to the teaching team for the amazing quarter!
