Before taking this class, I had a very loose understanding of what it meant to design a software product. I assumed the process was mostly intuitive—something driven by experimentation, trial-and-error, and personal judgment. If something worked, you kept it; if it didn’t, you moved on. There didn’t seem to be a need for a formal structure or defined methodology. However, working on our project, Clanker Clash, a web browser extension designed to promote intentional usage of large language models, completely reshaped that perspective. I now see design not as an unstructured creative process, but as a disciplined, iterative practice grounded in research, testing, and intentional decision-making.
One of the aspects I appreciated most about this class was the opportunity to be creative while working closely with a team over an extended period. Unlike shorter-term group projects, this experience required sustained collaboration, which made it especially rewarding. Watching our idea evolve from a vague concept in Week 1, when we didn’t even know what problem we wanted to solve, into a fully realized product was both surprising and fulfilling. It demonstrated the power of collective thinking and iterative development. At the same time, I didn’t connect with certain aspects of the design process. In particular, I felt that we sometimes relied on too many design methods when a smaller subset would have been sufficient. While I understand the value of exposure to different frameworks, not all of them felt equally useful in practice.
Looking ahead, there are definitely tools and methods from this class that I would use again. Baseline studies and intervention studies, in particular, stood out as incredibly valuable. They allowed us to gather real user insights and evaluate whether our design decisions were actually effective. This grounding in data made our iterations more purposeful and less speculative. On the other hand, some methods, such as proto-personas, did not resonate with me; I didn’t find them especially helpful in guiding our design decisions.
There were several surprises throughout the project. One of the biggest was how much we were able to accomplish as a team. Building a product from start to finish is a complex process, yet through collaboration and a clear division of responsibilities, we were able to make consistent progress. Another surprise was the consistently positive feedback from the people who tested our product. However, not everything is fully resolved. There are still technical aspects of Clanker Clash that remain unfinished. For example, it would be exciting to integrate the extension directly with real LLM platforms, enabling a more seamless and impactful user experience. This represents a potential direction for future development.
This project also connects closely to my broader interests, including my work at Stanford and in previous jobs, particularly in the area of AI trust and safety. Encouraging intentional and mindful use of LLMs aligns with efforts to reduce harmful or overreliant interactions with AI systems. By nudging users to think more critically about how and when they use these tools, our project contributes to a healthier relationship between humans and AI. This intersection made the project feel especially meaningful and relevant.
From an ethical standpoint, our project raised several important considerations. One key aspect is the use of gamification as a mechanism for behavior change. Clanker Clash incorporates a tug-of-war style game to encourage users to be more intentional with their AI usage. I believe this qualifies as an acceptable “nudge” because participation is entirely voluntary—users must choose to download and engage with the extension. However, it is still important to consider edge cases. For some users, especially those who are more susceptible to competitive or reward-driven systems, the game could potentially become manipulative if it encourages excessive engagement rather than mindful usage.
Privacy was another central concern in our design. We made a deliberate choice not to store or log users’ AI queries, which aligns with a definition of privacy centered on minimizing data collection and protecting sensitive user information. However, I recognize that future iterations of the project could introduce risks. For instance, adding features like public leaderboards or integrating with external APIs without strong privacy safeguards could compromise user data. Avoiding these pitfalls would require careful design decisions and possibly implementing strict data governance policies.
In terms of interface design and inclusivity, our team aimed to follow principles of universal and inclusive design. We chose a simple and widely understood game format (tug-of-war) to make the experience accessible to a broad audience. We also focused on using intuitive icons, clear buttons, and universally recognizable color schemes. Because our target audience is broad, we prioritized universality over specificity, aiming to create a design that could be easily understood and used by as many people as possible. That said, there is always a balance to strike, and future iterations could explore ways to better accommodate diverse user needs and edge cases.
Our project also ties into design for well-being. By encouraging users to be more intentional with their LLM usage, Clanker Clash supports a form of well-being grounded in self-regulation and mindful behavior. Instead of passively relying on AI, users are prompted to reflect on their usage patterns. However, there are also potential risks. If the gamification elements are not carefully balanced, they could lead to stress or over-engagement, which would undermine the goal of promoting well-being.
Overall, this class has significantly changed how I think about design. I no longer see it as unstructured or purely creative, but as a systematic process that combines creativity with research, testing, and ethical consideration. In the future, when faced with a similar challenge, I will approach it with a more structured mindset—leveraging the tools and methods that proved effective, while being more selective about those that did not. This experience has not only given me practical skills, but also a deeper appreciation for the complexity and responsibility involved in designing technology that shapes human behavior.
