Why I would not work at Facebook

Many of my friends interned at Meta this summer. When I asked what their jobs entailed, they shrugged and described what kept them busy from 9 to 5: developing a payment interface for WhatsApp, designing buttons for Instagram, or refining Facebook's search feature. I was surprised that the products we interact with daily – whose functions we take for granted – are actually built by groups of employees who spend months and years ensuring the functionality of one small aspect of the final product. Behind the Instagram like button, the Facebook feed, and WhatsApp's interface designs are thousands of software engineers who spend their summers and their youth powering the product.

I spent my summer doing research on campus, developing an initial curiosity into a finished research paper. It was a strenuous process: I spent weeks and months thinking about the core research questions. But it was rewarding – both the process of iterating on the research agenda and running experiments to test various hypotheses, and the final product of an evaluation benchmark and its findings, were completely mine. I could proudly declare the work the fruit of my intellectual endeavor, something I produced. I wondered – could my friends also say that the Instagram like button was theirs, or that they owned the Facebook feed? As social media companies, and now AI companies, reach millions and even billions of consumers worldwide, the empire behind these products has expanded, while the task assigned to each individual employee has shrunk. Take OpenAI as an example: from a research lab comprising only hundreds of researchers working at the frontier of language modeling, the company has grown into a gigantic hub that employs thousands, each working on one aspect that contributes to the final product: ChatGPT.

Now, unlike research, where I had full autonomy over what I worked on, industry jobs often involve navigating intricate hierarchies and relationships: your boss might not be able to decide what an interface should look like – nor even your boss's boss. In these cases, whether the work remains ethical is not only a matter of whether the employees themselves are doing the right work, but, in the bigger picture, whether the product or the company itself contributes to the greater good. And that is a very hard question to answer: most technologies are double-edged swords, with some good and some bad. A product could bring joy and entertainment to some even as it exploits workers and consumers. The chapter raises two important questions for weighing the pros and cons: first, how seriously wrong do I believe the company's actions are? And second, how close is my work to those actions I believe wrong? While most of the work that software engineers or product managers do at big tech companies like Meta would not directly determine the outcome of a product, all the hours spent developing and debugging still contribute to something. As the chapter notes, refusing to reflect on the broader impacts of one's work amounts to lying:

But the decision to remain blind is difficult because pretending you don’t see essentially means you’re lying—lying to yourself.

If I cannot retain full control over what I work on, and if I cannot predict the ethical implications of the work I am contributing to, then even if I take off the blinkers and see things clearly, my inaction – my complicity in a pipeline of unethical work – still means I am spending my time and energy on something that causes more harm than good. In that case, the answer is no: I would not work at Facebook.
