In Trust we Trust: Building a New Framework for Trust & Safety

About the author: Andrew Shu '21 was a summer policy intern for Trust Lab, a technology start-up based in Palo Alto, California. He is currently a second-year student in the Ford Dorsey Master's in International Policy (MIP) program at Stanford University and a Knight-Hennessy Scholar.

 

The benefits of user-generated content (UGC) platforms have driven their adoption. From photo-sharing to connecting communities of shared interest, these social media platforms provide services that today's users take for granted. Yet at a time when information translates more readily into power and influence than ever before, UGC platforms are under immense scrutiny. Giants such as Facebook, YouTube, and Twitter shell out significant resources in response, bolstering their Trust & Safety departments to filter out pervasive misinformation, rampant hate speech, and a whole slew of other egregious content.

The results leave much to be desired. Undoubtedly, users are witnessing fewer photoshopped images of the Trumps in Ku Klux Klan robes, but the Trust & Safety (T&S) dynamic is still heavily skewed. First and foremost, in the post-Cambridge Analytica era, big platforms suffer from a lack of credibility. Led by NGO watchdogs, the public harbors a general distrust that cannot be assuaged by platform-driven initiatives alone. Second, because the platforms specialize in expanding their user base, they are ill-equipped to tackle T&S issues, even after incurring significant financial costs. It's a poor equilibrium for both sides.

I began working as a summer associate for policy at Trust Lab, Inc., motivated by the unique vision the founders have for the Trust & Safety industry. Armed with years of leadership experience on large platforms' T&S teams, the founders believed an independent and highly proficient third-party organization would be the panacea. Trust Lab was thus born: providing T&S solutions to platforms, generating performance metrics for NGOs and the public, and serving as a bridge between both parties and across different platforms.

It’s been eye-opening. Between research, drafting the company’s whitepapers, and rapidly iterating on misinformation labeling schemes for large-scale use, my tasks have consistently shown that there is, indeed, an interdependence between theory and product, between traditional policy-oriented ideas and the quick-and-lean development cycle characteristic of start-ups. There’s the art of translating a theoretical typology into an actual, usable product. Then there’s the science (literally, by running regressions and manipulating data) of successively improving the product by trial. And it all comes together when the data suggests an improvement to the theory, feeding insights back into Trust Lab’s policies. I’ve been privileged to witness the entirety of this cycle through my own work.

And although I came for the mission, my work is motivated just as much by the intellect, insight, and personalities of the Trust Lab team as by Trust Lab’s vision. The founders have truly built a company culture that directly reflects its ethos: Integrity, Commitment, Ambition, and Fun. Not a day has gone by in which my interactions with the brilliant folks at Trust Lab did not manifest these traits, be it a Tuesday Teach & Learn, a Thursday Speaker Series, a group meeting to address immediate blockers, or a one-on-one check-in with a founder. I think this embodiment of values speaks not just to the longevity of a start-up, but also to the very nature of the service and product Trust Lab will provide. Trust is embedded in the company’s DNA, and when that’s the case, there’s reason to be optimistic.