
How to Build a Rights-Respecting Trust & Safety Program

December 16, 2022


By Keri Lloyd

This year’s inaugural TrustCon, organized by the Trust & Safety Professionals Association (TSPA), was the first gathering of its kind and an immense achievement for an industry that plays a central role in mitigating risks to people online and offline.

We at Article One had the pleasure of attending virtually and left with a renewed understanding of how human rights and trust and safety reinforce each other. Given that trust and safety seeks to address harmful online behavior and its impacts on users and communities, it is no surprise that conference sessions aligned with the expectations of the UN Guiding Principles on Business and Human Rights. Indeed, the conference underscored the importance of taking a people-first, rights-based approach.

With that in mind, we wanted to offer three key takeaways for trust and safety teams seeking to leverage best practices and proactively embed respect for human rights into product design and trust and safety programs.

I. Understand platform-level risks to effectively prioritize and proactively mitigate them. From targeted harassment in online communities to cheating as a potential signal of privacy and security risks on gaming platforms, experts made clear that salient human rights risks are unique to each platform, shaped by its users and communities, capabilities, and business model. Mapping the universe of risks most relevant to your platform enables teams to prioritize mitigations for the most severe risks of harm and to identify gaps in existing policies and programs. Wikimedia, for example, conducted a human rights impact assessment (HRIA) in partnership with Article One to understand the universe of risks associated with its platform, an exercise that allowed it to focus resources on priority areas, including child rights. To align with best practice and meet the expectations of upcoming regulations, including the EU Digital Services Act, the exercise should be informed by the UN Guiding Principles on Business and Human Rights.
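The prioritization step lends itself to a simple scoring exercise. Below is a minimal, purely illustrative Python sketch of how a team might rank mapped risks by severity, using the UNGP criteria of scale, scope, and irremediability; the risk names, numeric scores, and additive scoring scheme are invented for the example, not a standard methodology.

```python
from dataclasses import dataclass

@dataclass
class PlatformRisk:
    name: str
    scale: int            # 1 (low) to 5 (grave): how serious the impact is
    scope: int            # 1 (few people) to 5 (widespread): how many are affected
    irremediability: int  # 1 (easily remedied) to 5 (irreversible)
    likelihood: int       # 1 (rare) to 5 (frequent)

    @property
    def severity(self) -> int:
        # The UNGPs assess severity by scale, scope, and irremediable
        # character; summing the three is one simple, invented convention.
        return self.scale + self.scope + self.irremediability

# Hypothetical risks for illustration only.
risks = [
    PlatformRisk("Targeted harassment in online communities", 4, 3, 3, 4),
    PlatformRisk("Child safety risks in direct messaging", 5, 2, 5, 2),
    PlatformRisk("Privacy exposure via cheating tools", 3, 2, 2, 3),
]

# Under the UNGPs, severity takes precedence over likelihood, so sort on
# severity first and use likelihood only as a tiebreaker.
for risk in sorted(risks, key=lambda r: (r.severity, r.likelihood), reverse=True):
    print(f"{risk.name}: severity={risk.severity}, likelihood={risk.likelihood}")
```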

Experts at TrustCon also emphasized the importance of proactively considering the potential impacts of product design and engineering decisions on human rights. Emerging regulatory frameworks and voluntary codes of practice, such as the UK’s Online Safety regime and Australia’s Safety by Design principles, expect companies to integrate safety, privacy, and security considerations into product design. Speakers recommended establishing feedback loops with product-facing teams to find opportunities to proactively mitigate risks in salient areas through design decisions, features, and tooling. At Article One, we’ve developed a responsible innovation (RI) methodology to do just that and have partnered with companies to find the right RI approach for them, whether that is holding responsible foresight workshops with cross-functional teams or establishing a responsible innovation program.

II. Enable purposeful consultation through meaningful transparency. There is increasing public interest and discourse around trust and safety, including questions about legitimacy and calls for more transparency and accountability. Civil society and company representatives at TrustCon emphasized that meaningful transparency, or efforts by companies to provide users, impacted communities, civil society, and regulators with actionable information, is key to establishing legitimacy and accountability. Providing independent researchers with access to data, publishing public-facing assessments like country-level HRIAs, sending notices directly to users, and regularly releasing transparency reports are all mechanisms that enable rightsholders to provide feedback and seek remedy, and that enable stakeholders to offer informed critique or support existing efforts.

Experts also flagged that, due to the prevalence of centralized trust and safety models, the industry currently lacks widely known best practices and resources for involving rightsholders and communities in rulemaking and enforcement. Teams seeking to account for local contexts and center rightsholders in governance models can therefore learn from alternative approaches, such as the coordinated efforts with volunteer moderators recently undertaken by Discord and community-centric moderation models like Wikimedia’s.

III. Seek collaboration with an eye toward evolving risks. TrustCon sessions featured multiple references to informal networks of collaboration between practitioners, along with gratitude toward the TSPA and TrustCon for formalizing and expanding them. Experts emphasized that the pace of technological development, for example in areas such as mixed reality and generative artificial intelligence, coupled with growing global access, will make collaboration even more important for addressing evolving and emerging risks to users and communities.

Calls to action across sessions centered on greater industry collaboration with two key areas of opportunity in particular:  

  1. First, interoperability, seen as essential to realizing some platforms’ visions for the metaverse, will require sustained industry collaboration and transparency to ensure that rights are consistently respected across interwoven digital ecosystems. Ensuring that existing venues for engagement on interoperability, such as the Metaverse Standards Forum, become forums for collaborating on human rights standards as well as technical standards is one path forward for companies hoping to realize this call to action.
  2. Second, there is an opportunity for companies to collectively innovate on tools and technical means of addressing some of the most salient cross-industry risks, particularly the safety of the most vulnerable users and the wellbeing of moderators and trust & safety professionals. For example, companies could partner on tools to algorithmically detect violative 3D objects to mitigate the risk of hateful content, or on techniques to reduce moderator exposure to harmful content, such as interactive blurring (a minimal sketch follows this list). This collaboration could include the broader trust and safety ecosystem of online communities, civil society organizations, regulators, and academics, as Microsoft and Dartmouth College did with PhotoDNA.
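To make the interactive-blurring idea concrete, here is a rough Python sketch using the Pillow imaging library: content is shown to a reviewer only in heavily blurred form until they take an explicit action to reveal it. The function names and file path are our own illustrative assumptions, not any particular tool’s API.

```python
from PIL import Image, ImageFilter  # Pillow imaging library

def blurred_preview(path: str, radius: int = 24) -> Image.Image:
    """Return a heavily blurred preview so a moderator sees queued content
    only in degraded form until they explicitly choose to reveal it."""
    return Image.open(path).filter(ImageFilter.GaussianBlur(radius))

def reveal(path: str) -> Image.Image:
    """Load the original image only after a deliberate opt-in action."""
    return Image.open(path)

# A review tool would display blurred_preview() by default and call
# reveal() only on an explicit click, limiting unnecessary exposure.
blurred_preview("queued_item.jpg").show()  # hypothetical file path
```

The same default-degraded pattern can extend to grayscale conversion, reduced thumbnails, or muted audio, giving reviewers control over how much of a harmful item they see at once.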

TrustCon succeeded in recognizing the work that experts and communities have been doing for over 20 years toward creating safe and inclusive online spaces. It made clear that rights-respecting online governance is best achieved through engagement, transparency, and collaboration. We are looking forward to hearing how companies have progressed in these areas at TrustCon 2024.

To learn more and explore ways your company can take these practices forward, get in touch with us at hello@articleoneadvisors.com.