
Five Steps Business Can Take to Meet Growing Expectations for Online Child Safety

March 29, 2023


By Ilse Heine

Children’s rights apply in the digital environment. This was made explicit for the first time by the UN Committee on the Rights of the Child in General Comment No. 25, which marks its two-year anniversary this month. While safety and privacy are critical for all users of digital services, children (under 18) are particularly vulnerable both online and offline, as they are in the earlier stages of their cognitive, social, and emotional development. General Comment No. 25 recognizes the right of the child to be protected in the digital space and defines the shared responsibilities of states and businesses in this regard. As such, the General Comment set the stage for growing regulatory pressure on, and a greater focus by, business to address the unique risks to children’s rights in the digital environment.

Protecting children’s rights in the digital environment is more important than ever. The pandemic and the increasing reliance on digital solutions for learning and other key societal functions have had a profound impact on children’s education and well-being, as one UNICEF study observes. According to one study, the share of children with excessive screen time and underage social media access increased by 10 to 15% amid widespread lockdowns. Consequently, businesses have faced growing demands from stakeholders globally, including regulators, to address their adverse impacts on children in the digital space. For instance, in 2021, the UK introduced the Age-Appropriate Design Code, which contains 15 standards that online services need to follow, as well as the Online Safety Bill, which also includes child safety provisions. In the U.S., two bills are pending: the Children and Teens’ Online Privacy Protection Act (CTOPPA, or COPPA 2.0) and the Kids Online Safety Act, the latter of which targets website designs and features directed at children or used by children and teens.

Based on our experience advising companies on safeguarding child rights, including in the digital environment, Article One recommends the following five steps for companies to align with the Children’s Rights and Business Principles, meet growing stakeholder expectations, and prepare for regulatory requirements on child online safety.

1. Conduct a Child Rights Impact Assessment to understand the full range of potential risks facing children online

The pandemic has resulted in a significant uptick in the distribution and production of child sexual abuse material (CSAM), with the U.S. National Center for Missing and Exploited Children reporting an approximately 38% increase in CSAM in 2021 compared to 2020. While this undoubtedly deserves urgent solutions, many other online risks to children also need attention, including but not limited to cyberbullying, harassment, exposure to harmful content, mis/disinformation, privacy violations, and grooming. Conducting a stand-alone child rights impact assessment (CRIA) to map actual and potential risks to children, and to prioritize the most salient among them, is the most effective way for companies to surface and mitigate the full range of child rights risks online.

In conducting a CRIA, one framework that can help inform risk categorization is the 4Cs framework. Developed by the knowledge platform Children Online: Research and Evidence (CO:RE), the 4Cs classifies online risks to children as follows:

• Contact: the child experiences and/or is targeted by potentially harmful contact (e.g., harassment, grooming, or stalking)

• Conduct: the child witnesses, participates in, and/or is a victim of potentially harmful conduct (e.g., bullying, hostile peer activity)

• Content: the child engages with and/or is exposed to potentially harmful content (e.g., violent or gory content)

• Contract: the child is party to and/or exploited by a potentially harmful contract (e.g., commercialization of children’s personal data)

The 4Cs risk classification can help companies evaluate the range of risks that children may experience in relation to their digital products and/or services. This classification also recognizes important cross-cutting risks, including children’s privacy, health, and fair treatment and equal inclusion, which can occur in relation to any of the four categories.  

2. Distinguish between ‘risk’ and ‘harm’ with a focus on vulnerable groups 

When conducting a CRIA or making a business decision that could impact children, it is important for a company to distinguish between an online risk and a harm: risk is the probability of harm, while harm encompasses a range of negative consequences for a child’s emotional, physical, and mental well-being. In other words, a risk does not always translate into a harm, and not all children will be equally affected. Children can be affected differently by exposure to the same type of risk, depending on diverse factors that make a child more resilient or more vulnerable, including age, gender, country, digital literacy, and other offline vulnerabilities (e.g., living in care, income level). For instance, a report published by Plan International, which surveyed 14,000 girls and young women in 31 countries, found that more than half of respondents had been harassed and abused online, and that one in four felt physically unsafe as a result. Another report found that the share of children who reported having been bothered or upset on the internet in 2020 varied by country, from 7% (Slovakia) to 45% (Malta). As such, growing evidence points to the importance of evaluating where risks, and the extent of resulting harms, may be heightened for certain children, and developing tailored solutions accordingly.

3. Actively engage with children 

It is critical to engage with children directly, both as part of a CRIA and on an ongoing basis (where appropriate and feasible), to understand what they consider to be harmful and beneficial in the online environment. Very often, children are excluded from conversations on the very solutions aimed at their protection. Along these lines, the 5Rights Foundation notes, “children are often early adopters of emerging services and technologies and therefore the first to spot its contradictions and challenges, yet they are rarely asked their opinion, and are very often the last to be heard.” This is echoed by the UN’s General Comment No. 25, which states that risk identification should also include “listening to [children’s] views on the nature of the particular risks that they face” and “giving due weight to their views.” There are various methods for incorporating children’s feedback, including online surveys, face-to-face interviews, focus groups, and participatory exercises. For instance, Oko, an AI-based platform that facilitates small group learning activities, conducted in-person sessions with children and engaged them as testers and design partners. In all cases, companies need to comply with relevant privacy regulations (e.g., GDPR, COPPA) and carefully follow ethical principles applicable to research involving children.

4. Ensure that online restrictions are necessary, proportionate, and legitimate 

In protecting children online, companies should be careful to avoid measures that may unduly infringe on children’s rights and to avoid, as one CO:RE analysis warns, “the boomerang effect where solutions in one area can create limitations in another.” Indeed, General Comment No. 25 addresses the tension that can occur between different rights. With respect to content moderation and content controls, for example, the General Comment states that these measures “should be balanced with the right to protection against violations of children’s other rights, notably their right to freedom of expression and privacy.” When a company is considering restricting a right that is not absolute, the restriction should meet the three-part test of legality, necessity, and proportionality. Moreover, the Convention on the Rights of the Child recognizes that children need protection, but also that they are individuals with rights, including autonomy and dignity. As such, children should be empowered to navigate online spaces safely (see, for instance, UNICEF’s efforts around digital literacy) while reaping the benefits of the online world. In practice, this requires companies to seek opportunities to advance children’s rights while implementing meaningful due diligence and mitigating unintended consequences.

5. Protect children by empowering them to engage safely online and by adopting other risk mitigations, as appropriate

There are several tools at companies’ disposal to empower children to engage safely with their digital products and services. For instance, a child safeguarding policy provides a company with a formal approach to managing its duty of care to do all it can to protect children from harm. YouTube has a dedicated child safety policy, which comprehensively outlines the company’s policies and measures related to child online protection and includes a child-friendly video version. Providing guidance and online resources to children and their parents can also empower them to safely navigate a digital product or service. A good example is LEGO’s guide for parents, “The Digitally Smart Guide,” which covers key topics related to digital citizenship and online child safety. Young users can also learn internet safety tips and tricks through LEGO’s interactive game, Doom the Gloom, in which children complete missions with Captain Safety, LEGO’s mascot and the game’s main character.

Many experts also emphasize the importance of including children in the design process, particularly, though not exclusively, for technologies intended to be used by children. UNICEF took this approach with Oky, a digital solution for girls aged 10-19 to learn about menstruation in fun and creative ways. In developing the app, UNICEF worked with adolescent girls in Indonesia and Mongolia to gather insights and input on their online and offline lives. In another approach, H&M added a question to its Responsible AI Checklist, asking AI teams to thoroughly consider the effects of AI products on children and to ensure measures are included for the protection of children’s rights. Other potential mitigation measures include but are not limited to: capacity building and training of internal teams, hiring a child safeguarding expert, joining or forming relevant industry associations (e.g., the Tech Coalition, which works to end online child sexual exploitation and abuse), and developing mechanisms to elevate children’s voices within the business (e.g., a Youth Board that creates a bridge between senior executives and the world of youth).

The risk mitigations adopted by a company will vary depending on the nature of its digital services or products, the company’s size and location, relevant child stakeholders, salient risks, and other contextual factors. Regulation will also impose specific requirements that companies need to meet. Nevertheless, particularly as obligations around child online safety rapidly move from voluntary to mandatory, it is imperative that businesses holistically assess and act on any adverse impacts on children in the digital space with which they are potentially involved.

If you have any questions about how best to evaluate and integrate these recommendations into your company’s social responsibility strategy, please reach out to us at hello@articleoneadvisors.com.