The alarming rise of harmful AI-generated content online has prompted urgent calls from the United Nations for comprehensive measures to protect children from abuse, exploitation, and psychological harm. Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU), highlighted the myriad ways children are targeted online, including grooming, deepfakes, cyberbullying, and exposure to inappropriate content. He noted that during the COVID-19 pandemic, many children, particularly girls, faced severe online abuse that often resulted in physical harm.

Advocacy organizations warn that predators can exploit AI technologies to analyze children’s online behavior and emotional states, thus tailoring their grooming strategies. Furthermore, AI is increasingly facilitating the generation of explicit fake images of minors, contributing to a disturbing trend of sexual extortion. A 2025 report by the Childlight Global Child Safety Institute revealed a drastic rise in technology-facilitated child abuse cases in the U.S., jumping from 4,700 in 2023 to over 67,000 in 2024.

UN member states are now taking more significant steps to address this pressing issue. In late 2025, Australia became the first country to prohibit social media accounts for children under 16, citing the risks posed by the content children encounter online. A government report indicated that a significant portion of children aged 10 to 15 had encountered hateful, violent, or distressing content, with over half reporting instances of cyberbullying, primarily on social media platforms. Other nations, including Malaysia, the UK, France, and Canada, are considering similar regulations.

In early 2026, various UN bodies signed a Joint Statement on Artificial Intelligence and the Rights of the Child, unequivocally addressing the substantial risks posed by AI and society’s struggles in managing them. The statement underscored a widespread lack of AI literacy among children, parents, and educators. It also pointed out the insufficient technical training for policymakers regarding AI frameworks and data protection.

Tech companies are under increased scrutiny for their role in this landscape. The Joint Statement indicates that many AI tools are not adequately designed with children’s safety in mind. Zavazava expressed a desire for tech companies to engage cooperatively in addressing these challenges, asserting that it is possible to balance innovation and safety. He emphasized that responsible AI deployment can allow companies to thrive while safeguarding vulnerable populations.

The UN is calling for collective responsibility across society, including among tech developers, to ensure that digital products respect the rights of children. Though concerns about children's rights in the digital age have been raised before, notably in the 2021 guidance on the digital environment issued under the Convention on the Rights of the Child, the UN bodies assert that more thorough guidance is necessary for effective regulation. They have outlined detailed recommendations to strengthen children's online protection, emphasizing the need for collaboration among parents, educators, regulators, and the tech industry to create a safer digital environment.


Discover more from FijiGlobalNews