Artificial Intelligence measures for child protection highlighted in social media reports
Children's heavy use of Instagram, TikTok, and YouTube exposes them to risks. YouTube's report shows 43.4% of removed content in early 2024 was related to children, highlighting the platform's focus on safety.
- Published Date: 01:03 | 25 August 2024
- Modified Date: 01:08 | 25 August 2024
The extensive time children spend on platforms like Instagram, TikTok, and YouTube can leave them vulnerable to serious rights violations.
YouTube's measures to protect children from risky content and its transparency report highlight the seriousness of the situation.
YouTube reported that 43.4% of the content removed in the first three months of 2024 was related to children.
Of these, 26.5% were classified as harmful and dangerous, and 9.5% were removed for containing violence.
Additionally, 4% of the 1,443,821,162 comments deleted from the platform directly endangered children's safety.
Owned by Google, YouTube rapidly removes inappropriate content using AI algorithms and permanently bans accounts with repeated violations.
This process is a crucial part of efforts to make children's digital experiences safer.
TikTok's transparency report presents a similar picture. In 2023, 34.5% of removed content was due to exploitation, 26.5% due to alcohol, tobacco, and drug use, and 39% due to obscenity and exposure.
The U.S., U.K., Pakistan, Canada, Bangladesh, Brazil, Türkiye, and Saudi Arabia are among the countries with the most content removed.
The potential risks and dangers in the digital world pose an increasing threat to children.
However, generative AI algorithms stand out as the most effective mechanism for protecting children.
To fully realize AI's potential, critical factors such as ethical responsibilities, privacy, and human oversight must be considered.
Transparent and responsible AI operation can ensure the best protection for children in the digital world.