TikTok Under Fire Over Teen Safety

A new investigation has accused TikTok of directing minors toward sexually explicit material through its search suggestions—sparking renewed scrutiny of how social media platforms handle child safety and age verification online.

Report Finds 13-Year-Old Test Accounts Exposed to Sexualized Content

According to a report released by UK watchdog Global Witness on October 3, researchers created seven TikTok accounts in the United Kingdom, each registered as belonging to a 13-year-old—the platform’s minimum age requirement. Although the researchers used factory-reset phones with no search history and enabled TikTok’s restricted mode, the accounts were still shown “highly sexualized” search suggestions.

Global Witness stated that these inappropriate prompts appeared “the very first time the user clicked into the search bar” on several of the test accounts. The organization said the experiment showed TikTok’s algorithm was not simply displaying adult material but actively steering minors toward it, describing this as a “clear design failure” and potential violation of child protection standards.

The watchdog’s findings have reignited concerns about how tech companies manage the online experiences of minors, especially in light of similar lawsuits accusing TikTok of harming teenagers’ mental health.

TikTok Responds, Pledging Safety Improvements

In response to the report, TikTok said it had immediately investigated the claims, removed any content violating its policies, and updated its search suggestion system. A company spokesperson emphasized that TikTok has “more than 50 features and settings” aimed at supporting teen safety and that “nine in ten violative videos” are taken down before being viewed.

TikTok’s community guidelines prohibit nudity, sexual acts, or any sexually suggestive behavior involving minors. The company says it removes approximately 6 million underage accounts each month through age detection technologies and human moderation.

TikTok also noted that it has taken major steps to comply with the UK’s Online Safety Act, a landmark law requiring platforms accessible in the UK to prevent children from viewing harmful material, including pornography and self-harm content. The company says it has worked with Ofcom—the UK communications regulator—since 2020 to strengthen protections for users under 18.

Experts Warn of Legal and Ethical Implications

Legal experts and online safety advocates argue that the Global Witness findings could amount to a breach of the UK’s Online Safety Act. Media lawyer Mark Stephens described the results as “a clear violation” of the law, suggesting TikTok could face penalties if regulators confirm the findings.

However, critics of the act, such as the Electronic Frontier Foundation, warn that mandatory age verification could compromise users’ privacy across all age groups.

Beyond the legal implications, the report underscores a growing ethical challenge for tech companies: how to maintain engagement-driven algorithms without compromising the welfare of young users. With increasing global attention on online child protection, TikTok, YouTube, and Instagram are under mounting pressure to balance business incentives with their duty of care.

As governments tighten regulations and watchdogs continue investigations, TikTok’s case may serve as a turning point in defining accountability for algorithmic influence—especially when it comes to safeguarding children in the digital age.
