What Happened to Cai’s Social Media Feed?
It was 2022, and 16-year-old Cai was doing what most teenagers do—scrolling through his phone. Initially, his social media feed offered the usual lighthearted content, including a cute dog video. But then, something changed. “Out of nowhere,” Cai recalls, “I was recommended videos of someone being hit by a car, a monologue from an influencer sharing misogynistic views, and clips of violent fights.” Disturbed by the sudden shift, Cai asked, “Why me?”
Cai’s experience is far from unique. Across the globe in Dublin, Andrew Kaung, an analyst who worked on user safety at TikTok from December 2020 to June 2022, investigated what content was being recommended to UK users, specifically teenagers like Cai. His findings were alarming. He discovered that some teenage boys were being shown violent and pornographic content, as well as videos promoting misogynistic views, while teenage girls were often recommended entirely different content based on interests such as music and makeup.
Can AI Tools Protect Teenagers?
Social media companies like TikTok and Instagram rely heavily on artificial intelligence (AI) tools to monitor content. These AI systems are designed to remove harmful content automatically and to flag other potentially dangerous material for human moderators. TikTok says that 99% of the content it removes for breaking its rules is taken down by AI or human moderators before it reaches 10,000 views.
However, Andrew Kaung witnessed the limitations of these AI systems firsthand. “During my time at TikTok, videos that weren’t removed or flagged by AI—or reported by users—would only be reviewed manually after reaching a certain threshold, which at one point was set to 10,000 views,” he says. This moderation delay meant some harmful content could easily slip through the cracks, exposing younger users to disturbing material before human eyes could catch it.
The same issue existed during Andrew’s previous role at Meta, which owns Instagram and Facebook. While Meta’s AI also flagged most harmful videos, it relied heavily on users to report content they had already seen. “I raised concerns at both companies but was often met with inaction,” Andrew admits. “The amount of work involved or the cost seemed to be the main reason nothing changed quickly.” Despite some improvements made over time, younger users like Cai were left vulnerable in the interim.
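To make that workflow concrete, here is a minimal, hypothetical sketch of the kind of threshold-based routing Andrew describes. The 10,000-view figure comes from his account; the function, field names, and queue labels are illustrative assumptions, not either company’s actual system.

```python
# Hypothetical illustration of a view-count threshold for manual review.
# Only the 10,000 figure comes from the article; everything else is assumed.

MANUAL_REVIEW_THRESHOLD = 10_000

def route_video(ai_flagged: bool, user_reported: bool, view_count: int) -> str:
    """Decide whether a video reaches human moderators."""
    if ai_flagged or user_reported:
        return "human_review"      # flagged or reported content is reviewed
    if view_count >= MANUAL_REVIEW_THRESHOLD:
        return "human_review"      # unflagged content waits for the threshold
    return "no_review"             # harmful content can slip through here

# A video the AI misses and nobody reports is untouched until it is widely seen.
print(route_video(ai_flagged=False, user_reported=False, view_count=9_500))  # no_review
```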
What Is Ofcom's View on Social Media Companies?
Regulators echo these concerns. UK media regulator Ofcom confirms that algorithms from all major social media platforms have unintentionally recommended harmful content to children. “Companies have been turning a blind eye and have been treating children as they treat adults,” says Almudena Lara, Ofcom’s online safety policy development director. Ofcom warns that the algorithmic pathways pushing hate and violence toward teenage boys have received far less attention than those affecting young girls, such as content promoting eating disorders and self-harm.
How Has Cai's Experience Impacted His View of Social Media?
Despite social media companies’ attempts to safeguard teens, Cai’s experience suggests the systems in place are not always effective. TikTok claims it uses “industry-leading” safety settings and employs over 40,000 people dedicated to keeping users safe. Similarly, Meta asserts that it has over 50 tools, resources, and features to create positive, age-appropriate teen experiences. Yet Cai remains unconvinced.
“I tried using tools on both Instagram and TikTok to say I wasn’t interested in violent or misogynistic content,” Cai explains, “but the same kinds of videos kept coming.” Cai admits that his interest in UFC (Ultimate Fighting Championship) likely influenced some of the content he was shown. Still, he found himself watching increasingly extreme videos from controversial influencers. “You get the picture in your head, and you can’t get it out. It stains your brain, and you think about it for the rest of the day.”
While girls his age were being recommended videos about music and makeup, Cai noticed his feeds were filled with violent and misogynistic content. Even now, at 18, he continues to see disturbing material on both Instagram and TikTok. As we scroll through his Instagram Reels, Cai points out an image making light of domestic violence, showing two characters side by side—one with bruises—under the caption, “My Love Language.” Another video depicts someone being run over by a lorry.
Cai worries about the impact these videos have on others. He shares a story about a friend who became increasingly drawn to content from a controversial influencer and eventually started adopting misogynistic views. “He took it too far,” Cai says. “He started saying things about women that were shocking. It’s like you have to give your friend a reality check.”
Despite his efforts to reset the algorithms by commenting on posts he disliked and undoing accidental likes, Cai says the violent and hateful content continues to dominate his feeds. “It feels like no matter what I do, the algorithm just keeps pushing the same stuff at me.”
How Do Social Media Algorithms Function?
So, how do these algorithms work, and why are they so hard to influence? According to Andrew Kaung, it all boils down to engagement. “The algorithms are fueled by engagement, regardless of whether it’s positive or negative,” he explains. That could be one reason Cai’s attempts to manipulate the algorithms failed. When users sign up, they specify their likes and interests, which the algorithm initially uses to recommend content. But beyond that, the system begins to serve up content based on the behavior of similar users.
TikTok claims its algorithms are not influenced by gender. However, Andrew argues that the interests teenagers specify when they sign up can create a gendered effect. For instance, teenage boys who express an interest in sports or action might inadvertently be pushed toward violent content because other adolescent boys with similar preferences have engaged with it, while teenage girls who select interests like “pop singers, songs, and makeup” might never see it. These algorithms are driven by “reinforcement learning,” a method in which AI learns by trial and error, Andrew says. “The system is designed to maximize engagement by showing users videos they are likely to spend more time watching, commenting on, or liking—all to keep them coming back for more.”
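As a rough illustration of the feedback loop Andrew describes, here is a minimal sketch of engagement-weighted recommendation: a video’s score rises with how much users with overlapping stated interests watched, liked, or commented on it, regardless of whether the material is benign or harmful. The names, weights, and similarity measure are assumptions made for illustration, not any platform’s real ranking system.

```python
from collections import defaultdict

# Hypothetical engagement weights: the system only "sees" engagement,
# not whether the content itself is harmful or benign.
WEIGHTS = {"watch_seconds": 0.1, "like": 1.0, "comment": 2.0}

def engagement_score(events):
    """Sum weighted engagement signals for one video."""
    return sum(WEIGHTS[kind] * amount for kind, amount in events)

def recommend(user_interests, user_profiles, video_events, top_k=3):
    """Rank videos by engagement from users with overlapping stated interests."""
    scores = defaultdict(float)
    for other_user, interests in user_profiles.items():
        similarity = len(user_interests & interests)  # crude interest overlap
        if similarity == 0:
            continue
        for video, events in video_events.get(other_user, {}).items():
            scores[video] += similarity * engagement_score(events)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy example: a new user who selected "sports" inherits whatever similar
# users engaged with, even if that happens to include violent clips.
profiles = {"boy_a": {"sports", "gaming"}, "girl_b": {"music", "makeup"}}
events = {
    "boy_a": {"fight_clip": [("watch_seconds", 120), ("like", 1)],
              "football_clip": [("watch_seconds", 30)]},
    "girl_b": {"makeup_tutorial": [("watch_seconds", 90), ("comment", 1)]},
}
print(recommend({"sports"}, profiles, events))  # ['fight_clip', 'football_clip']
```

The point of the sketch is that nothing in the scoring distinguishes a fight clip from a football clip; only engagement matters, which is why a user’s “not interested” signals can be drowned out by what similar users keep watching.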
One major issue Andrew identified during his time at TikTok was that the teams responsible for training and coding the algorithms did not always know the exact nature of the content being recommended. “They see the number of viewers, the age, the trend, that very abstract data. They wouldn’t necessarily be actually exposed to the content,” Andrew explains. In 2022, he and a colleague decided to examine the types of videos being recommended to a range of users, including 16-year-olds. Concerned about violent and harmful content being served to teenagers, they proposed that TikTok update its moderation system.
Their suggestions included labeling videos so employees could see why certain content was harmful—whether it involved extreme violence, abuse, or pornography—and hiring more specialized moderators. Unfortunately, Andrew says their recommendations were rejected at the time. TikTok, however, insists that it had specialist moderators in place during that period and continues to hire more as the platform grows. The company also states that it separates different types of harmful content into what it calls “queues” for moderators.
Is It Possible to Ask Social Media Companies to Regulate Themselves?
Andrew reflects on his time inside TikTok and Meta, expressing frustration at the difficulty of making meaningful changes. “We are asking a private company whose interest is to promote their products to moderate themselves, which is like asking a tiger not to eat you,” he says. Despite his belief that teens’ lives would be better without smartphones, Andrew acknowledges that banning social media is not a feasible solution for many young people.
Cai agrees. For him, his phone is an integral part of his life: an essential tool for chatting with friends, navigating when he’s out, and even paying for things. Instead of banning phones, Cai argues that social media companies should listen more closely to what teenagers say they don’t want to see. “I feel like social media companies don’t respect your opinion as long as it makes them money,” Cai says. He wants the platforms to make their tools for indicating preferences more effective and responsive to users’ needs.
Will New Regulations Improve Online Safety for Teens?
In the UK, new online safety legislation aims to tackle these concerns by forcing social media companies to verify users’ ages and preventing sites from recommending porn or other harmful content to young people. Ofcom will be responsible for enforcing the new law, which is set to come into effect in 2025. Almudena Lara from Ofcom stresses the importance of addressing the harm caused by algorithms, particularly to teenage boys. “It tends to be a minority of [children] that get exposed to the most harmful content,” Lara explains. “But we know, however, that once you are exposed to that harmful content, it becomes unavoidable.”
TikTok maintains that it uses “innovative technology” and provides “industry-leading” safety and privacy settings for teens. Meta, too, highlights its extensive resources and features for creating positive experiences for young users. Yet, for Cai and others like him, the current systems have yet to catch up. With new regulations on the horizon, there is hope that social media companies will be held accountable for the safety of their youngest users. Until then, teens like Cai remain caught in a digital world where harmful content can be just one scroll away.