
TikTok algorithm showing teenage boys 'more graphic, violent content than girls' - Former employee

Andrew Kaung held the role of User Safety Analyst in TikTok's Dublin office from December 2020 until June 2022
Jack Quann

15.31 4 Sep 2024


Teenage boys see more graphic, violent and misogynistic content than girls on TikTok, a former safety analyst for the social media giant has warned.

Andrew Kaung held the role of User Safety Analyst in TikTok's Dublin office from December 2020 until June 2022.

He told Lunchtime Live that during an internal investigation, he discovered the algorithm was showing harmful content to both boys and girls.

"That's when I found out that teenage boys [tend to be seeing] more of that graphic violent content and misogynistic content [than] girls," he said.

"I think the girls were also shown some of those impossible beauty standards... that are very harmful to the girls.

"That has an impact on the girls as well in the way that they are insecure about how they look."

Flagging content

Mr Kaung said people who don't flag inappropriate content will likely see more of it.

"I'm pretty sure a lot of [people] don't [report it]... because they just decide to swipe it out rather than seeing it and then reporting it," he said.

"The thing is the recommendation engine is actually fuelled in that way.

"So if you don't report it - and the recommendation engine thinks that you like it - then they potentially give you more of that violating or harmful content".

Mr Kaung said his job at TikTok involved forecasting the volume of content people saw.

"My job was to basically forecast and predict the amount of volume that people are seeing in every region or country," he said.

"My job is to make sure that there is enough, sort of, human moderation resources there to make sure that the platform is safe for all users."

'Learning detection model'

Mr Kaung said artificial intelligence also has a role to play.

"There is about 10% to 15% which can be missed by the machine learning detection model, so those are actually reviewed by the human moderators," he said.

"Human models also help the AI in the way that they help label whether content is harmful content or not as well - so that's how they operate hand-in-hand together."

Asked why certain videos - such as breastfeeding videos - are flagged as inappropriate and removed, Mr Kaung said the algorithm does not handle speech or video as well as it handles images.

"I think it has something to do with the detection model, so to speak," he said.

"The detection model works really well in terms of images and it doesn't really do well in speeches or video.

"So in the case of breastfeeding, I think the machine would just potentially sort of see the breast and they just [think] this is a nudity and then take it down.

"Those with the detection model need to be doing much more than just detecting whether it is a breast or whether it is genitalia.

"It has to be [done in] more of a contextual way [by asking] whether it is a campaign about breastfeeding awareness or whether it is campaign about breast cancer."

Under-age users

Asked about younger children using the platform, Mr Kaung said governments should do more on the issue.

"That would be the Government's job to come up with some sort of regulations," he said.

"I think Ofcom in the UK is trying to implement some regulation where kids from 13 to 18 have to be verified by an ID.

"I think similar regulations should be implemented here and across Europe as well.

"There should be a way for the recommendation algorithm to tone down for those users and have a more targeted approach - so that the kids who are using between 13 and 18 shouldn't be watching that harmful content".

Parents and children

Mr Kaung said parents should also be aware of what their children are viewing.

"There should be a relationship [and] transparency between the kids and the parents in a way that the parents know what sort of content they're seeing," he said.

"The kids should also be able to freely talk about it in an [open] way.

"The good thing would be for a tech company to implement a feature where you can see when your children are being exposed to but at the moment, we don't have that."

Mr Kaung added that while there are people in social media companies who are "very much focused on child safety", "the corporation['s] objective is about making revenue and how much profit they can get."

TikTok has been contacted for comment.

Main image: Former TikTok employee Andrew Kaung speaking on Lunchtime Live, 4-9-24. Image: Newstalk
