Social media platforms offer a range of parental controls and safety features for teens, but critics question how much protection they actually provide.

Putting Safety Centers to Work for Teens on Social Media Platforms, and Reporting Sexual Exploitation and Child Abuse

A report from the Center for Countering Digital Hate suggests that TikTok may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account.

Michela Menting, a digital security director at market research firm ABI Research, agreed that social media platforms are “offering very little of substance to counter the ills their platforms incur.” She said their solutions put the onus on guardians, who are left to use various parental controls, such as blocking access, and more passive options, such as monitoring and surveillance tools that run in the background.

In response, Discord recently refreshed its Safety Center, where parents can find guidance on how to turn on safety settings, FAQs about how Discord works, and tips on how to talk about online safety with teens. Some of its parental control tools include an option to prevent a minor from receiving friend requests or direct messages from people they do not know.

After the leaked documents sparked an outcry, Meta-owned Instagram paused its plans to release a version of its app for kids under 13 and focused on making its main service safer for young users.

Meta’s Family Center offers supervision tools and resources for parents. Liza Crenshaw, a Meta spokeswoman, told CNN Business that the company’s vision for Family Center is to eventually allow parents and guardians to help their teens manage their experiences across Meta technologies.

Another feature encourages users to take a break after a certain amount of time, suggesting they take a deep breath, write something down, or listen to a song. And if teens have been dwelling on one kind of content for too long, the social network’s recommendations will nudge them toward other topics, such as architecture and travel destinations.

Snap previously said it’s working on more features, such as the ability for parents to see which new friends their teens have added and to confidentially report concerning accounts that may be interacting with their child. Younger users will also be able to notify their parents when they report an account or piece of content on the platform.

The company told CNN Business it will continue to build on its safety features and consider feedback from the community, policymakers, safety and mental health advocates, and other experts to improve the tools over time.

In July, TikTok announced new ways to filter out mature or potentially problematic videos. The new safeguards assign a “maturity score” to videos detected as potentially containing mature or complex themes. The company also rolled out a tool that aims to help people decide how much time they want to spend on TikTok. The tool lets users set regular screen-time breaks and provides a dashboard that details the number of times they opened the app, a breakdown of daytime and nighttime usage, and more.

In addition to parental controls, the app restricts access to some features, such as Live and direct messaging, for younger users. A pop-up also appears when teens under 16 are ready to publish their first video, asking them to choose who can watch it. Push notifications are disabled after 10 p.m. for users ages 13 to 15.

Although Discord did not appear before the Senate last year, the popular messaging platform has faced criticism over the difficulty of reporting problematic content and the ease with which strangers can get in touch with young users.

Still, it’s possible for minors to connect with strangers on public servers or in private chats if they are invited by someone else in the room or if a channel link is dropped into a public group the user has accessed. All users, including minors, can receive friend invitations from anyone in the same server, which then allows them to exchange private messages.

TikTok, harmful content on suicide and self-harm, and how it is removed: a Center for Countering Digital Hate report

According to a report from the Center for Countering Digital Hate (CCDH), it can take less than three minutes after signing up for a TikTok account to encounter content related to suicide, and about five more minutes to find a community promoting eating disorder content.

In the report, the CEO of the CCDH said that young people’s feeds are bombarded with harmful, harrowing content that can have a significant cumulative impact on their understanding of the world around them.

A TikTok spokesperson pushed back on the study, calling it an inaccurate depiction of the viewing experience on the platform for several reasons, including the small sample size, the limited 30-minute testing window, and the way the accounts scrolled past a series of unrelated topics to look for other content.

TikTok does not allow the depiction, promotion, or glorification of activities that could lead to suicide or self-harm. Of the videos removed for violating its policies on suicide and self-harm content from April to June of this year, 93.4% were removed at zero views, 91.5% were removed within 24 hours of being posted, and 97.1% were removed before any reports, according to the company.

The spokesperson added that the CCDH does not distinguish between positive and negative videos on given topics, and that people often share empowering stories about eating disorder recovery.

This isn’t the first time social media has been tested this way. In October 2021, US Sen. Richard Blumenthal’s staff registered an Instagram account as a 13-year-old girl and proceeded to follow some dieting and pro-eating disorder accounts (the latter of which are supposed to be banned by Instagram). Instagram’s algorithm soon began almost exclusively recommending that the young teenage account follow more and more extreme dieting accounts, the senator told CNN at the time.

And users who search for banned words or hashtags, such as #selfharm, are diverted to local support resources instead.