What can parents do to help their children stay safe online? A look at Instagram, Snapchat, TikTok and other platforms
The social media regulations come as parents and lawmakers grow increasingly concerned about kids' and teenagers' social media use and how platforms like TikTok and Instagram are affecting young people's mental health.
Social media platforms offer little in the way of solutions to the ills their services incur, but the tools they do provide are still useful, according to Michela Menting. She pointed out that the existing solutions put the onus on guardians to activate the various parental controls they are given, along with more passive options such as monitoring and surveillance tools.
Guardians need to learn how to use those parental controls, while staying aware that teens can often circumvent them. Here's a closer look at what parents can do to help keep their kids safe online.
After internal documents were leaked, Meta-owned Instagram paused plans to release a version of the app for kids under 13 and focused instead on making its main service safer for young users.
The hub also offers a guide to Meta's VR parental supervision tools from ConnectSafely, a nonprofit aimed at helping kids stay safe online, to assist parents in discussing virtual reality with their teens. Guardians can see which accounts their teens have blocked and access supervision tools; they can also approve a teen's download or purchase of an app that is blocked by default based on its rating, or block specific apps that may be inappropriate for their teen.
Another feature encourages users to take a break from the app after a certain amount of time, suggesting that they take a deep breath, write something down, check a to-do list or listen to a song. Instagram also says it nudges teens toward different topics, such as architecture and travel destinations, if they have been dwelling on any one type of content for too long.
In August, Snapchat introduced a parent guide and hub aimed at giving guardians more insight into how their teens use the app, including whom they've been talking to within the last week (without divulging the content of those conversations). To use the feature, parents have to create their own Snapchat account, and teens have to opt in.
TikTok
In July, TikTok announced new ways to filter out mature or "potentially problematic" videos, allocating a "maturity score" to videos detected as potentially containing mature or complex themes. It also rolled out a tool that aims to help people decide how much time they want to spend on TikTok: a dashboard showing the number of times a user opened the app, the total time they spent in it, and when they last used it.
The company told CNN Business it will continue to build on its safety features and consider feedback from the community, policymakers, safety and mental health advocates, and other experts to improve the tools over time.
The app restricts access to some features for younger users, such as livestreaming and direct messaging. A pop-up also surfaces when teens under 16 are ready to publish their first video, asking them to choose who can watch it. Push notifications are curbed after 9 p.m. for account users ages 13 to 15.
Discord, the popular messaging platform, did not appear before the Senate last year, but it has been criticized over difficulties reporting problematic content and the ability of strangers to get in touch with young users.
Still, it’s possible for minors to connect with strangers on public servers or in private chats if the person was invited by someone else in the room or if the channel link is dropped into a public group that the user accessed. By default, all users — including users ages 13 to 17 — can receive friend invitations from anyone in the same server, which then opens up the ability for them to send private messages.
Utah Gov. Spencer Cox said he remains very hopeful that lawmakers will be able to pass legislation across the country that changes children's relationship with social media.
The laws passed through Utah’s Republican-supermajority Legislature are the latest reflection of how politicians’ perceptions of technology companies are changing — and that includes pro-business Republicans.
Other red states, such as Arkansas, Texas, Ohio and Louisiana, have similar proposals in the works, along with New Jersey. In California, a new law requires tech companies to put kids' safety first by barring them from using children's personal information in ways that could harm them physically or mentally.
Children's advocacy groups generally welcomed the California law. Jim Steyer, the CEO and founder of Common Sense Media, a nonprofit that focuses on kids and technology, called it a step forward in holding social media companies accountable for how kids are protected online, saying legislation like this is necessary to hold big tech accountable for creating safer and healthier online experiences for kids and teens.
Utah lawmakers have long focused on children and the information they can access online. Two years ago, Cox signed legislation that called on tech companies to automatically block porn on cellphones and tablets sold in the state, citing the dangers it posed to children. The bill was later revised to prevent it from taking effect unless at least five other states passed similar laws.
Nicole Bembridge, an associate director at NetChoice, said that Utah will soon require online services to collect sensitive information about teens and families, including government-issued IDs and birth certificates, putting their private data at risk.