YouTube, TikTok And Snap Will Have To Testify Before Congress Over Kids' Safety
The prevalence of internet-based communication platforms, such as social media networks, video streaming sites, and instant messaging applications, has drastically changed how children interact. Due to their relative accessibility and ubiquity in today’s society, kids have begun to rely on these platforms for entertainment and communication. However, as these platforms become increasingly popular among children, it is important to consider how they may impact child safety.
There are numerous ways that internet-based communication platforms can affect the safety of children on an individual and societal level. For example, through targeted advertising and behavioural analytics tools such as Google Analytics, companies can gain access to sensitive personal information about users. Additionally, the prevalence of online harassment, cyberbullying, sexting, identity theft and other malicious behaviour threatens kids who are unaware of the necessary safety measures or how to react in threatening situations.
Finally, young people who spend too much time on digital devices often neglect their physical well-being by not engaging in outdoor activities or getting enough sleep; this can harm their physical health (e.g., weight gain) and, ultimately, their overall well-being and safety.
YouTube
YouTube, the internet giant, is being called to testify before Congress as lawmakers take a closer look at how the platform affects the safety of children in its user base.
With its wide range of content, including videos and ads that are not always child-friendly, YouTube faces unique challenges in keeping children safe.
This article will take a closer look at these challenges and how YouTube addresses them.
Content Moderation
YouTube is a popular video-sharing site and has implemented content moderation systems to ensure that only appropriate content is featured on the platform. However, it is difficult for moderators to manually review every video posted on the platform and determine whether any of them could harm kids.
To address this issue, YouTube has developed an automated system for flagging inappropriate content. This system uses algorithms that detect violent and sexual imagery, hate speech or potentially dangerous comments. YouTube also utilises human reviewers who can provide additional context when videos are assigned “red flags” by the algorithm.
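To make that workflow concrete, here is a minimal, purely illustrative Python sketch of a flag-then-review pipeline of the kind described above. The category names, scores and threshold are assumptions made for the example and are not YouTube's actual system.

```python
from dataclasses import dataclass, field

FLAG_THRESHOLD = 0.8  # scores at or above this go to human reviewers (assumed value)

@dataclass
class Video:
    video_id: str
    title: str
    scores: dict  # category -> classifier confidence, e.g. {"violence": 0.2}

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def enqueue(self, video: Video, category: str, score: float) -> None:
        self.pending.append((video.video_id, category, score))

def screen(video: Video, queue: ReviewQueue) -> str:
    """Auto-approve low-risk videos; route red-flagged ones to human review."""
    worst = max(video.scores, key=video.scores.get)
    if video.scores[worst] >= FLAG_THRESHOLD:
        queue.enqueue(video, worst, video.scores[worst])
        return "pending_human_review"
    return "approved"

if __name__ == "__main__":
    queue = ReviewQueue()
    clip = Video("abc123", "Example clip", {"violence": 0.9, "hate_speech": 0.1})
    print(screen(clip, queue))  # pending_human_review
    print(queue.pending)        # [('abc123', 'violence', 0.9)]
```

The point of the sketch is simply that the algorithm only triages: anything above the threshold still goes to a human reviewer for context, as the paragraph above describes.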
To ensure that content displayed on its platform is suitable for children, YouTube has recently announced additional measures such as age restrictions and sorting content into age-appropriate categories. It has also provided parents with a series of tools to monitor their children's online activities and keep them safe while they are online.
These changes aim to provide a safer online environment for kids by reducing their access to inappropriate content or other potentially dangerous situations.
Advertising Practices
There is growing concern about the safety of children using YouTube and other social media platforms. Given that YouTube hosts a wide range of content, from educational videos to live broadcasts, parents must understand the potential risks of their children viewing inappropriate material.
Regarding advertising practices specifically, YouTube has had a long-standing policy, in place for over thirteen years, that prohibits advertising for products that are unsuitable or dangerous for children. In addition, advertising on the platform must adhere to standards YouTube sets under the FTC Act and the Children's Online Privacy Protection Act (COPPA). Advertisers are prohibited from redirecting kids who view ads on YouTube or other sites, from collecting their personal information without parental consent, and from running videos that contain unsubstantiated claims about products, services or health treatments – all of which can manipulate a child's impressionable mind.
Furthermore, YouTube seeks to protect minors from inappropriate ads by restricting where age-restricted content can be shown. It also enforces its data-collection policies for child-directed content and requires privacy notices written in language appropriate for all age groups, so that parents and guardians receive accurate information about how the company uses and protects this data, along with other important disclosures.
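As a rough illustration of how a rule like this might be applied, the following Python sketch gates behaviourally targeted ads on child-directed content behind verified parental consent. The field names and logic are hypothetical simplifications for the example, not YouTube's or the FTC's actual compliance machinery.

```python
from dataclasses import dataclass

@dataclass
class AdRequest:
    child_directed: bool        # is the video or channel made for kids?
    uses_personal_data: bool    # would the ad rely on behavioural targeting?
    has_parental_consent: bool  # is verified parental consent on file?

def may_serve_personalized_ad(request: AdRequest) -> bool:
    """Block behavioural targeting on child-directed content without consent."""
    if request.child_directed and request.uses_personal_data and not request.has_parental_consent:
        return False
    return True

if __name__ == "__main__":
    request = AdRequest(child_directed=True, uses_personal_data=True, has_parental_consent=False)
    # False -> an ad server following this rule would fall back to contextual, non-personalised ads
    print(may_serve_personalized_ad(request))
```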
Parental Controls
Parental control tools on YouTube give parents ways to help keep their children safe from potentially unsuitable content. Typically these include options such as blocking age-inappropriate content, setting a limit on daily viewing time, monitoring recent viewing history and reviewing their child's online activity.
Additionally, third-party programs and extensions allow parents to block specific channels and videos that could be considered unsuitable for kids. It can also be beneficial to use child profiles on the platform so that children are only exposed to content that is suitable for them. With these controls in place, it's important for parents to also talk with their kids about what they're seeing online, so they know how to make smart decisions about their safety on the platform.
TikTok
TikTok has become a popular social media platform for teenagers, but it has also come under fire from parents and lawmakers concerned about its impact on kids’ safety.
Congress is planning to call representatives from YouTube, TikTok and Snap to testify about the potential dangers these platforms may pose.
In this article, we will take a closer look at the impact of TikTok on kids’ safety.
Content Moderation
Content moderation on social media platforms is important for ensuring kids' safety and protecting them from harmful content. As one of the largest video-sharing platforms, TikTok has worked hard to implement robust policies and moderation protocols to ensure the platform remains a safe space for all users.
TikTok ensures safety and protection by using automated content- and image-recognition systems that can detect inappropriate material, including offensive language, underage users, attacks on individuals or groups, bullying and hate speech. The platform also enforces a "zero-tolerance" policy for inappropriate content and continuously adds protection measures to safeguard users from potential harm or exploitation.
In addition to these technical safety features, TikTok has a reporting system that allows people to flag any posts they find concerning or inappropriate. An entire team of trained moderators can then take action on any reported material to make sure everything stays within the boundaries of the community guidelines. The platform also gives its users various settings for interacting with others: limiting who can message them directly (only friends or no one), controlling who can see their account (public or private), and blocking people they don't want to communicate with.
These safety measures give TikTok's users a more secure environment in which they can create videos without being targeted by malicious actors or exposed to inappropriate material.
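The sketch below models, in simplified Python, the kinds of per-account messaging controls, block lists and user reporting described above. The class names, policy values and functions are invented for illustration and do not correspond to TikTok's actual code or API.

```python
from dataclasses import dataclass, field
from enum import Enum

class MessagePolicy(Enum):
    EVERYONE = "everyone"
    FRIENDS_ONLY = "friends_only"
    NO_ONE = "no_one"

@dataclass
class SafetySettings:
    private_account: bool = True                     # controls who can see the account
    message_policy: MessagePolicy = MessagePolicy.FRIENDS_ONLY
    blocked_users: set = field(default_factory=set)  # people the user has blocked

    def can_message(self, sender_id: str, is_friend: bool) -> bool:
        """Apply the block list first, then the direct-message policy."""
        if sender_id in self.blocked_users:
            return False
        if self.message_policy is MessagePolicy.NO_ONE:
            return False
        if self.message_policy is MessagePolicy.FRIENDS_ONLY:
            return is_friend
        return True

@dataclass
class Report:
    post_id: str
    reporter_id: str
    reason: str  # e.g. "bullying", "hate_speech"

def file_report(moderation_queue: list, report: Report) -> None:
    """Reported posts land in a queue for trained moderators to review."""
    moderation_queue.append(report)

if __name__ == "__main__":
    settings = SafetySettings()
    settings.blocked_users.add("user42")
    print(settings.can_message("user42", is_friend=True))  # False: blocked
    print(settings.can_message("user7", is_friend=True))   # True: friends only
    moderation_queue = []
    file_report(moderation_queue, Report("post9", "user7", "bullying"))
    print(len(moderation_queue))                            # 1
```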
Advertising Practices
While many kids use TikTok simply to watch videos, the platform's advertising can also affect their safety.
TikTok is known for its rapidly growing user base, but it also carries significant advertising. Advertisements appear everywhere – in videos and even while scrolling through the feed – which can be a risk factor for children. For example, the Bright Near Me app, which allows users to purchase products and services in their local area, has been advertised on TikTok multiple times. If kids cannot distinguish between advertisements and regular content, they can be easily targeted by unscrupulous advertisers.
Ads can also emotionally affect younger users, who may not be able to process them objectively or make mature purchasing decisions. This presents a big challenge for parents trying to protect their children from negative messaging that could increase their vulnerability online. While brands create ads using sophisticated algorithms that tailor content to specific age groups, it's still important for parents to monitor the ads their children see and set appropriate boundaries around ad viewing.
Parental Controls
For parents concerned about the content their children can access on social media platforms such as TikTok, some options are available regarding parental controls. However, it’s important to note that while the platforms offer controls to limit certain behaviours, they might not be effective in preventing kids from being exposed to inappropriate content.
On TikTok, there is a setting that allows parents to limit when and for how long each account can be used. This can be especially useful for younger kids who struggle with using devices for extended periods. Parents can also opt in to search filters that keep specific terms and inappropriate topics out of search results and the comment sections of posts.
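As a simple illustration of how a daily screen-time limit like this could work, the following Python sketch tracks minutes of use per day and locks the app once a cap is reached. The 60-minute cap and all names are assumptions made for the example, not TikTok's implementation.

```python
from datetime import date

DAILY_LIMIT_MINUTES = 60  # assumed cap for the example

class ScreenTimeTracker:
    def __init__(self, limit_minutes: int = DAILY_LIMIT_MINUTES):
        self.limit = limit_minutes
        self.usage = {}  # date -> minutes used that day

    def record(self, day: date, minutes: int) -> None:
        """Add a session's minutes to the running total for the day."""
        self.usage[day] = self.usage.get(day, 0) + minutes

    def is_locked(self, day: date) -> bool:
        """Once the daily limit is reached, further use would require a parent's passcode."""
        return self.usage.get(day, 0) >= self.limit

if __name__ == "__main__":
    tracker = ScreenTimeTracker()
    today = date.today()
    tracker.record(today, 45)
    print(tracker.is_locked(today))  # False
    tracker.record(today, 20)
    print(tracker.is_locked(today))  # True
```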
To ensure safety and security, viewers of all ages can also report potentially dangerous activities or behaviour, such as cyberbullying, directly to TikTok's moderators, who investigate reported cases and take action accordingly. In addition, users under 18 have restricted access to direct messages from other accounts unless approved by a parent or guardian.
Snapchat
In the past several years, social media platforms like Snapchat, YouTube, and TikTok have grown immensely in popularity. These platforms have made it easier for kids to communicate with others, but the rise in usage also raises questions about kids’ safety on these platforms.
In this article, we will focus on Snapchat and how kids’ safety is impacted by its usage.
Content Moderation
Content moderation is a major concern for social media platforms like Snapchat. The issue becomes especially important for online safety and youth, as a large share of Snapchat's users are kids and teens.
Snapchat employs dedicated teams of content moderators who constantly monitor photos, videos and other content posted by users. This includes flagging inappropriate images or posts that might pose a risk to minors and moderating cyberbullying, hate speech, malicious activity and graphic content. The platform also relies on automated tools and reporting systems for quicker identification and removal of inappropriate material.
Although some might argue that its efforts have been ineffective recently – especially regarding cyberbullying – Snapchat has implemented several measures to improve user safety. These include introducing age gates into games, suspending accounts that post graphic material or engage in cyberbullying, and flagging obscene images or words before they become visible to other users. Moreover, Snapchat has updated its Privacy Policy so that minors under 16 need parental consent before creating an account on the platform.
Advertising Practices
Snapchat is unique among mainstream social media platforms in how it allows advertisers to connect with users. It was one of the first social networks to use a "mixed-content" monetisation model: pricing per impression, revenue from brand partnerships and money from digital advertising. Snapchat allows brands to target ads based on user demographics, interests and device type, and to provide exclusive content such as sponsored geofilters.
Advertisers also have the option of using Snapchat ads, which appear either between stories or in a special ad section called "Discover". Snapchat also has its own mobile advertising platform – the Snapchat Ads API – which enables advertisers to buy sponsored lenses or geofilters for their campaigns. Advertisements for products such as tobacco or e-cigarettes are not allowed on the app; however, there is no mechanism to prevent underage users from stumbling across ads that may be inappropriate for their age group.
More information regarding Snapchat’s advertising practices can be found here: https://business.snapchat.com/en-US/advertisers/
Parental Controls
Snapchat is a photo- and video-sharing social networking platform that allows users to send temporary multimedia messages. Though initially targeted at teens and young adults, it is now becoming increasingly popular among children. Unfortunately, with this increase in popularity come certain risks for children, such as cyberbullying, inappropriate content and contact with strangers.
Fortunately, Snapchat does offer some parental control features that can help parents manage their child's activity on the platform and better ensure their safety. For example, parents can monitor their child's activity by reviewing sent and received Snaps, restrict who their child can communicate with, disable the Snap Map feature, prevent adult content from appearing on their device, limit the number of Snaps they can send per day and adjust general privacy settings.
Although these parental controls may be helpful in some cases, they are not foolproof and cannot guarantee complete protection against all potential risks associated with using Snapchat. Therefore, parents must remain informed about how this social network works to better protect their children’s online safety.