
Children’s online safety is under scrutiny from parents, governments, regulators, and children themselves, driven by widespread negative experiences on social media platforms. In the US, President Biden has called for stricter regulation of tech companies, while the UK has gone a step further, introducing draft rules that will hold tech companies accountable for keeping children safe online. Drafted under the Online Safety Act, the rules set out 40 practical steps for social media platforms, including age verification and improved content moderation to filter out harmful material.

The new UK rules aim to prevent children from encountering age-inappropriate material on social media platforms by enforcing stricter age verification and filtering out harmful content relating to pornography, self-harm, suicide, and eating disorders. Tech companies that fail to comply could face fines, and their executives could even face jail time. While the rules apply only in the UK, the hope is that the severity of the sanctions will prompt social media platforms worldwide to take children’s safety more seriously.

Social media platforms have long been criticized for contributing to mental health problems among children, with tragic cases such as the death of teenager Molly Russell underscoring the need for stronger protections. Tech companies have improved moderation and introduced safety features, but significant challenges remain: children continue to encounter harmful content and face risks of exploitation by adults on these platforms, reinforcing the case for stronger regulation and safeguards.

In developing the new rules, Ofcom consulted more than 15,000 children about the changes they would like to see on social media platforms. Their feedback shaped measures such as the ability to decline group chat invitations and to control the types of content that appear in their feeds. After the consultation period, Ofcom plans to publish the child safety code within a year, and tech companies will then have three months to assess and report on the risks their services pose to children.

As the rules take effect, Ofcom will continue to engage with young people to evaluate the effectiveness of the measures tech companies introduce and to address emerging threats. The regulator also plans further consultations on the risks generative AI may pose to children, and it wants to ensure that channels for reporting concerns to tech firms are improved and genuinely responsive to children. By prioritizing children’s online safety and collaborating with stakeholders, regulators hope to create a safer digital environment for young users worldwide.
