The advancement of social media regulation and how it is protecting young people

Concerns about social media have been around for almost as long as the platforms themselves, but they have reached new heights recently. The high-profile suicide of 14-year-old Molly Russell reignited calls for greater policing of social media companies after it was reported that her Instagram account contained distressing content on depression and suicide. Her father also partly blames Instagram (and its owner Facebook) for her death.

The Momo challenge recently caused shockwaves on social media, with parents reporting that their children were terrified by the character: a distorted female face belonging to ‘Momo’, who sends young people messages encouraging them to harm or even kill themselves. Indeed, the furore surrounding Momo grew to such a height that the original maker (who created Momo as a puppet) ‘killed’ the character.

Unfortunately, these cases are not isolated. Harmful behaviour on social media is occurring every day. As UK Digital Minister Margot James explains, “There is far too much bullying, abuse, and misinformation, as well as serious and organised crime online. For too long the response from many of the large platforms has fallen short.”

So, what are the social media platforms doing to regulate online behaviour?

How the social media giants are fighting back

Efforts to protect young people online vary from platform to platform. At the moment, there is a voluntary code of conduct that many platforms adhere to, usually reassessed whenever a case like Molly Russell’s occurs. Following her death, Instagram banned graphic self-harm images, a move that some parties say was too little, too late. “It should never have taken the death of Molly Russell for Instagram to act,” NSPCC chief executive Peter Wanless stated in response to Instagram’s actions.

Sometimes, social media companies only address potentially harmful behaviour when it directly hits their bottom line. Video platform YouTube recently came under fire for allowing networks of paedophiles to form in the comment sections of videos featuring children, many of which carried brand advertising. When news of the scandal broke, brands such as Hasbro, Disney and Nestlé swiftly stopped advertising on the platform. YouTube has now disabled commenting on most videos featuring children or young people.

A monumental challenge to manage

This highlights the huge challenge that social media companies face. Every platform is different, and each presents different online behaviours to police. Plus, as YouTube discovered, humans can be remarkably innovative in their exploitation of online platforms.

There have been consistent calls from governments, NGOs and the public for greater policing of social media platforms, because current efforts aren’t enough. However, there is no silver-bullet solution that will immediately protect all young people online.

Facebook hires an army of content moderators across the globe to vet its content, but even this approach has been criticised. Firstly, there are the opaque and confusing content guidelines that moderators have to adhere to: rules that (rightly) ban videos of murder, yet leave up posts stating that “Autistic people should be sterilised”. Secondly, the support and treatment of content moderators are under scrutiny, with many reporting PTSD-like symptoms after only a few months on the job.

Naturally, if the social media giants are struggling to police their own platforms, what hope is there for government bodies?

How governments are responding

Government actions to protect young users online range from individual politicians calling for greater oversight of social media companies, to fines and codes of practice. Video-sharing app TikTok, for example, was recently fined by the U.S. Federal Trade Commission after it was found to be illegally collecting the data of children under 13. Facebook has faced similar scrutiny over its collection and use of personal data, with several GDPR-related cases pending against the tech giant.

Social media companies also have the option of following a set of child safety recommendations set out by the UK government around a decade ago. Of course, these recommendations are now outdated and don’t take into account recent cases of harmful behaviour online. Some bodies, such as the NSPCC, want the government to go a step further by making social media platforms sign up to a mandatory code of practice, with fines issued for any breach of the code.

Indeed, financial consequences for infringing child safety may be the only way to ensure effective policing of social media.

Taking a leaf from the Middle East and China

Another option could be to impose punitive measures on social media companies and any individuals guilty of breaching the code of conduct. In the UAE, users can be fined or jailed for any number of offences, including posting anti-Islamic content, spreading rumours about another user via social media, and posting photos of people online without their permission. Momo would have been stopped in her tracks if sharing her messages resulted in a fine or jail time.

Some governments may take the dramatic step of shutting off social media completely. China is well known for limiting access to (or completely banning) social media platforms like Facebook and Twitter. As a result, domestic alternatives such as WeChat and Weibo have been developed, and these are more heavily policed by the Chinese government than their international counterparts. Admittedly, though, many of these restrictions concern China’s politics more than the welfare of its children.

Room for improvement

Many efforts to date have been ad hoc, reacting to public outcry or events like Molly Russell’s death. It’s evident that much more can be done by social media companies and legislators to protect young people online.

To do this, governments and social media companies need to work together. Legislators need to understand each platform inside out, including its potential for harm and any internal limitations. Recognising this, some companies (including Facebook and Twitter) are reaching out to guide politicians on regulating online behaviour.

It’s clear that current ways of protecting young people aren’t working. Only time will tell what regulations will be developed to keep youngsters safe online. But they had better come soon, before another Momo leaps out of social media’s shadows.