Why banning kids from social media won’t solve the youth mental health crisis

Sydney (The Conversation): Banning children under the age of 16 from social media sounds like an appealing idea. For parents worried about their children’s lives in the digital age, the Australian government’s move may come as a relief.
But the evidence suggests it is very unlikely that a ban will have a positive effect on youth mental health. In fact, restrictions may make our children even more vulnerable online.
Children and young people go online primarily to socialise with their peers. Online spaces are one of the few ways for our busy children to communicate openly with each other, which is vital for their wellbeing.
Banning social media would close this door and force children into lower-quality online environments. Children are already saying that adults don’t understand what they do online and that there aren’t enough resources to support them.
A blanket ban confirms that parents “don’t get it”. Kids will find ways to get around the ban. And if something goes wrong while they are on social media, knowing they shouldn’t be there will make it even harder for them to reach out to adults for help.
Importantly, calls for blanket bans – which are challenging to enforce – push tech platforms into “compliance mode”. They divert company resources away from designing better online environments for children and towards litigation.
What should we do in place of a ban?
Protecting our children online is a collective responsibility. We can take constructive steps, but this requires greater collaboration between governments, industry, the community sector, parents, carers, educators, researchers and children and young people.
All children learn by taking risks and making mistakes. The focus needs to be on eliminating online harms and preparing children and their caregivers to deal confidently with the digital world.
Stronger regulation is part of the solution. But making the internet a better place for kids – not just banning them – is the best protection we can provide.
So, what would that look like?
One way to do this is through the principle of safety by design. Popularised internationally by Australia’s eSafety Commissioner, safety by design means building safety features into the DNA of tech products and platforms.
Here, we should take inspiration from the children themselves. They are urging platforms and governments to do several things:
– Provide privacy for minors by default
– Provide standardised, easily accessible and well-understood reporting processes across platforms
– Use AI to detect malicious adults attempting to interact with children.
Children also want to know what data is collected from them, how it is used, by whom and for what purpose.
They are also calling for safety-by-design features that keep sexual, violent and other age-inappropriate content out of their feeds.
All of these steps will help reinforce what they are already doing to look after themselves and others online – such as being cautious when interacting with people they don’t know, and not sharing personal information or images online.
Not only safe, but also optimal
Safety by design is not the whole solution. Building on efforts to develop industry codes, industry and government should work together to develop comprehensive standards that provide not only a safe, but also an optimal, digital environment for children.
How? High-quality, child-focused evidence can help major platforms develop industry-wide standards that define what types of content are appropriate for children of different ages.
We also need targeted education that builds children’s digital capabilities and prepares them to cope with, and thrive in, online environments.
For example, rather than education that focuses only on extreme harms, children are asking for online safety education in schools and elsewhere that helps them manage the low-level, everyday risks they face online, such as conflicts with friends, inappropriate content or feeling excluded.
Focus on the evidence
Some official, evidence-based guidance already exists. It tells us how to make sure children can minimise the potential harms and maximise the benefits of the digital environment.
Where evidence doesn’t yet exist, we need to invest in child-centred research. This is the best way to gain nuanced insights into children’s digital practices, and can guide a coherent and strategic long-term approach to policy and practice.
Drawing lessons from the COVID pandemic, we also need to better align evidence with decision-making processes. This means accelerating high-quality, robust research and finding ways for research to better anticipate emerging challenges and generate evidence, so governments can assess the benefits and harms of particular policy actions.
Technology is not beyond our control. Rather, we must decide together what role we want technology to play in childhood.
We need to move beyond a protectionist approach and work with children to create the best digital environment we can imagine. Doing so will not put their futures at risk.



