Empowering us to stay safer online

Q&A with Former Cabinet Minister and GoBubble Board Public Policy Advisor, Lord Jim Knight


The Government has announced that ‘tweaks’ will be made to the Online Safety Bill around ‘legal but harmful’ content as it relates to adults, while reiterating its commitment to protecting under-18s from online harm.

The Online Safety Bill is expected to return to Parliament next month. Newly re-appointed Culture Secretary Michelle Donelan has said:

“We want it in law as soon as possible to protect children when accessing content online.”

As we have seen from a recent report on young people’s digital experiences, online abuse is an ever-increasing problem. Making the internet safer for children and young people, through clear guidance and regulation, is of the utmost importance.

GoBubble team member Laura Watson spoke to former Cabinet Minister and GoBubble Board Public Policy Advisor, Lord Jim Knight. He was a member of the committee that reviewed the Online Safety Bill and heard evidence from people who have experienced online hate, including former England footballer Rio Ferdinand, as well as from the platforms themselves.

Laura and Jim discuss technological advances in content moderation, its use as a tool for protecting people from digital harm, and why GoBubble’s Emotion AI is leading the way in the global market.


The UK’s Online Safety Bill is a hot topic again, with discussions about legislation to protect children and young people from digital harm at the forefront. Research tells us that the risk of online harm has drastically increased since the pandemic, but so has awareness of the issue. How do you think the landscape has changed, and what else can be done to tackle the problem?

JK: “The good news is that the narrative around digital safety is changing and that direct action is being taken in law, but progress also requires a collaborative and coordinated effort: the government working with the platforms, content moderation providers, and families and safeguarding teams within education and youth sports settings. Together we need a clear objective to minimise the risk of digital harm for young people.

“For many parents, guardians and teachers, social media was not part of their childhood, and navigating multiple social channels can be overwhelming. The temptation is to avoid it, or to give in to social pressure with a limited understanding of the many platforms. Many use different platforms from their children or pupils, sharing photos on Facebook, for example, while the child is already uploading video content to Instagram, Snapchat or TikTok.

“We know that children learn to use new technology quickly and find ways around safety measures and restrictions in no time at all, usually by sharing information at school, between siblings, and during sports activities and extra-curricular groups. So we have to keep our eyes open. The rise in screen time since the pandemic also makes it far more likely that they’ll encounter some level of abuse online.

“The platforms thrive on a business model based on advertising. The more users engage, for longer, with more content, the more advertising they sell. Their content algorithms are ruthless in pursuing this objective. Sadly, outrageous, shocking content is especially engaging, and the algorithms know that. Harmful people know it too, and this is the core of the problem.

“One thing that can help parents and those in positions of authority within schools and sports settings is to work together to keep children off certain social channels for as long as possible, to be mindful of age limits and content restrictions, and to ensure that social media profiles are not set up as adult accounts.”

Recent research from Ofcom tells us that one in three children lie about their age to access adult content online: 32% have an account intended for adults, and 47% of those aged 8-15 have a user age of 16 or over. It also notes that many children receive help from their parents to set up social media accounts, with parents’ motivations including ensuring their child does not miss out.

JK: “Understandably, it can be a difficult dilemma for parents and guardians who want to respect their child’s privacy but are keen to monitor their online behaviour and protect them from potentially harmful content. Parents must take responsibility when allowing children to access apps that have clear age limits, given the nature of the content available within them, and gain a greater understanding of what is appropriate for their child’s age. I am regularly shocked at how many primary school children use social media. Why do young children need to use these risky tools at all?

“Unfortunately, those classed as young adults, who sit between age brackets, can also be vulnerable to offensive and potentially harmful content. The age assurance process will therefore be an important consideration within the Online Safety Bill.

“We must continue to educate and upskill parents around digital safety by raising awareness of information websites and organisations, as well as the tools that are out there to protect young people online, including content moderation services that support and empower them.”

An Ofcom report revealed that two-thirds of teens and young adults have recently encountered at least one piece of harmful content online. The most commonly cited potential harm was offensive or bad language (28%), while 14% had witnessed abusive behaviour or violent, hateful, offensive or discriminatory content. How is this raised level of online hate and digital harm affecting the mental health of young people?

JK: “It’s undeniable that the problem is widespread, and its impact is damaging mental health and even causing loss of life among young people. I was very moved by the testimony and outcome at the Molly Russell inquest.

“We can compare the problem to the effect of a bus full of children shouting at and bullying an individual; with social media, it becomes 20 or 2,000 buses full of such direct abuse, shouting 24/7. A share or a retweet is also easier to action than a carefully crafted, deliberately toxic post, but it can still have a powerfully negative effect, as social channels amplify the content within seconds.

“This makes it hard for a young person to escape, and unfortunately the after-effects of such abuse on an individual’s emotional health can be debilitating and long-lasting. So it’s important that we’re transparent with young people and teach them about building positive communities online, and about the business model of social media platforms, explaining the monetisation of toxic content and the wider implications of sharing it.

“Reducing the amount of toxicity and abuse across social media doesn’t limit freedom of speech; instead, it brings about greater freedom, enabling people to post and share content without fear or threat. The potential for safer, more supportive digital spaces can be transformative.”

In the mission to protect young people from online abuse and bullying, content moderation can be a powerful tool to help them stay safer online. You work closely with SaaS company GoBubble, which provides AI-led content moderation. What is it about this solution that stands out to you, and why have you chosen to get involved?

JK: “There have been so many advances in the digital safety space and within content moderation in recent years. It’s no longer just about spotting keywords, because online language patterns are nuanced and changing rapidly. GoBubble’s Emotion AI scans social media in 21 languages across text, image, video, audio and emojis to identify and block toxic and potentially harmful content.

“This AI-led approach is taking content moderation to the next level, analysing content in far greater depth by looking at the social and cultural context of posts, and reviewing sentiment and author interaction using patent-pending Emotion AI technology.


“In effect, we have algorithms with the objective of minimising harm working as a shield against algorithms with the objective of maximising engagement.


“GoBubble empowers organisations and individuals to manage the content they see on their own social media channels and platforms, and supports their positive mental well-being. The machine-led approach also reduces the level of human moderation happening in this space, which not only increases the efficiency of the service and reduces the amount of inherent bias within the moderation process, but also has a positive impact on the mental health of content moderators, who are exposed to toxic content less regularly.”
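For technically minded readers, here is a minimal sketch of the “shield” idea Jim describes: score every post for toxicity before it is displayed, then decide whether to allow it, queue it for human review, or hide it. It is an illustration only, not GoBubble’s patent-pending Emotion AI: the function names (`moderate`, `stub_classifier`) and the thresholds are hypothetical, and the crude keyword stub merely stands in for the kind of trained, context-aware model described above.

```python
# Illustrative sketch only -- not GoBubble's Emotion AI. It shows the general
# shape of a "shield" moderation layer: score each post for toxicity, then
# decide whether to show it, queue it for review, or hide it before it is seen.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "allow", "review", or "hide"
    score: float  # toxicity score in [0, 1]

def moderate(
    text: str,
    score_toxicity: Callable[[str], float],
    hide_above: float = 0.9,    # hypothetical thresholds; a real system
    review_above: float = 0.6,  # tunes these per community and per language
) -> Decision:
    """Score a post and map the score to a moderation action."""
    score = score_toxicity(text)
    if score >= hide_above:
        return Decision("hide", score)
    if score >= review_above:
        return Decision("review", score)
    return Decision("allow", score)

# A deliberately naive stand-in for a trained model. A production system
# would swap in a multilingual classifier that also weighs social context,
# sentiment and author interaction, as described in the answer above.
def stub_classifier(text: str) -> float:
    toxic_markers = {"idiot", "loser", "hate you"}
    hits = sum(marker in text.lower() for marker in toxic_markers)
    return min(1.0, hits / 2)

if __name__ == "__main__":
    for post in ["Great goal today!", "You absolute idiot, we hate you"]:
        decision = moderate(post, stub_classifier)
        print(f"{decision.action:>6}  ({decision.score:.2f})  {post}")
```

Notably, the keyword-matching stub is exactly the approach Jim calls outdated; the point of the sketch is the decision layer wrapped around it, which stays the same however sophisticated the underlying classifier becomes.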

So, how was this Emotion AI technology created and how long has it been in existence?


JK: “GoBubble created its own AI to address the ever-increasing issue of online abuse. It was built by experts in digital safeguarding, law enforcement, and online trust and safety, whose experience comes from big tech companies such as Google, Facebook and Twitter. The company has been in existence for nearly 10 years and began in EdTech, which is how I became aware of the product and the established values of the organisation.

“Developed initially with schools over several years to flag online abuse and bullying, and to reward positive online behaviour and kindness, the technology has positively impacted the well-being of more than 1 million young people across the globe. The service is already used by top international sports teams, brands and multinational corporations, as well as major platforms and online communities that integrate the technology within their own digital ecosystems, across a wide variety of sectors.”

What do you think the focus will be now, as the Online Safety Bill approaches review in the coming weeks, particularly regarding children and young people?

JK: “No one can find their way around tech better than children and teenagers; their language patterns, online culture and behaviours are developing at a mind-blowing rate, so the technology has to keep up. Platforms need to be held responsible by a powerful new regulator, but we must also make the most of moderation tools.

“The tools that we advocate to support and empower young people and adults must be robust, yet dynamic enough to adapt to the pace of cultural and social change and the effects that change can have on our language and behaviour. AI offers that. It isn’t 100% accurate, but a tool such as GoBubble’s Emotion AI provides 90% accuracy and is continually learning, so it offers the most compelling answer to the problem.

“What is needed now is clear guidance and transparency around content moderation to ensure the most vulnerable in our society are protected and our right to freedom of speech and expression online is uncompromised.”