The UK news was led today by Ian Russell, the father of 14-year-old Molly Russell, calling for urgent action to protect children online after an inquest found that social media content contributed "more than minimally" to her death.
The Coroner concluded that Molly died from an act of self-harm while suffering from depression and the negative effects of online content she viewed, content which "shouldn't have been available for a child to see".
This should not only send shock waves through corporate boardrooms; it must be a clarion call for those working with new technologies to implement new solutions immediately.
The benefits of AI in Web3 models are much discussed, and in terms of protection they do not stop at general cybersecurity. AI's pattern-detection capabilities enable effective identity verification. For example, by combining AI with know-your-customer (KYC) technologies, companies can implement safety processes that analyse digital footprints in real time and filter sensitive keywords, deterring malicious activity on Web3 systems.
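In its simplest form, the keyword-filtering step described above could look like the following Python sketch. The term list, function name, and return format are hypothetical placeholders, not any specific platform's implementation; a production system would pair this with ML classifiers and human review:

```python
# Minimal sketch of real-time keyword screening for a content
# moderation pipeline. SENSITIVE_TERMS is an illustrative placeholder;
# real deployments maintain curated, regularly updated term lists.
SENSITIVE_TERMS = {"example-harmful-phrase", "example-scam-link"}

def screen_content(text: str, terms=SENSITIVE_TERMS) -> dict:
    """Return which sensitive terms, if any, appear in a piece of content."""
    lowered = text.lower()
    hits = sorted(t for t in terms if t in lowered)
    return {"flagged": bool(hits), "matches": hits}
```

A post flagged by a filter like this would typically be held back for automated classification or human moderation rather than blocked outright, to balance safety with free expression.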
This is the promise of protection, but it does not feel as if the industry is taking steps, let alone urgent steps, to adopt these safeguards. Clearly, huge change and re-prioritisation are required to engage these solutions in a way that also safeguards free speech. But in the words of Ian Russell: "It's time to protect our innocent young people instead of allowing platforms to prioritise their profits by monetising the misery of children."