The AI tool protecting Olympians from cyberbullies

An AI system is combing through the vast sea of social media content surrounding the Olympics, with one primary goal: combating online abuse.

The International Olympic Committee (IOC) estimates that the 2024 Summer Olympics will generate over half a billion social media posts, not counting the comments accompanying them. Assuming each post runs about 10 words, that is roughly five billion words, around 6,400 times the length of the King James Bible. Even at one post per second, reading them all would take nearly 16 years.

Many of these posts will mention the names of over 15,000 athletes and 2,000 officials, subjecting them to intense scrutiny during one of the most critical moments of their careers. Alongside the cheering and national pride, there will be hate, coordinated abuse, harassment, and even threats of violence.

Such negative attention poses significant risks to the mental health of Olympians, potentially affecting their performance. To address this, the IOC is testing a new solution: an AI-powered system designed to protect athletes from cyberbullying and abuse whenever someone posts about the Olympics.

Online abuse has increasingly plagued elite sports, with numerous high-profile athletes calling for better protection. American tennis player Sloane Stephens reported receiving over 2,000 abusive messages following a match. Similarly, England footballer Jude Bellingham has spoken out about the racist abuse he and other players frequently endure. The English Football Association recently announced funding for a police unit to prosecute online abusers.

In recent years, the Olympics has emphasized mental health, recognizing the impact of social media on athletes’ well-being, says Kirsty Burrows, head of the Safe Sport Unit at the IOC.

“Interpersonal violence can be both physical and online, and the level of online abuse and vitriol is rising across society,” Burrows explains. While AI isn’t a perfect solution, it plays a crucial role in managing the immense volume of data. “It would be impossible to handle without the AI system.”

Understanding Language

Setting up filters to catch specific harmful words or phrases is straightforward. Language, however, is far more nuanced than any word list can capture. Moderating content at this volume requires a tool that understands context, which is where the latest AI models excel.
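To see why a simple word list falls short, here is a minimal sketch of a keyword filter in Python; the blocked words and sample posts are purely illustrative:

```python
# A minimal keyword filter: easy to build, but blind to context.
# The word list and sample posts are illustrative, not from any real system.
BLOCKED_WORDS = {"loser", "pathetic", "quit"}

def keyword_flag(post: str) -> bool:
    """Flag a post if it contains any blocked word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not words.isdisjoint(BLOCKED_WORDS)

print(keyword_flag("You should just quit, you pathetic loser."))  # True: caught
print(keyword_flag("Everyone says quit, but prove them wrong!"))  # True: false positive
print(keyword_flag("Hope your plane home crashes."))              # False: a threat slips through
```

The second and third examples show the two failure modes: a supportive message that happens to contain a blocked word gets flagged, while a genuine threat phrased in neutral vocabulary sails through.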

Large language models, the technology behind tools such as ChatGPT, learn to process and generate language by analyzing patterns across enormous text datasets. That training lets them discern the sentiment and intent behind a message even when it contains no obviously harmful words. The IOC’s system for identifying online abuse, called Threat Matrix, is designed for exactly this purpose.
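The article does not describe Threat Matrix’s internals, so as a rough stand-in, the sketch below uses the Hugging Face transformers library with unitary/toxic-bert, a publicly available toxicity classifier. The model choice and sample posts are assumptions for illustration, not the IOC’s actual stack:

```python
# Context-aware abuse detection with a pretrained transformer classifier.
# unitary/toxic-bert is a public toxicity model used here as a stand-in;
# the IOC's Threat Matrix is a separate, proprietary system.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "What a comeback, so proud of this team!",
    "Hope your plane home crashes.",  # threatening, yet contains no slur
]

for post in posts:
    result = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    print(f"{result['label']:>8} {result['score']:.2f}  {post}")
```

Unlike the keyword filter above, a model like this scores the whole sentence, so a threat expressed in everyday words can still be flagged.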

During the games, Threat Matrix will monitor social media posts in over 35 languages in collaboration with platforms like Facebook, Instagram, TikTok, and X. It will detect abusive comments directed at athletes, their teams, and officials at the Olympic and Paralympic Games, although individuals can opt out if they wish. The system will classify different types of abuse and alert human reviewers.

“The AI does most of the heavy lifting, but human triage is critical,” says Burrows.

When the system flags an issue, a rapid response team will review the posts for context that the AI might miss and then take appropriate action. This could involve reaching out to the victim to offer support, requesting the removal of posts that violate social media policies, or contacting law enforcement for more serious threats. Often, this intervention occurs before the athlete even sees the harmful content, according to the IOC.
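As a rough illustration of that flag-and-triage flow, here is a short Python sketch; the severity tiers and response actions are assumptions inferred from the article’s description, not the IOC’s documented workflow:

```python
# Illustrative triage routing: the AI flags and classifies, humans decide.
# Severity tiers and actions below are assumptions, not the IOC's workflow.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = "low"            # e.g. a generic insult
    HIGH = "high"          # e.g. targeted, sustained harassment
    CRITICAL = "critical"  # e.g. a threat of violence

@dataclass
class FlaggedPost:
    text: str
    target: str          # the athlete or official the post is aimed at
    severity: Severity   # assigned upstream by the AI classifier

def triage(post: FlaggedPost) -> str:
    """Route an AI-flagged post to the appropriate human-led action."""
    if post.severity is Severity.CRITICAL:
        return "contact law enforcement"
    if post.severity is Severity.HIGH:
        return "request platform takedown and offer support to the target"
    return "hold in queue for human review"

example = FlaggedPost("Hope your plane home crashes.", "athlete_a", Severity.CRITICAL)
print(triage(example))  # contact law enforcement
```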

Control F5 Team
Blog Editor