YouTube is getting better at detecting, and faster at removing, terrorist-related content on its video-hosting platform, thanks to machine learning.
On its official blog, the company reported that the use of machine learning has doubled the number of extremism-related videos it has pulled down from the site.
In the past month alone, YouTube said 75 per cent of videos removed for violent extremism were taken down before being flagged by a human.
“While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed,” YouTube said.
The Google-owned company said the AI-powered technology is helping it quickly scale its efforts. But the robots still face significant challenges, as 400 hours of content are uploaded to YouTube every minute.
In June, Google announced four steps aimed at bolstering its counter-terrorism efforts, with new measures to identify and remove extremist propaganda on YouTube. These changes included amping up the use of robots to identify and take down terrorism-related videos.
YouTube has faced criticism in the past for hosting videos containing hate content and for failing to remove advertisements from those videos. Earlier this year, the video-hosting platform was pushed to clarify its own definition of hate speech, establishing new categories and changing monetization rules. At the same time, on a site where the right to free speech is eternally debated, these initiatives also face intense scrutiny.
Other internet and social media companies have come under political pressure to tackle online terrorism and extremism. In response, Facebook, Microsoft, Twitter and YouTube teamed up to create the Global Internet Forum to Counter Terrorism.
The forum held its inaugural workshop yesterday in San Francisco, bringing together representatives from the tech industry, government and NGOs to formalize their collaborative goals and begin discussing joint strategies.