Google To Use AI, Human Experts To Fight Online Extremism

Silicon Valley has come to the conclusion that it will take a combination of advanced technologies and human intelligence to help find and control extremist content online. Google and Google-owned YouTube are the latest companies to embrace this approach, announcing on Sunday that they are taking four new steps to fight terrorist content on the Internet.

The steps the companies plan to take are: ramping up their use of video analysis models and other technology to identify extremist videos; "greatly" increasing the number of experts who flag questionable content on YouTube; toughening their stances on videos that violate content policies; and stepping up their collaborative efforts with other tech companies such as Facebook, Microsoft, and Twitter.

'No Place for Terrorist Content'

"There should be no place for terrorist content on our services," Google general counsel Kent Walker wrote Sunday in a blog post that was also published as an opinion piece in the Financial Times. "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now."

While using technology to identify extremist content "can be challenging," Walker said Google has used video analysis models to help identify more than half of the terrorism-related content it has removed over the past six months. He added that the company plans to "apply our most advanced machine learning research to train new 'content classifiers'" for identifying and removing extremist video content.

Walker added that Google also plans to add 50 more independent non-governmental organizations to the 63 groups already working with YouTube's Trusted Flagger program, and will support them with grant funding.

Google will also put new restrictions on videos with "inflammatory religious or supremacist content," placing them behind a warning...