What are social media companies really doing to combat terrorism online?

With the advance of technology, terrorist groups have begun to not only use, but rely on online resources for the recruitment and spreading of propaganda.

Terrorist operatives primarily target people on social media, which is why the leading platforms sent representatives to a hearing in Washington D.C. earlier this year before the U.S. Senate Committee on Commerce, Science, and Transportation. There they discussed their current efforts to combat terrorism online, particularly on social media platforms.

This was the first time big companies like Twitter, YouTube, and Facebook spoke openly about online terrorism, and here is what they had to say:

Facebook said they were able to remove 99% of harmful content

Facebook’s AI tools have been central to its fight against online terrorism. Thanks to them, Facebook can now recognise and remove 99% of content related to Al Qaeda and ISIS, said Facebook’s head of Product Policy and Counterterrorism, Monika Bickert. The AI software is able to scan video, images and text posts with almost 100% accuracy. Bickert also noted that further improvements to the software were expected.


Users who post terrorist-related content are removed from Facebook’s platform and prevented from creating new accounts. Accounts connected to them are also reviewed by a team of experts. Facebook has added 3,000 people to its review team, which should expand to 20,000 by the end of this year, a sign that the company is taking online terrorism seriously.

Twitter says they are doing more of the same

Twitter has been banning terrorist-related accounts for some time now, and the total has just passed one million. The bans began in mid-2015, and 574,070 accounts were banned last year alone. This was due to improvements in the algorithm that identifies and removes terrorist-related content from Twitter. The technology supplements reports from Twitter users and makes the job easier for the people in charge of removing harmful content from the platform. Twitter will also take a different approach to political campaigns starting this year: some political ad revenue will be donated to charity, and users will be better protected from false information.


YouTube is relying on AI

Machine learning has been playing a key part in removing terrorist content from the internet, and the same is true for YouTube. Its AI is able to remove 98 percent of “violent extremism” videos, up from 40 percent a year ago, and 70 percent of those videos are removed within eight hours. That still leaves room for improvement, but removal times are expected to drop to two hours very soon.

Google is taking this matter very seriously. The AI won’t work alone: 10,000 human flaggers will be added to the review team this year. These staff members will be part of the Trusted Flagger program, which will involve counter-terrorism groups. There are videos that fall into a “grey area”, and YouTube is restricting those as well: such videos cannot earn ad revenue, and their comments are disabled to prevent unwanted discussions.

What should we expect in the future?

The amount of removed content and the number of banned user accounts related to online terrorism have both grown over the past couple of years. This shows that big companies and social media platforms are making an effort, but is it enough? Some people think that removing anonymous accounts from the internet would solve a large part of the problem, and they may well be right. If every platform required an ID check, there would be far less room for hate speech and online terrorist content: a cleaner, regulated space where online terrorism would struggle to survive. But this would also raise questions about online privacy.

Obviously, it is hard to find a balance, so we should lean more heavily on AI, which is maturing and beginning to show better results.
