WHAT ARE SOCIAL MEDIA COMPANIES REALLY DOING TO COMBAT TERRORISM ONLINE?
With the advance of technology, terrorist groups have come not only to use but to rely on online resources for recruitment and the spread of propaganda. Terrorist operatives mostly target people on social media, which is why the leading platforms sent representatives to a hearing in Washington, D.C. earlier this year to speak to the U.S. Senate Committee on Commerce, Science, and Transportation about their current efforts to eliminate terrorism on social media.
It was the first time big companies like Twitter, YouTube, and Facebook spoke openly about online terrorism, and here is what they had to say.
Facebook says it can remove 99% of terrorist content
Facebook’s AI platform has been helpful in the fight against online terrorism. Thanks to it, Facebook can now recognize and remove 99% of content related to Al Qaeda and ISIS, said Facebook’s head of Product Policy and Counterterrorism, Monika Bickert. The AI software screens video, images, and text posts with near-perfect accuracy, and further improvements to it are expected.
Users who post terrorism-related content are removed from Facebook’s platform and prevented from creating new accounts. Other accounts connected to the offender are also investigated by a team of experts. Facebook has added 3,000 people to its review team, which is expected to grow to 20,000 by the end of this year, a clear sign that Facebook is taking online terrorism very seriously.
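None of the companies have published how their classifiers actually work, but the workflow described here, score a post, remove it when the score crosses a threshold, and keep the rest up, can be sketched in a few lines. Everything in this sketch is hypothetical: the phrase list, the scoring function, and the threshold are simple stand-ins for a proprietary machine-learning model.

```python
# Toy illustration of a score-and-remove moderation pipeline.
# The phrase list, scoring rule, and threshold below are hypothetical
# placeholders, not anything a real platform has disclosed.

def score_post(text: str) -> float:
    """Return a naive risk score in [0, 1] based on placeholder phrases."""
    flagged_phrases = ["propaganda", "recruitment"]  # placeholder terms
    hits = sum(phrase in text.lower() for phrase in flagged_phrases)
    return min(1.0, hits / len(flagged_phrases))

def moderate(posts: list[str], threshold: float = 0.5) -> tuple[list[str], list[str]]:
    """Split posts into (removed, kept) using the score threshold."""
    removed = [p for p in posts if score_post(p) >= threshold]
    kept = [p for p in posts if score_post(p) < threshold]
    return removed, kept

removed, kept = moderate([
    "Join our recruitment drive and share the propaganda.",
    "Here are some photos from my vacation.",
])
```

In a real system the scoring function would be a trained model over text, images, and video, and borderline scores would be routed to the human review team rather than decided automatically.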
Twitter says they are doing more of the same
Twitter has been banning terrorism-related accounts for some time now, and the total has just passed one million. The suspensions began in mid-2015, and 574,070 accounts were banned last year alone, thanks to improvements in the algorithms that identify and remove terrorism-related content. This technology supplements reports from Twitter users and makes the job easier for the people in charge of removing harmful content from the platform.
Twitter will also take a different approach to political campaigns starting this year: some political ad revenue will be forwarded to charity, and users will be protected from false information to some extent.
YouTube is relying on AI
Machine learning has been playing a key part in removing terrorist content from the internet, and the same is true of YouTube. Its AI can now remove 98 percent of “violent extremism” videos, up from 40 percent a year ago, and 70 percent of those videos are taken down within 8 hours. This still leaves room for improvement, but removal times are expected to drop to two hours very soon.
Google is taking the matter very seriously. The AI won’t work alone: 10,000 human flaggers will be added to the review team this year as part of the Trusted Flagger program, which includes counter-terrorism groups. Some videos fall into a “gray area”, and YouTube is restricting those as well: they cannot earn ad revenue, and their comments are disabled to prevent unwanted discussions.
What should we expect in the future?
The amount of removed content and the number of banned accounts related to online terrorism have been growing over the past couple of years. Big companies and social media platforms are clearly making an effort, but is it enough? Some people think that removing anonymous accounts from the internet would solve a big portion of the problem, and they may be right. If every platform required an ID check, there would be far less room for hate speech and terrorist content online, leaving a cleaner, regulated space where online terrorism would struggle to survive. But this would also raise questions about online privacy.
Obviously, it is hard to find a balance, so we should lean more heavily on AI, which is maturing and beginning to show better results.