
[vc_row full_width=”stretch_row”][vc_column][insikt_heading title=”Online terrorist content removal in 60 minutes or less?” title_color=”#ffffff” bg_color=”#b5bdbc” border_color=”#eb3324″][/vc_column][/vc_row][vc_row el_class=”container”][vc_column][vc_row_inner][vc_column_inner][vc_column_text css=”.vc_custom_1531749291948{border-right-width: 15px !important;border-left-width: 20px !important;padding-right: 20px !important;padding-left: 15px !important;}”]

Even if big internet companies manage to shut down online terrorist content from their platforms using sophisticated algorithms, the problem itself will not cease to exist.

[/vc_column_text][/vc_column_inner][/vc_row_inner][/vc_column][/vc_row][vc_row][vc_column width=”1/2″][vc_column_text css=”.vc_custom_1530088689611{padding-right: 15px !important;padding-left: 15px !important;}”]

Billions of ordinary citizens use social media platforms. So do terrorists. YouTube is used to distribute videos with radical content. According to the US Department of Homeland Security, Facebook is used for operational and tactical information, such as bomb recipes, weapon maintenance, tactical shooting techniques, and links to extremist sites and Facebook groups. Twitter has been used by the Islamic State for recruitment and radicalisation.

Tech companies have been tackling online terrorist content proactively, using artificial intelligence to flag it. In September 2017, Twitter announced in its transparency report that it had taken down 300,000 terrorist accounts that year alone – three quarters of them before they had sent a single tweet. YouTube had previously launched an experiment to counter ISIS propaganda aimed at potential recruits, and Facebook has likewise taken responsibility for eliminating online terrorist content from its platform.
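To make the idea of automated flagging concrete, here is a minimal sketch of the simplest possible approach: matching posts against a watchlist of phrases and queuing hits for review. The term list, function name and threshold-free design are purely illustrative assumptions – real platforms use far more sophisticated machine-learning models, not keyword lists.

```python
# Illustrative sketch only: a toy keyword-based flagger.
# FLAGGED_TERMS is a hypothetical watchlist, not a real one.
FLAGGED_TERMS = {"bomb recipe", "join our fight", "martyrdom video"}

def flag_post(text: str) -> bool:
    """Return True if the post should be queued for human review."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

posts = [
    "Check out my cooking channel!",
    "New martyrdom video uploaded, join our fight",
]
print([flag_post(p) for p in posts])  # [False, True]
```

Even this toy version shows the core trade-off the article describes: a list broad enough to catch real propaganda will inevitably also match innocent posts.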

[/vc_column_text][/vc_column][vc_column width=”1/2″][vc_single_image image=”483″ img_size=”700×300″][/vc_column][/vc_row][vc_row][vc_column][vc_column_text css=”.vc_custom_1531126596113{border-right-width: 20px !important;border-left-width: 20px !important;padding-right: 15px !important;padding-left: 15px !important;}”]

As terrorist attacks have proliferated, US and European governments have pushed responsibility onto the tech companies, urging them to take countermeasures against terrorist content spreading on social media platforms. In 2016, internet companies such as Google, YouTube, Facebook and Twitter aligned with the European Commission by applying technology to automatically detect terrorist material.

However, governments want more. In 2017, Theresa May, the Prime Minister of the United Kingdom, urged Google, Facebook and Twitter to take down terrorist content within two hours or face heavy fines. She made this announcement after meeting French President Emmanuel Macron and Paolo Gentiloni, the Prime Minister of Italy, at the General Assembly of the United Nations. May added that “the industry needs to go further and faster in automating the detection and removal of terrorist content online, and develop technological solutions which prevent it being uploaded in the first place.” Technologies such as the INVISO Intelligence Platform help law enforcement agencies detect radical content online through data mining and natural language processing.

In 2018, Facebook, Google, Microsoft and Twitter agreed, after long discussions with EU regulators, to remove terrorist content within 24 hours. The EU Commission went further and issued a recommendation in March 2018 to shorten the takedown window to a single hour. Yet even as the technology applied becomes more sophisticated and faster, these models – like all models – do not work with 100 percent accuracy. The Commission expects tech companies to delete content proactively, which may cause collateral damage due to the very nature of artificial intelligence.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column width=”1/2″][vc_single_image image=”597″ img_size=”900×500″][/vc_column][vc_column width=”1/2″][vc_column_text]

This collateral damage takes the form of false positives: the artificial-intelligence-based algorithms of the big internet companies flag and take down content that is not terrorism-related at all, removing it accidentally. Such automated takedowns harm freedom of speech.
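A quick back-of-the-envelope calculation shows why false positives dominate at platform scale. All the figures below – upload volume, the share of genuinely terrorist posts, and the classifier accuracy – are illustrative assumptions, not real platform statistics.

```python
# Illustrative base-rate arithmetic; every figure is a hypothetical assumption.
daily_uploads = 500_000_000   # posts scanned per day (assumed)
terrorist_share = 0.0001      # fraction that is actually terrorist content (assumed)
accuracy = 0.99               # classifier accuracy on both classes (assumed)

actual_bad = daily_uploads * terrorist_share          # 50,000 terrorist posts
actual_good = daily_uploads - actual_bad              # 499,950,000 legitimate posts

true_positives = actual_bad * accuracy                # terrorist posts removed
false_positives = actual_good * (1 - accuracy)        # legitimate posts removed

print(f"Terrorist posts removed:  {true_positives:,.0f}")   # 49,500
print(f"Legitimate posts removed: {false_positives:,.0f}")  # 4,999,500
```

Under these assumptions, a 99-percent-accurate model wrongly removes roughly a hundred legitimate posts for every terrorist post it catches – precisely because terrorist content is such a tiny fraction of the whole.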

Terrorists, however, are not only using the big internet companies’ platforms, but increasingly also smaller ones offering end-to-end encryption, such as Telegram, WhatsApp or Snapchat. Telegram is used by ISIS to share and disseminate propaganda, although the platform fights back against such abuse. The attacker at London Bridge in 2017 sent his last message using WhatsApp, and police had a hard time getting access to its content. Members of DAESH behind the 2015 Paris attacks also used Telegram to spread propaganda, and the platform was used to recruit people for the Christmas market attack in Berlin in 2016. Snapchat is popular among jihadist extremists too.

Even if big internet companies manage to shut out online terrorist content from their platforms using sophisticated algorithms, the problem itself will not cease to exist. Terrorists will find ways to communicate via the internet, whether through the end-to-end encrypted platforms listed above or through others as yet unknown. Governments and companies need to adapt to this ever-changing situation with evolving technology – while keeping the privacy of citizens at the forefront of the fight against terrorism.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

READ THROUGH SOME OF OUR OTHER NEWS ARTICLES

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column width=”1/4″][icon_box_content image=”579″ img_pos=”center” title=”ISIS PROPAGANDA WEBSITE SHUTS DOWN AFTER SUCCESSFUL COORDINATED PLAN” slink=”https://www.insiktai.com/isis-propaganda-website-shut-down-2/”][/icon_box_content][/vc_column][vc_column width=”1/4″][icon_box_content image=”570″ img_pos=”center” title=”THIS IS HOW EXTREMISTS TRY TO TRICK YOUTUBE” slink=”https://www.insiktai.com/extremists-trick-youtube-and-upload-propaganda/”][/icon_box_content][/vc_column][vc_column width=”1/4″][icon_box_content image=”569″ img_pos=”center” title=”EUROPEAN SECURITY SECRETARY ENTHUSIASTIC ABOUT INSIKT’S EC-FUNDED RESULTS” slink=”https://www.insiktai.com/counter-terrorism-event-visit/”][/icon_box_content][/vc_column][vc_column width=”1/4″][icon_box_content image=”156″ img_pos=”center” title=”WHAT ARE SOCIAL MEDIA COMPANIES REALLY DOING TO REMOVE CRIMINAL CONTENT?” slink=”https://www.insiktai.com/online-terrorist-content-removal-in-60-minutes-or-less/”][/icon_box_content][/vc_column][/vc_row][vc_row][vc_column][vc_btn title=”CONTACT US” size=”lg” align=”center” link=”url:http%3A%2F%2Fwww.insiktai.com%2Fcontact-us%2F||target:%20_blank|”][/vc_column][/vc_row]




Our technology has been co-funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement number 767542.