Terror content, largely linked to the Islamic State (IS), Al-Qaeda, and their affiliates, is widespread across social media platforms. It not only promotes extremism but also targets the younger generation, drawing them into believing these groups' schemes. To counter this, many platforms are working to remove such terror content wherever it appears on their services.
Facebook is also taking an aggressive stance against terror content, claiming to have removed a total of about 14 million terror-related pieces with the help of new machine learning tools. These tools not only identify such posts proactively but also shorten the time it takes to act on content reported by users.
The system works across 19 different languages and also uses audio- and text-hashing techniques to detect terrorist content. Facebook began this effort at the start of the year and reports its results quarterly, broken down into Q1, Q2, and Q3.
In Q1, Facebook took action on 1.9 million pieces of terror content; in Q2, that figure jumped to 9.4 million, the majority of which was old material. In Q3, the overall total declined to 3 million, of which 800,000 pieces were old content, still a significant increase over Q1.
Removals of user-reported terror content grew to around 16,000 in Q3, up from 10,000 in Q1. In addition, the new machine learning tools have cut the time that user-reported content stays on the platform from 43 hours in Q1 to 18 hours in Q3.
Citing these efforts, the US Department of Justice has warned IS supporters to be careful when posting propaganda on Facebook, pointing to the network's Q1 2018 figures, as pushing propaganda on the social media giant is becoming more and more difficult.