Facebook Redefines 'Terrorist Organisations' To Prevent Extremism

Social Media News

Facebook said its automated systems remove content glorifying the Islamic State group and al-Qaida before it is reported. In a recent blog post, the company discussed the steps it is taking against extremist content.

Written By Tanmay Patange | Mumbai

Facebook said its automated systems remove content glorifying the Islamic State group and al-Qaida before it is reported. In a recent blog post, Facebook discussed the steps being taken to remove extremist content and prevent extremist organisations from using the platform. In March, a deadly terrorist attack in Christchurch, New Zealand, that killed 50 people was live-streamed on Facebook.

What Facebook has done so far

In May, Facebook announced restrictions on who can use the Facebook Live feature.

"We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the steps we’re taking to address the abuse of technology to spread terrorist content," Facebook said.

Ideology vs behaviour

Facebook said it identified the groups as terrorist organisations based on their behaviour on the platform, not their ideologies.

"We initially focused on global terrorist groups like ISIS and al-Qaeda."

Facebook said it removed over 26 million pieces of content related to ISIS and al-Qaeda in the last two years and 99 per cent of extremist content was removed before it was reported. Facebook also said it banned over 200 groups and organisations related to white supremacy.

Christchurch attack aftermath

"The (Christchurch) attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content."

Facebook attributed its failure to detect and stop the Christchurch attack live stream to a lack of training data for its automated machine learning systems.

"The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology."

To address this gap, Facebook is now obtaining camera footage from firearms training programmes in the US and the UK to train its systems.

Facebook redefines terrorist organisations

Facebook said it has also updated its definition of terrorist organisations.

"While our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify."
