SOCIAL MEDIA PLATFORMS (SMPs)

Most of the large corporations with global reach that operate social media platforms, including Facebook, Google, YouTube, Twitter and Microsoft, have taken multiple measures to restrict the misuse of their platforms by terrorists and other violent extremists. Some have adopted policies that strictly prohibit online support for terrorist groups and acts, in addition to other relevant policies banning hate speech and violent content. These companies use a combination of tools to combat the misuse of their platforms, including reporting systems that allow users, governments and organizations to flag content that potentially violates a platform’s policies. This process may be supported by artificial intelligence (AI) and machine learning programs that identify and remove content, in addition to human staff who review and escalate removal requests. These efforts are complemented by the Tech Against Terrorism initiative, which provides a common platform for tackling terrorist exploitation of technology platforms and generating best practices.

FACEBOOK  

In April 2018, Facebook made its Community Standards Guidelines public for the first time, in an effort to make its content removal policy more transparent. The guidelines are written by Facebook’s content policy team in close consultation with global experts, and specify that terrorists, terrorist organizations, and hate organizations (as well as their leaders and prominent members) are not allowed any presence on the platform. Facebook provides its own definitions of terrorist organizations and terrorists under Section 2 (Dangerous Individuals and Organizations) of the Community Standards Guidelines, as follows:

“A terrorist organization is defined as:

Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.

A member of a terrorist organization or any person who commits a terrorist act is considered a terrorist.

A terrorist act is defined as a premeditated act of violence against persons or property carried out by a non-government actor to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”

It should be noted that the UN’s Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism has expressed concerns regarding the definitions of terrorism and terrorist organizations used by Facebook. The Special Rapporteur suggests that Facebook’s definitions lack a human rights context or grounding. Nevertheless, Facebook maintains that freedom of expression is one of its core values.

The removal of problematic content from Facebook works as follows. Content that violates Facebook’s community standards is removed, and the account or page receives a “strike”. When the strikes against an account or page reach a certain threshold, it faces either temporary or permanent suspension. The weight and effect of a strike depends on the severity of the violation. Facebook has stated explicitly that it does not share the number of strikes necessary for suspension, in order to avoid enabling users to “game” the system.
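
To make the strike mechanism described above more concrete, the following is a minimal sketch of a strike-based enforcement model. It is not Facebook's actual implementation: the violation categories, strike weights and suspension thresholds are hypothetical, since Facebook explicitly does not disclose the real values.

```python
# Illustrative strike-based enforcement model. All weights and thresholds are
# assumptions for illustration only; Facebook does not publish the real values.
from dataclasses import dataclass

STRIKE_WEIGHTS = {"spam": 1, "hate_speech": 3, "terrorism_support": 10}  # hypothetical
TEMP_SUSPENSION_THRESHOLD = 5       # assumed value
PERMANENT_SUSPENSION_THRESHOLD = 15  # assumed value

@dataclass
class Account:
    account_id: str
    strike_score: int = 0
    status: str = "active"

    def apply_strike(self, violation_type: str) -> None:
        """Record a strike weighted by severity and update the account status."""
        self.strike_score += STRIKE_WEIGHTS.get(violation_type, 1)
        if self.strike_score >= PERMANENT_SUSPENSION_THRESHOLD:
            self.status = "permanently_suspended"
        elif self.strike_score >= TEMP_SUSPENSION_THRESHOLD:
            self.status = "temporarily_suspended"

# Example: a single severe violation already crosses the temporary threshold.
page = Account("example_page")
page.apply_strike("terrorism_support")
print(page.status)  # temporarily_suspended
```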

Posts violating community standards are identified through a combination of AI and reports from users and content reviewers. Any Facebook user can report content that violates Facebook’s policies; such reports are automatically assigned to a human content review team based on language and violation type.
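
The routing step described above can be pictured with a simple sketch that assigns each report to a review queue keyed by language and violation type. The queue structure and report fields are hypothetical and purely illustrative.

```python
# Minimal sketch of routing user reports to review queues by language and
# violation type. Queue keys and report fields are assumptions for illustration.
from collections import defaultdict

# Each (language, violation_type) pair maps to its own review queue.
review_queues: dict[tuple[str, str], list[dict]] = defaultdict(list)

def route_report(report: dict) -> None:
    """Assign a user report to the queue for its language and violation type."""
    key = (report["language"], report["violation_type"])
    review_queues[key].append(report)

route_report({"content_id": 42, "language": "en", "violation_type": "terrorism"})
route_report({"content_id": 43, "language": "ar", "violation_type": "hate_speech"})
print(len(review_queues[("en", "terrorism")]))  # 1
```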

Regarding content reviews, in July 2018 Facebook’s Vice President of Operations published a summary of the content review team, noting that the number of people working on safety and security at Facebook had tripled over the previous year to 30,000, including around 15,000 content reviewers. These content reviewers include a mix of full-time employees, contractors, and partner companies. Content reviewers are provided with extensive training, coaching, and mental health resources including counselors and resilience programs.

The Global Head of Policy Management and the Head of Counterterrorism Policy at Facebook have stated that the company keeps its enforcement techniques confidential to prevent those who would use the platform for terrorist purposes from circumventing them. However, they have also acknowledged that the public wants to know how Facebook prevents terrorists from spreading hostile content on the platform. Facebook has therefore announced that it directly informs law enforcement whenever a possible terrorist threat is identified. To identify such content, the company uses machine learning programs built by its product team to flag material and refer it to its content review teams. On 17 July 2018, Facebook’s Vice President for Global Policy Management testified before the House of Representatives Judiciary Committee, describing machine learning as Facebook’s “first line of defense for content assessment”.

In November 2018, the Chief Executive Officer of Facebook, Mark Zuckerberg, announced a plan to address content governance and enforcement issues at Facebook. He laid out his vision for a new way for people to appeal content decisions to an independent body. In January 2019, Facebook released a Draft Charter explaining in more detail the structure of the Oversight Board for Content Decisions. As mentioned in the Draft Charter, “Facebook takes responsibility for its content decisions, policies and the values it uses to make them”. The Draft Charter states that the Oversight Board for Content Decisions will be composed of experts with experience in content, privacy, free expression, human rights, journalism, civil rights, safety and other relevant disciplines. Board members will be tasked with reviewing the specific decisions Facebook makes when enforcing its Community Standards.

In March 2019, following the terrorist attack in New Zealand, Facebook’s VP and Deputy General Counsel Chris Sonderby released an update explaining what actions Facebook took after the attack was streamed live. Facebook announced that it worked directly with law enforcement in New Zealand and took the video down immediately upon receiving reports about it. According to the company, the video was seen live by fewer than 200 people and was watched about 4,000 times in total before it was removed. Facebook received the first report only 29 minutes after the video started and 12 minutes after the live broadcast had ended. In addition, in the first 24 hours, Facebook reportedly removed about 1.5 million copies of the video of the attack, around 1.2 million of which were blocked at upload. Facebook also used audio-matching technology and URL-based blocking to help detect copies of the video and prevent their dissemination.
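
The re-upload blocking described above can be illustrated with a deliberately simplified sketch: known violating content is fingerprinted and new uploads are checked against a block list. Facebook's actual systems rely on audio and video matching that tolerates re-encoding and editing; the plain file hash used here is an assumption made only to convey the basic mechanism.

```python
# Simplified illustration of blocking re-uploads via a fingerprint block list.
# Real systems use perceptual audio/video matching; a SHA-256 of the raw bytes
# is used here only to show the upload-time check, and fails on edited copies.
import hashlib

blocked_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as a (naive) content fingerprint."""
    return hashlib.sha256(data).hexdigest()

def block_content(data: bytes) -> None:
    """Add a known violating file's fingerprint to the block list."""
    blocked_hashes.add(fingerprint(data))

def is_blocked(upload: bytes) -> bool:
    """Reject an upload whose fingerprint matches known violating content."""
    return fingerprint(upload) in blocked_hashes

block_content(b"original video bytes")               # hypothetical content
print(is_blocked(b"original video bytes"))           # True: exact copy caught
print(is_blocked(b"re-encoded copy, different bytes"))  # False: why robust matching is needed
```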

On 27 March 2019, Facebook announced that it would ban the praise, support and representation of white nationalism and white separatism on both Facebook and Instagram. Although Facebook already had rules against hate speech and hateful treatment based on race and ethnicity, it had not previously had a specific policy against white nationalism and white separatism. The company further announced that users searching for terms associated with hate groups would be directed to Life After Hate, an organization that provides crisis intervention, education, support groups and outreach.

TWITTER 

Twitter’s rules ban both support for terrorism and affiliation with groups that use terrorism, stating:

“You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people. This includes, but is not limited to, threatening or promoting terrorism. You also may not affiliate with organizations that — whether by their own statements or activity both on and off the platform — use or promote violence against civilians to further their causes.”

Twitter also maintains multiple policies applicable to terrorist use of its platform, including a Violent Extremist Groups Policy, which prohibits such groups from using Twitter’s service. Under the policy, violent extremist groups are those that:

  • Identify through their stated purpose, publications, or actions, as an extremist group
  • Have engaged in, or currently engage in, violence (and/or the promotion of violence) as a means to further their cause
  • Target civilians in their acts (and/or promotion) of violence

Other relevant policies include a Hateful Conduct Policy and a policy banning tweets that “contain violent threats or glorify violence”, including the glorification of terrorist attacks.

Twitter produces a twice-yearly Transparency Report, which provides in-depth analytics of removal requests, information requests and content removed by Twitter on its own initiative for violating its rules and policies. Twitter’s Rule Enforcement Report, a subsection of the Transparency Report, states that between January and June 2018 the company suspended 205,156 accounts for “violations related to promotion of terrorism”.

YOUTUBE & GOOGLE

YouTube has been a Google subsidiary since 2006 and thus shares some of Google’s policies and measures with respect to prohibiting and taking down terrorist content. Its Violent or Graphic Content Policy states the following:

“We do not permit terrorist organizations to use YouTube for any purpose, including recruitment. YouTube also strictly prohibits content related to terrorism, such as content that promotes terrorist acts, incites violence, or celebrates terrorist attacks.

If [you are] posting content related to terrorism for an educational, documentary, scientific, or artistic purpose, be mindful to provide enough information so viewers understand the context.”

YouTube also publishes additional policies online that help regulate the misuse of its platform, including a Hate Speech Policy banning content “that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes”; a policy against real depictions of graphic or violent content when it is “intended to be shocking, sensational, or gratuitous”; and a Dangerous Activities Policy that bans content encouraging dangerous or illegal activities including, among other things, instructional bomb making.

Some of these policies have been criticized for using vague terms and for not aligning their definitions of terrorism and incitement to terrorism with those recommended by the UN Special Rapporteur on Counter-Terrorism.

In addition to content reviews and machine learning programs, YouTube also operates a Trusted Flagger program, a tool for individuals, government agencies and NGOs to report content that violates its community guidelines. The Trusted Flagger program includes features such as bulk flagging, private support, visibility into decisions on flagged content, and prioritized review of flagged content for faster action. Google maintains the right to remove participants from the Trusted Flagger program if they regularly flag content that does not violate YouTube’s policies.
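
The prioritization feature mentioned above can be pictured as a simple priority queue in which reports from trusted flaggers are reviewed before ordinary user reports. YouTube's actual tooling is not public, so the priority values and report fields below are assumptions made purely for illustration.

```python
# Illustrative sketch of prioritizing trusted-flagger reports for human review.
# Priority values and report fields are hypothetical; this is not YouTube's API.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so heap entries never compare dicts
review_queue: list[tuple[int, int, dict]] = []

def submit_report(content_id: str, trusted_flagger: bool) -> None:
    """Queue a report; trusted-flagger reports get higher priority (lower number)."""
    priority = 0 if trusted_flagger else 1
    heapq.heappush(review_queue, (priority, next(_counter), {"content_id": content_id}))

def next_for_review() -> dict:
    """Return the highest-priority report awaiting human review."""
    return heapq.heappop(review_queue)[2]

submit_report("video_a", trusted_flagger=False)
submit_report("video_b", trusted_flagger=True)
print(next_for_review()["content_id"])  # video_b is reviewed first
```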

In a December 2017 blog post titled “Expanding our work against abuse of our platform”, YouTube CEO Susan Wojcicki stated that Google would increase the number of people working to address content that violates its policies to 10,000 in 2018. In addition to human monitoring, Google uses machine learning to flag and remove violent extremist content from YouTube. Between June and December 2017, Google removed 150,000 videos for violent extremism, 98% of which were flagged by its machine-learning algorithms. Today, nearly 70% of such content is taken down within eight hours of being posted and nearly 50% within two hours.

Google provides quarterly transparency reports with extensive details on content removed from YouTube. Data from the third quarter of 2018 indicates that between July and September, YouTube removed 3,303 channels and 10,394 videos for the promotion of violent extremism.

MICROSOFT  

In a 2016 official blog post, Microsoft recognized that while it doesn’t run any leading social network services or video sharing platforms, occasionally “terrorist content may be posted to or shared on Microsoft-hosted consumer services”. In another blog post in May of that year, entitled “Microsoft’s approach to terrorist content online”, Microsoft detailed how it combats terrorist abuse of its services and defined terrorist content as follows:

“For purposes of our services, we will consider terrorist content to be material posted by or in support of organizations included on the Consolidated United Nations Security Council Sanctions List that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups.”

The Microsoft Service Agreement reflects this definition by including in its Code of Conduct that when using Microsoft’s Services, one must agree to not “engage in activity that is harmful to himself, the Services or others (e.g., transmitting viruses, stalking, posting terrorist content, communicating hate speech, or advocating violence against others)”. In instances where its code of conduct is violated, Microsoft utilizes its “notice-and-takedown” process to remove prohibited content. Microsoft provides a specific terrorist content form that individuals, governments, and organizations can use to report illicit content. 
