Noting that “terrorism is one of the truly urgent issues of our time,” Microsoft recently said it is taking several steps to identify and combat extremist content online.
Those measures include a change in its terms of use to prohibit the posting of terrorist content on Microsoft consumer services like Xbox Live, OneDrive, Skype and Outlook, according to a blog post on Friday. The company defines such content as material posted by or supporting groups identified as terrorist organizations on the United Nations Security Council Sanctions List.
Microsoft said it is also helping to launch a new public-private partnership aimed at preventing “terrorist abuse of Internet platforms,” and investing in research into technologies that can proactively search for and identify terrorist material online.
‘Notice-and-Takedown Process’
Hate speech and messages advocating violence against others were already prohibited under the terms of use for Microsoft’s hosted services. The policy change announced Friday adds a prohibition on terrorist content as well.
Such content includes material posted by or supporting U.N.-identified terror groups that “depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups,” Microsoft said in a post on its On the Issues Blog.
Microsoft has also posted an online form specifically for reporting terrorist content found on its consumer services. The company already has a “notice-and-takedown” process for removing prohibited content from its sites, and will apply that process to terrorist content as well.
However, Microsoft noted that it will remove links to terrorist content from its Bing search results only when such action is required under local law. In France, for example, the company said it is already “routinely provided by the police authority with links to terrorist-related content that is unlawful there.”
Promoting More ‘Informed Choices’
To encourage more “informed choices” by people who might be exposed to harmful content, Microsoft said it would explore new partnerships with non-governmental organizations to promote public-service announcements “with links to positive messaging and alternative narratives for some search queries for terrorist material.”
Because keeping terrorist content offline can often be a game of “whack-a-mole,” Microsoft said it is also providing funding and technical support to Hany Farid, a computer science professor at Dartmouth College who has developed techniques for identifying fake and manipulated photos online.
In January, U.S. officials met with technology company representatives following President Barack Obama’s call for the government and private sector to work together to counter violent extremism online. However, organizations like the Center for Democracy and Technology warn that such efforts need to be managed carefully to ensure transparency and avoid overly restrictive limits on lawful speech.
“Whether through new technology, broader terms of service statements, or more internal resources dedicated to taking down ‘inappropriate’ content, Internet companies are being asked to do more to combat extremism, even as various stakeholders are debating whether such efforts would be more harmful than beneficial,” Irina Raicu, Internet ethics program director at Santa Clara University’s Markkula Center for Applied Ethics, wrote in a commentary in December.
Raicu said people need to be realistic about the limited impact and the potential dangers of each of the measures that have been proposed in response to violent extremist content. “[H]owever, acknowledging the fuzzy edges of the definition of ‘extremist’ should not prevent the case-by-case evaluation and the swift removal of violent incitement,” she added.
Microsoft said it also plans to add new resources to its online safety pages for youths to “help young people distinguish factual and credible content from misinformation and hate speech.”