YouTube and its parent company Google have confirmed that they use automated technology to identify and remove videos containing extremist messages or displays of extremist behavior.
Most online companies rely on their users to flag inappropriate content. Large platforms such as Facebook and YouTube, however, have introduced automated processes to remove extremist material.
The method uses the same technology originally developed to detect copyrighted material: the software compares a video's digital fingerprint, or hash, against a database of banned content. The same technique has also been used to block online images of child abuse.
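The general idea of hash matching can be illustrated with a minimal sketch. This example uses a plain SHA-256 digest and a hypothetical hash database for simplicity; real systems such as those described here use robust "perceptual" hashes that survive re-encoding, cropping, and other edits, which an exact cryptographic hash does not.

```python
import hashlib

# Hypothetical database of hashes of previously banned files.
# (Illustrative only: a plain SHA-256 digest catches byte-for-byte
# identical re-uploads, but any modification changes the digest.)
BANNED_HASHES = {
    hashlib.sha256(b"example banned payload").hexdigest(),
}

def is_banned(data: bytes) -> bool:
    """Return True if the content matches a known banned hash."""
    return hashlib.sha256(data).hexdigest() in BANNED_HASHES

# An exact re-upload of banned content is caught...
print(is_banned(b"example banned payload"))   # True
# ...but even a one-byte change slips past an exact-match hash,
# which is why production systems rely on perceptual hashing.
print(is_banned(b"example banned payload!"))  # False
```

This limitation of exact matching is also why, as noted below, the approach works well against reposts of known material but has limits against new content.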
The companies have not released details about how the database is indexed or what it contains, fearing that terrorists could learn to evade the system and continue publishing unacceptable content online.
Other experts say the companies avoid discussing the matter publicly because they do not want to be associated with censorship. Child pornography and copyright infringement are clearly illegal; extremist content, by contrast, is open to interpretation and full of nuance.
“There’s no upside in these companies talking about it. Why would they brag about censorship?” said Matthew Prince, chief executive of CloudFlare.
The companies have to decide what constitutes an extremist message and what separates it from free speech. The working definition generally covers graphic violence, incitement to violent acts, endorsement of terrorist organizations or their actions, and encouraging people to join terrorist or violent groups.
The process prevents the reposting of content that has already been banned, but it has limits when it comes to new material.
Terrorist organizations such as ISIS use the web as a propaganda space and a recruitment tool. The companies are trying to stamp out the practice and keep extremist behavior and content off their social networks.
Facebook and Twitter said they remove such accounts as soon as they are created.
Microsoft also said it removes terrorist content from its services, including Docs, Xbox Live, and Outlook. Its PhotoDNA technology, originally developed to detect images of child abuse, identifies copies of content that has already been flagged as terrorist material.
On its Bing search engine, Microsoft says it has started removing search results linked to extremism and has begun promoting more positive results for terrorist-themed queries.
Google also has a high-performance automatic system that identifies extremist videos and messages.
The companies began filtering the content of their products and services following a proposal from the Counter Extremism Project.
Image Source: Wikipedia