Shortly after the shooting attack at a synagogue in Pittsburgh, I noticed that the word “Jews” was trending on Twitter. As a social media researcher and educator, I was concerned that the violence would spread online, as it has in the past.

The alleged attacker’s activity on the social network Gab has drawn attention to that site’s role as a hate-filled alternative to more mainstream options such as Facebook and Twitter. The latter are among the social media platforms that have pledged to fight hate speech and abuse on their sites.

However, when I explored online activity in the wake of the shooting, it quickly became clear to me that the problems are not confined to sites like Gab. On the contrary, hate speech is still easy to find on mainstream social networks, including Twitter. I have also identified additional steps the company could take.

Incomplete responses to new terms of hate

I expected new threats related to the Pittsburgh shooting to appear online, and there were signs that this was already happening. In a recent anti-Semitic attack, Nation of Islam leader Louis Farrakhan used the word “termite” to describe Jewish people. I searched for this term, knowing that racists were likely to use it as a coded keyword to express anti-Semitism while avoiding detection.


Twitter had not suspended Farrakhan’s account in the wake of his anti-Semitic statement, and the network’s search function automatically suggested that I might be looking for the phrase “termite eats bullets.” That turns Twitter’s search box into a signpost for hate speech.

However, the company had apparently adjusted some of its internal algorithms, because my search results showed no tweets using the word “termite” in an anti-Semitic way.

Messages that have gone unnoticed for years

As I continued searching for hate speech and calls to violence against Jewish people, I found even more disturbing evidence of flaws in Twitter’s content moderation system. In the wake of the 2016 U.S. presidential election and the discovery that Twitter had been used to influence it, the company said it was investing in machine learning to “detect and mitigate the effect on users of fake, coordinated, automated account activity.” Based on what I found, these systems have not detected even clear, direct violent threats and hate speech that have been on the site for years.

[“Kill Jews and kill them for fun.”] A simple example of a hateful tweet that was allowed to remain on Twitter for more than four years. Screenshot by Jennifer Grygiel, CC BY-ND

When I reported a tweet posted in 2014 that proposed killing Jewish people “for fun,” Twitter took it down the same day, but its standard automated notice gave no explanation of why the tweet had remained untouched for more than four years.

Hate that fools the system

As I reviewed hateful tweets that had gone undetected for years, I noticed that many contained no text, only an image. Without text, tweets are harder to detect, both for users and for the algorithms Twitter uses to identify hate. But users who deliberately seek out hate speech on Twitter can then scroll through the activity of the accounts they find, seeing even more hateful messages.

[“Hitlerica. Gas them all.”] Images containing hateful text are harder for algorithms to identify, but no less dangerous or harmful. Screenshot by Jennifer Grygiel, CC BY-ND

Twitter seems to be aware of this problem: users who report a tweet are encouraged to review other tweets from the same account and submit more content for review, but this still leaves room for some posts to go undetected.

Help for tech giants in a bind


As I found tweets that in my opinion violated Twitter’s policies, I reported them. Most were removed quickly, some in less than an hour. But other hateful posts took up to several days to disappear. A few text-based tweets still have not been removed, despite clearly violating Twitter’s policies. This shows that the company’s content review process is not consistent.

[“Arabs, stop fucking around and kill …. in Las Vegas, you f***ing incompetents.” “Anti-Zionists, focus on killing ….. Stop fucking around. We are losing lives.”] Tweets from January 2015 urging the killing of a specific person, reported on October 24 and 25, 2018, were still up on October 31, 2018 (edited to blank out the person’s name). Screenshot by Jennifer Grygiel, CC BY-ND

It may seem that Twitter is getting better at removing harmful content, since it is taking down plenty of posts and memes and suspending accounts, but much of that activity is not related to hate speech. Instead, much of Twitter’s attention has been focused on what the company calls “coordinated manipulation,” such as bots and networks of fake profiles run by government propaganda units.

In my opinion, the company could take a significant step by enlisting the help of the public, along with researchers and experts like my collaborators and me, to detect hateful content. It is common for technology companies, Twitter included, to offer payments to people who discover vulnerabilities in their software. However, all the company offers users who report problematic content is an automatically generated message saying “thank you.” The disparity between how Twitter handles coding problems and content complaints conveys the message that it prioritizes its technology over its community.

Instead, Twitter could pay users to report content that violates its community guidelines, offering financial rewards for pointing out the social vulnerabilities of its system, just as those users would be rewarded for identifying software or hardware problems. A Facebook executive has expressed concern that this potential solution could backfire and generate more hate online, but I believe the rewards program could be structured and designed in a way that avoids that problem.

Much remains to be done

Twitter’s problems go beyond what is posted directly on its own site. People who post hate speech often take advantage of another key feature of Twitter: the ability to include links to other content on the internet. That function is central to how Twitter is used, letting people share content of mutual interest across the network. But it is also a way of spreading hate speech.

For example, a tweet can look completely innocent, saying “This is funny” and including a link. But the link, pointing to content not hosted on Twitter’s servers, leads to a hate-filled message.

[“The Jews will burn @odioalosnegros” “Adolf Hitler @quemajudíos”, various versions]. A surprising number of profiles combine Twitter names and handles into hateful messages. Screenshot by Jennifer Grygiel, CC BY-ND

In addition, Twitter’s content moderation system only lets users report hateful and threatening tweets, not accounts that carry similar messages in the profile itself. Some of these accounts, with photos of Adolf Hitler and names and Twitter handles that urge burning Jews, never even post tweets or follow other users. They may exist only to be found when people search for the words in their profiles, once again turning the search box into a delivery system. And although it is impossible to know for certain, these accounts may also be used to communicate with others on Twitter through direct messages, using the platform as a covert communication channel.

With no tweets or other public activity, it is impossible for users to report these accounts through the standard content reporting system. But they are just as offensive and dangerous, and they need to be evaluated and moderated in the same way as any other content on the site. As those who seek to spread hate become more sophisticated, Twitter’s community guidelines, and more importantly its efforts to enforce them, need to be updated and kept current.

If social networks want to avoid becoming, or remaining, vectors for information warfare and plagues of hateful ideas and memes, they need to be much more active and, at a minimum, have their thousands of full-time content moderation employees do what one professor managed to do over the course of a weekend.

Jennifer Grygiel is an assistant professor of communications at Syracuse University.

Disclosure statement. Jennifer Grygiel holds a small portfolio of shares in the following social media companies: FB, GOOG, TWTR, BABA, LNKD, YY, and SNAP.

This article was originally published on The Conversation. Read the original article.