A Guardian commentator is currently, and understandably, freaked out by the number of upvotes for the vicious, hateful Daily Mail comments they’ve been reading, while ‘Reform’ posts are swamping Twitter feeds. ‘Bots’ are occasionally mentioned in political coverage, but few people are aware of how far bot activity goes, and to what depths it can reach. It’s too late to rewrite this before the election now, so below is a section from my MA dissertation in 2020, with apologies for not having tidied up the academic language. Particularly relevant bits are in bold.
A version published by Byline Times (without the academic references): https://bylinetimes.com/2024/07/03/the-spiral-of-silence-and-the-rise-of-the-bots/
THE RISE OF THE BOTS
Hostile information campaigns, Rahman et al report, include spreading disinformation or biased information; pushing narratives through traditional media, proxies or covert identities; and using automated social media accounts (bots) or inauthentic social media accounts (trolls) (2020, p6).
Governments shown to have run such propaganda operations have, in the past, used a combination of human operatives and computer programming. In Azerbaijan, Israel, Russia, Tajikistan and Uzbekistan, student or youth groups are hired by government agencies to spread computational propaganda, report Bradshaw and Howard (2019, p9). Young people are also employed in the Russian and Chinese troll farms, while in Mexico young people acting as “cyborgs” and “bot herders”, along with fully automated social media personas, formed the backbone of President Peña Nieto’s successful 2012 election campaign (Pomerantsev 2019, p62).
In the UK, the Government Communications Headquarters (GCHQ) are now “broadening their recruitment base” to deal with increased demand for cyber actions (ISC, 2019); recruiting in schools in areas of socio-economic deprivation (Kennard, 2020). As The Intercept reported, GCHQ “has developed covert tools to seed the internet with false information, including the ability to manipulate the results of online polls, artificially inflate pageview counts on web sites…and plant false Facebook wall posts” (Fishman & Greenwald, 2015).
CAPTCHAs (and reCAPTCHAs) are automated tests designed to distinguish robots from humans, used for security purposes: for example, to prevent the creation of fake online accounts. Not only have computer scientists developed machine-learning programmes which solve them (Bursztein et al. 2014); “CAPTCHA farms” are multi-million-dollar businesses, described as “digital sweatshops” and based in economically deprived countries, where human employees solve CAPTCHAs at a rate of $0.17 (£0.13) per 1,000 solved (Netacea 2019).
It is, reports InfoSecurity magazine, “a Nigerian-fraudster-style of economy with people effectively working along with the malicious bots in order to overcome human challenges. The bots are actually passing off this work to a human” (Puddephatt 2019).
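To get a sense of the scale this economy enables, here is a back-of-envelope calculation using the Netacea rate quoted above (the account volumes are hypothetical):

```python
# Rough cost of defeating CAPTCHA protection at farm rates.
# The $0.17-per-1,000 figure is the Netacea (2019) rate quoted above;
# the fake-account volumes are hypothetical.
RATE_PER_1000 = 0.17  # US dollars per 1,000 CAPTCHAs solved

for fake_accounts in (10_000, 100_000, 1_000_000):
    cost = fake_accounts / 1000 * RATE_PER_1000
    print(f"{fake_accounts:>9,} fake accounts: ${cost:,.2f} in CAPTCHA-solving fees")
```

At these rates, the CAPTCHA checks protecting a million fake account registrations cost an attacker around $170 to bypass.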
4.3 Bot Armies
Ben-David et al, in a comprehensive study of the way in which far-right groups network and circulate hate on Facebook, point out that the fastest-growing threat comes from right-wing extremists and hate groups, arguing that online, “hate practices are a growing trend and numerous human rights groups have expressed concern about the use of the Internet—especially social networking platforms—to spread all forms of discrimination” (2016, p1170). Lingiardi et al (2019) found that women, gay and lesbian people, and immigrants were the main targets of hate speech in Italy.
In fact, the targets of this hate speech are, as Antonio Guterres puts it, “any so-called other” (2019). In order for such hate speech to reach the necessary prominence in public media and discourse, it must be, firstly, visible. This is achieved by targeting prominent celebrities, together with politicians, journalists, lawyers and anyone else who might be expected to speak for the “so-called other”. Racist abuse predominates and women in power are repeatedly targeted. A 2017 UK survey by Amnesty found that Asian and black women MPs received 35% more abusive tweets than white women MPs.
However, the fact that hate speech is being “weaponized for political gain” (UN, 2019) does not mean, as the UN suggests, that these are not “the loud voices of a few people on the fringe of society”. On the contrary, the “weaponization” consists precisely of reinforcing and amplifying those fringe voices, forcing them into the mainstream.
Researchers at Carnegie Mellon University, in a preliminary examination of over 200 million tweets discussing the coronavirus, found that about 45 percent were sent by accounts resembling “computerized robots” more than humans; for example, tweeting more than would be humanly possible (Owen, 2020). Collating reports of such activities worldwide, which is rarely if ever done, produces an overwhelming picture, even though the numbers are almost certainly underestimates: “almost all bad bots are highly sophisticated and hard to detect” (Levine, 2016).
The individuals behind the bots remained unknown. Katie Joseff, from the Digital Intelligence Lab, who co-authored the report, said that anyone could be behind them. “It wouldn’t be at all out of the realm of question for Nazis or anyone on the alt-right to be able to use bot accounts” she said. “They are very accessible, and people who just have normal social media followings, or even high schoolers, know how to buy fake accounts”.
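The simplest of the detection signals mentioned above, tweeting more than would be humanly possible, can be illustrated in a few lines. This is a toy sketch, not any research team’s actual method; the threshold and account data are invented, and real detectors combine many behavioural features:

```python
# Naive rate-based bot flagging, illustrating the "more than humanly
# possible" signal described above. The threshold and sample data are
# hypothetical; real systems combine dozens of behavioural features.
def flag_probable_bots(accounts, max_daily_tweets=144):
    """Flag accounts averaging more tweets per day than a human could
    plausibly manage (144/day = one tweet every 10 minutes, non-stop)."""
    return [name for name, (tweets, days) in accounts.items()
            if days > 0 and tweets / days > max_daily_tweets]

# account -> (total tweets, days since account creation) — fabricated
sample = {
    "@patriot_eagle_88": (91_250, 120),   # ~760 tweets a day
    "@jane_doe":         (3_400, 2_000),  # ~1.7 tweets a day
}
print(flag_probable_bots(sample))  # ['@patriot_eagle_88']
```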
At least 60 percent of the tweets about the 2018 Central American refugee caravan, which saw thousands of migrants making their tortuous way through Mexico to the US border, were estimated to have been sent by bots, which had evolved from “simply sending automated tweets that Twitter might delete” to working to “amplify and spread the divisive Tweets written by actual humans” (Lapowski, 2018).
The Anti-Defamation League found that between 30 and 40 percent of accounts regularly tweeting hatred against Jewish people were likely to be bots (van Sant, 2018). In total, according to the ADL report, they produced 43 percent of all anti-Semitic tweets (2018). The report, which came out the day before 11 people were murdered in a shooting at a Pittsburgh synagogue, concluded that political bots were “playing a significant role in artificially amplifying derogatory content over Twitter about Jewish people”.
In Indonesia in 2019, the BBC reported that any account using the hashtag #FreeWestPapua, representing the campaign for independence from Indonesian annexation, was immediately flooded with automated messages promoting the Indonesian government. The same Twitter bots also targeted Veronica Koman, an Indonesian human rights lawyer, with rape and death threats (Strick & Syavira, 2019).
In Finland, a similar hate campaign was launched against a journalist who, ironically, had broken the story about the pro-Kremlin propaganda machine operating through Twitter bots and bot networks (BBC, 2018).
One of the most active accounts spreading “anti-Muslim hate” in the UK in 2017 was among thousands of accounts subsequently determined to be fake and created in Russia. It had also spread pro-Brexit messages. The Twitter account of “Jenna Abrams”, who tweeted anti-Muslim and anti-feminist hate to over 70,000 followers, was revealed as another bot (Hope Not Hate 2019).
In fact, one-third of the Twitter traffic regarding the Brexit referendum was generated by merely 1% of the accounts, a large majority being automated or semi-automated bots, reported Schaefer et al (2017, p4). They also found evidence of a “massive” army of Japanese bots, run by extremist supporters of the successful right-wing candidate Shinzo Abe, which flooded social media with aggressive and hateful tweets during the 2014 election.
After the 2017 US Senate Intelligence Committee hearings, attempts to restrict “botnets” – inter-connected webs of accounts – resulted in over 117,000 “malicious applications” and more than 450,000 suspicious accounts being blocked, report Vasilkova and Legostaeva (2019, p126). Malicious bots have been found operating across every topic, including climate change, where a quarter of the tweets attacking both the science and Greta Thunberg were found to come from bots (Milman, 2020).
When automated bots can tweet thousands of times a day, and advanced bots have developed to pick up human-written messages and spread them automatically, does it even make sense, in such an artificial landscape, to talk about “hate speech”?
4.4 How to Fake the Hate
By regularly culling millions of fake (automated) accounts and hate posts, Twitter and Facebook have made headlines, and given the impression that the issue of fakery and hate online is at least partially being dealt with. Most recently, “a global network of fake accounts used in a coordinated campaign to push pro-Trump political messages” was deleted by both platforms (Horwitz & McMillan 2019). Very few people are aware of the real extent and reach of these programmes, or of the degree to which digital spaces can be manipulated.
Public awareness can extend to the issue of “fake followers”, although people are less aware of how cheap and easy it is to purchase them: $50 for 2,500 “followers”, for example (Hubspot, 2019). Again, one of the most high-profile examples is Donald Trump, over 60 percent of whose Twitter followers were estimated to be “bots, spam, inactive or propaganda” (Fishkin, 2018). Both governments and ‘legitimate human users’ who promote hate speech online have cheap, easy access both to buying thousands of followers and to using bots to retweet their own messages, each other’s, or anyone else’s.
Equally, very few people know that in 2019, researchers at Beihang University and Microsoft China disclosed that they had developed a bot that reads and comments on online news articles. It consists of a reading network that “comprehends” an article and extracts its important points, and a generation network which then writes a comment based on those points and on the article’s title (Yang et al. 2019). “Our model can significantly out-perform existing methods in terms of both automatic evaluation and human judgment”, say the authors.
It is important to note that there were “existing methods”; something of which even the few journalists who responded to Microsoft’s announcement seemed unaware. “Essentially,” reported Vice, “the paper is suggesting that a system that automatically generates fake engagement and debate on an article could be beneficial because it could dupe real humans into engaging with the article as well”. The researchers, Vice noted, left that statement out of the updated version of their report. Instead, they acknowledged that a bot which pretends to be human, and which comments on news stories, may pose some “risks” (Cole 2019).
As the Irish Times pointed out, the code was now available on the free code-sharing platform GitHub: “so, although Microsoft acknowledges it would be unethical to use this to deceive people, there is nothing stopping those with the technical know-how from doing so” (Boran 2019).
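The two-stage architecture described above is easy to caricature in code. The sketch below is a deliberately crude stand-in, not the Yang et al system: their “reading” and “generation” stages are trained neural networks, whereas this toy substitutes a word-overlap salience score and a fixed comment template, and every name and threshold in it is invented:

```python
# Toy illustration of the "read, then comment" pipeline described
# above. The real system (Yang et al. 2019) uses neural networks for
# both stages; here the "reading network" becomes a word-overlap
# salience ranking and the "generation network" a canned template.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def salient_sentences(title, article, top_n=2):
    """'Reading' stage: rank sentences by overlap with the title and
    by the frequency of their words in the article as a whole."""
    title_words = set(tokenize(title))
    doc_counts = Counter(tokenize(article))
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())

    def score(sentence):
        words = tokenize(sentence)
        overlap = sum(1 for w in words if w in title_words)
        frequency = sum(doc_counts[w] for w in words) / (len(words) or 1)
        return overlap * 2 + frequency

    return sorted(sentences, key=score, reverse=True)[:top_n]

def generate_comment(title, article):
    """'Generation' stage: wrap the extracted points in a
    human-sounding template."""
    points = " ".join(salient_sentences(title, article))
    return f'So "{title}" is really saying: {points} Hard to disagree.'

article = ("The council approved the new cycle lanes on Monday. "
           "Local businesses warned of lost parking revenue. "
           "Campaigners said the lanes would cut traffic and pollution.")
print(generate_comment("City approves cycle lanes", article))
```

Even at this crudity the output is recognisably “engagement”; the published model replaces each stage with a trained network and, by the authors’ own evaluation, produces comments that score well under human judgment.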
Alongside this (and almost entirely unreported) are bots which can be used to upvote or downvote comments on a range of media platforms. The influential news and discussion platform Reddit exemplifies the problem, with dozens of sites offering the chance to buy automated upvotes or downvotes there. Toffee (2017) proved that it was both “easy and cheap” to maliciously manipulate posts and comments. Facebook and Disqus have also been linked to automated bot voting.
The results, said Carmen et al, showed that anyone with a political agenda can secretly manipulate Reddit votes, boosting visibility and interaction, at an average cost of $1 per thread (2018).
This has an effect. “Readers tend to estimate public opinion based on those comments,” reported Jeong et al, and also change their own opinions in the face of it. Jeong et al examined one of South Korea’s main news portals, Naver News, and discovered more than ten thousand comment threads which were highly likely to have been manipulated. They found that co-ordinated manipulation had significantly increased in recent years (2020).
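One of the simplest signals analysts look for when flagging co-ordinated manipulation, bursts of near-identical comments arriving close together, can be sketched in a few lines. This is an illustrative heuristic only, not Jeong et al’s methodology; the thresholds and example data are invented:

```python
# Minimal sketch of one coordination signal: near-duplicate comments
# posted within a short time window. An illustrative heuristic, not
# Jeong et al's methodology; thresholds and sample data are invented.
from difflib import SequenceMatcher

def near_duplicates(comments, similarity=0.8, window_seconds=300):
    """Return pairs of comments that are textually similar and were
    posted within `window_seconds` of each other."""
    flagged = []
    for i, (t1, text1) in enumerate(comments):
        for t2, text2 in comments[i + 1:]:
            close_in_time = abs(t1 - t2) <= window_seconds
            similar = SequenceMatcher(None, text1, text2).ratio() >= similarity
            if close_in_time and similar:
                flagged.append((text1, text2))
    return flagged

# (timestamp in seconds, comment text) — fabricated examples
thread = [
    (0,   "These people should all be sent back."),
    (60,  "These people should all be sent back!!"),
    (90,  "Interesting piece, thanks for reporting."),
    (120, "These people should be all sent back."),
]
print(len(near_duplicates(thread)), "suspicious pairs flagged")
```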
4.5 Traditional Media: Amplifying Hate in the UK and Beyond
Hiding in plain sight, the largest perpetrators of hate speech in the UK, online and off, are the press. The right-wing British media was “uniquely aggressive in its campaigns against refugees and migrants”, reported the United Nations High Commissioner for Refugees (Berry et al. 2015). Irish Travellers, Gypsies and Roma had also been the subject of focused long-term attacks, most egregiously by the Sun, the Daily Mail and the Daily Express, all of which display prominent headlines in every UK high street.
While German newspapers were also found to be using dehumanising language which portrayed refugees, for example, as a “common threat” (Fischer, 2019), the UN Human Rights Commissioner highlighted the “decades of sustained and unrestrained anti-foreigner abuse, misinformation and distortion” in the UK press, the most extreme of which was comparable to the language which incited the Rwandan genocide (UNHCR, 2015). In 2017, the former UK Conservative minister Baroness Sayeeda Warsi called hate speech in the UK press a “plague…poisoning our public discourse…crowding out tolerance, reason and understanding”; in this case with Muslims the principal target (Ruddick, 2017).
Although physical sales across the mainstream press have been falling, what is little understood is these papers’ worldwide online reach. They are still thought of as “British”, but the Sun has a global online readership of over 32 million monthly; the Daily Mail and the Express around 25 million (Tobitt, 2019). Recent campaigning by “Stop Funding Hate”, which persuaded companies, through consumer pressure, not to advertise on such platforms, seems to have had an effect, with the group recording a drop in anti-migrant front pages from over 100 in 2016 to zero in 2019 (2020).
Alongside the online reach, however, come the newspaper comments sections, which are meant to be regulated by the newspapers themselves and by the Independent Press Standards Organisation (IPSO). As campaign group “Hacked Off” report, the sections are instead a “Wild West” of unregulated inflammatory hate speech, where racist comments can receive hundreds or thousands of “upvotes” and remain on the site for months, if removed at all (2020).
Comments on MailOnline’s (the Daily Mail’s) coverage of a fire at a refugee camp in Lesbos on September 9th 2020 (Pleasance, 2020) demonstrate that the hate speech of previous headlines (“Migrants: How Many More Can We Take?”, 27th August 2015) has moved below the line, where it has become even more virulent.
“Turning Europe into the same cess-pit they come from” reads one top-rated comment. “These are the kind of Sc++m the UK is letting in” and “These people are from countries that are essentially cesspits and they seem to want to turn the world into the same hole they crawled out of” read others. The refugees are compared to “soldier ants”, “invaders” and “money grabbers”. A month after publication, the comments were still in place. The UK government, Hacked Off report, intended to exempt newspaper comments sections from any Bill regulating the internet (2020).
Two UK newspapers not cited by the UN in its denunciation of hate speech were the traditionally liberal, sometimes seen as left-wing, Guardian and its Sunday paper, The Observer. Although they produce headlines such as the previously mentioned, and inaccurate, “There’s a social pandemic poisoning Europe: hatred of Muslims” (2020), which can do little but spread fear and division among the communities it apparently aims to protect, the comment sections of both papers are well moderated and largely free of hate speech.
However, the Observer’s coverage above, and its more recent coverage of the QAnon conspiracy, illustrate a larger problem: the apparent inability, or unwillingness, of mainstream media to address computer-generated propaganda.
The QAnon conspiracy, described as “the Nazi cult rebranded” had, the Observer reported, grown to “terrifying” levels in the UK and elsewhere: with membership on Facebook groups up by 120 percent; engagement rates up by 91 percent; and millions of tweets and posts using QAnon-related phrases and hashtags (Doward 2020). “Britain is the second country in the world for output of Q-related tweets” reported the website Wired, basing its piece, as did the Observer, on a report by the Institute of Strategic Dialogue (ISD) (Volpicelli 2020).
Tech investigators had found, two years previously, that QAnon had been artificially boosted by bots from the beginning (Glaser, 2018). Researchers at the Middlebury Institute of International Studies at Monterey had successfully programmed the latest in artificial-intelligence neural networks to reproduce QAnon propaganda (McGuffie, Newhouse 2020). The ISD’s report mentioned automated bots once, in passing, with a reference to a report suggesting that Russian bots may have boosted QAnon traffic on Twitter (Gallagher et al. 2020, p12); neither the Observer nor Wired mentioned it.
Even in media which do not promote hate speech, the strategies behind, and opportunities for, the artificial inflation of posts, tweets, hashtags and comments go unexamined. As yet, there has been no investigation comparable to the Naver News research (Jeong et al. 2020) into the sources and manipulation of hate speech in UK online comments, or into the upvoting of such comments.