Is it time for an offence of ‘dishonest algorithmic manipulation for electoral gain’?

Dan Jerker B Svantesson and William van Caenegem
Bond University, Australia

Abstract
Algorithms impact important aspects of our lives and of society. There are now strong concerns about algorithmic manipulation, used by domestic actors or foreign powers, in attempts to influence the political process, including the outcome of elections. There is no reason to think that Australia is immune or protected from such activities, and we ought to carefully consider how to tackle such threats – threats that go to the very heart of a democratic society. In this article, we examine the potential introduction of a Commonwealth offence of ‘dishonest algorithmic manipulation for electoral gain’.

Keywords
algorithmic manipulation, fake news, content regulation, Internet law, social media, election, propaganda, law reform

Corresponding author: Dan Svantesson, Bond University, Robina 4229 Qld, Australia. Email: dasvante@bond.edu.au

Today, important aspects of our lives are affected, or indeed controlled, by algorithms. Algorithms may determine whether or not we obtain a loan, and how high our insurance premiums will be. Many, indeed most, of these algorithm-influenced decisions occur ‘behind the scenes’, and we are rarely aware of whether a particular decision is based on an algorithm and, if so, how it has affected that decision.

In the online environment, algorithms are even more dominant. They determine crucially important matters such as what search results we are presented with, which emails reach us (and which are relegated to the trash bin) and what newsfeeds we see. We use the term ‘algorithm’ here in a broad and flexible sense; in simple terms, an algorithm may be seen as the instructions or rules governing a process or calculation, typically as part of some kind of automated technological problem solving or assessment. Whether that process results from, or can be classified as, ‘Artificial Intelligence’ or the like is neither here nor there, and as yet unforeseen or unknown technological developments are likewise covered by this broad and flexible umbrella. At any rate, not least in the online context, we – the ones affected by the algorithms – usually have no idea how they operate or the extent to which hidden biases dictate what we experience online.

Information has always been subject to manipulation and presentation to the advantage of hidden persuaders; what is new in our era is the extent to which automated processes gather, package and present information to us in a manner that suggests neutral technological determination. Yet sleight of hand often hides behind seemingly ungoverned processes.

The legal issues that all this gives rise to are now gaining considerable attention. In this article we seek to briefly, and admittedly superficially, introduce a legal problem and outline the contours of a possible solution – one that will need considerable discussion and debate over some time. Our focus is on a particular aspect of how our online experience is guided by algorithms: what may be termed dishonest algorithmic manipulation for electoral gain.
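To make the idea of hidden bias within an algorithm concrete, the following is a minimal, purely illustrative sketch of a toy newsfeed-ranking function. All names, weights and data are invented for illustration and do not describe any real platform’s ranking system; the point is simply that the same seemingly neutral code, with one hidden parameter set, quietly reorders what a reader sees.

```python
# Purely illustrative sketch: a toy newsfeed-ranking 'algorithm'.
# All names, weights and data are hypothetical; no real platform is described.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # eg likes/shares, normalised 0..1
    recency: float      # newer posts score higher, normalised 0..1
    topic: str

def rank_feed(posts, hidden_boost=None):
    """Order posts by a simple relevance score.

    `hidden_boost` maps a topic to an extra weight the reader never sees.
    Left as None, the ranking is 'neutral'; when set, the same code quietly
    prioritises one message over competing views.
    """
    def score(post):
        base = 0.6 * post.engagement + 0.4 * post.recency
        return base + (hidden_boost or {}).get(post.topic, 0.0)
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("Candidate A announces policy", 0.40, 0.90, "candidate_a"),
    Post("Candidate B announces policy", 0.45, 0.85, "candidate_b"),
    Post("Weekend sport results", 0.70, 0.50, "sport"),
]

print([p.text for p in rank_feed(posts)])                        # 'neutral' ordering
print([p.text for p in rank_feed(posts, {"candidate_a": 0.5})])  # quietly skewed ordering
```

On this toy data, the ‘neutral’ call ranks the sport post first, while the second call, differing only in the hidden boost, pushes Candidate A’s message to the top – and the reader has no way of knowing which version produced their feed.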
More specifically, having discussed what dishonest algorithmic manipulation for electoral gain is, we examine two alternatives for addressing this concern. The first is to focus predominantly on the content we are being fed online, while the second option – our preference – is to focus predominantly on the method by which content is selected for us. This second option limits the risk to free speech.

Viewed from a slightly broader perspective, what we examine is not something new to the law. There is a long history of regulation of ‘truth in political advertising’.1 Furthermore, courts have generally held against extending consumer protection law to sociopolitical speech.2 We do not advocate taking any different course here: as will be explained below, we take a narrower and more cautious approach, focused on technical means.

‘Fake news’ and dishonest algorithmic manipulation for electoral gain

We are said to now be in the ‘post-truth’ era, and much attention has lately been directed at so-called ‘fake news’, election manipulation, the role of ‘big data’3 and foreign powers interfering with the political process in sovereign States. These are serious matters and are treated as such, for example in the context of national defence strategies – not least with 2017 being a year of key elections in, eg, both Germany and France. However, so far, limited attention has been directed at what our criminal laws can do to help combat these destructive forces.

Our views – including our political views – are subjected to manipulation in a variety of ways. Normally it is up to citizens to evaluate the political bias that attaches to information from particular sources. However, where such manipulation occurs in the invisible engine rooms of the Internet, citizens may be particularly vulnerable. For example, online it is difficult to assess why we see the particular content we are served; transparency is lacking and hidden forces may be at work that we do not understand. This is significant, especially where people get the majority – if not all – of their news online, and where that environment gives the impression of neutrality or impartiality. Many of us now rely solely on online content to make important political decisions as electors – such as who to vote for in national elections.

Imagine that a marketing company employs algorithmic manipulation so that the search results, tweets and Facebook posts you see all communicate content favouring a certain product. Such manipulative and misleading commercial communication is already regulated by law, for example via the broadly worded section 18 of the Australian Consumer Law (ACL): ‘A person must not, in trade or commerce, engage in conduct that is misleading or deceptive or is likely to mislead or deceive.’ However, where the same means are employed for non-commercial communication, the law says nothing, precisely because provisions such as section 18 are confined to conduct occurring in trade or commerce. Making all the search results, tweets and Facebook posts you see communicate content favouring a particular political view, person or party undermines the integrity of the democratic process, but the law is mute about it. This is simply astonishing, since much more is ultimately at stake: why are consumers protected, but voters not? Naturally, raising this question suggests that there is some identifiable truth from which algorithmic manipulation diverts the reader.
Although it often will be difficult to ascertain ‘truth’ in this context, we believe that it nevertheless may be possible to identify when such truth has been corrupted. It is an exercise commonly engaged in by courts – as for instance in cases of libel or defamation (where the matter can often not be determined scientifically, but nonetheless can be determined by the normal evidentiary processes before a court of law).

The seriousness of the situation the world is facing can escape no one who has read news reports such as Carole Cadwalladr’s fascinating description, in The Guardian on 26 February 2017,4 of what goes on behind the scenes – the hidden forces at work to feed online content to unsuspecting readers. In her article Cadwalladr discusses in detail the links between Trump’s election campaign, Brexit, social media, algorithms, fake news, bots (automated software agents), bio-psycho-social profiling, Artificial Intelligence and data analytics companies such as Cambridge Analytica. For example, in referring to her interview with Phil Howard and Sam Woolley of the Oxford Internet Institute’s Computational Propaganda Project, Cadwalladr5 observes that:

[B]efore the US election, hundreds upon hundreds of websites were set up to blast out just a few links, articles that were all pro-Trump. ‘This is being done by people who understand information structure, who are bulk buying domain names and then using automation to blast out a certain message. To make Trump look like he’s a consensus.’

[. . .] You can take an existing trending topic, such as fake news, and then weaponise it. You can turn it against the very media that uncovered it. Viewed in a certain light, fake news is a suicide bomb at the heart of our information system. Strapped to the live body of us – the mainstream media.

[. . .] Many of the techniques were refined in Russia, he says, and then exported everywhere else. ‘You have these incredible propaganda tools developed in an authoritarian regime moving into a free market economy with a complete regulatory vacuum. What you get is a firestorm.’

This is the world we enter every day, on our laptops and our smartphones. It has become a battleground where the ambitions of nation states and ideologues are being fought – using us. We are the bounty: our social media feeds; our conversations; our hearts and minds. Our votes. Bots influence trending topics and trending topics have a powerful effect on algorithms, Woolley explains, on Twitter, on Google, on Facebook. Know how to manipulate information structure and you can manipulate reality.

We’re not quite in the alternative reality where the actual news has become ‘FAKE news!!!’ But we’re almost there.
Out on Twitter, the new transnational battleground for the future, someone I follow tweets a quote by Marshall McLuhan, the great information theorist of the 60s. ‘World War III will be a guerrilla information war,’ it says. ‘With no divisions between military and civilian participation.’ By that definition we’re already there.

At the time of writing, the news headlines are focused on allegations of Russia seeking to influence elections in several western democracies. And in Germany, for example, steps are being taken towards a law regulating, among other matters, fake news, with a view to protecting voters and the democratic process.6 There is no reason to think that Australia is beyond the reach of threats such as those perceived in Europe and the US, and an attack on the democratic process of a country is undoubtedly an attack on the very core of that country. Consequently, we argue that Australia ought to start looking at, and acting on, these issues now, while taking a very cautious approach. In any case, we focus on conduct engaged in not simply to gain the upper hand in a political debate, but to make an electoral gain – in other words, conduct that is intended to skew electoral results and also has that potential or actual effect. Influencing opinions alone is not sufficient; the conduct must be engaged in with a view to influencing the outcome of an actual electoral process that is extant or foreseen.

Option one: Focus predominantly on the content

We realise we are in dangerous territory when regulating non-commercial speech. So, if we want to deter certain conduct in this sphere, any offence we create should be narrow, specific and clearly limited to activities whose illegitimacy is beyond doubt. In other words, there is very limited room for grey areas and rough edges.

Some countries already have in place criminal offences that may be of relevance. Canadian law, for example, contains the following criminal offence:

Every one who wilfully publishes a statement, tale or news that he knows is false and that causes or is likely to cause injury or mischief to a public interest is guilty of an indictable offence and liable to imprisonment for a term not exceeding two years.7

While the language used suggests that attention is directed towards the relevant activity (ie the wilful publishing of the content), this offence is clearly predominantly aimed at addressing the content. No assessment is necessary in relation to the activity as such, as the legality of the activity depends only on the content published – provided, of course, that the mens rea (the mental element; put simply, the intention) is established.

The difficulty of applying content-focused law is well known from areas such as hate speech, injurious falsehood and defamation law. The recent debate about section 18C of the Racial Discrimination Act 1975 (Cth) is a case in point. But the problems are particularly well illustrated by a case that came before the Supreme Court of Canada. In R v Zundel,8 section 181 of the Canadian Criminal Code was applied to a publisher of material questioning the historically accepted account of the Holocaust. The Supreme Court of Canada had to examine the constitutionality of that section, and the majority concluded that:

Section 181 of the Code infringes the guarantee of freedom of expression. Section 2(b) of the Charter protects the right of a minority to express its view, however unpopular it may be.
All communications which convey or attempt to convey meaning are protected by s 2(b), unless the physical form by which the communication is made (for example, a violent act) excludes protection. The content of the communication is irrelevant. The purpose of the guarantee is to permit free expression to the end of promoting truth, political or social participation, and self-fulfilment. That purpose extends to the protection of minority beliefs which the majority regards as wrong or false. Section 181, which may subject a person to criminal conviction and potential imprisonment because of words he published, has undeniably the effect of restricting freedom of expression and, therefore, imposes a limit on s 2(b). Given the broad, purposive interpretation of the freedom of expression guaranteed by s 2(b), those who deliberately publish falsehoods are not, for that reason alone, precluded from claiming the benefit of the constitutional guarantees of free speech. Before a person is denied the protection of s 2(b), it must be certain that there can be no justification for offering protection. The criterion of falsity falls short of this certainty, given that false statements can sometimes have value and given the difficulty of conclusively determining total falsity.9

This clearly highlights the risks associated with offences predominantly focused on the content in question. We are wary of provisions such as this, as they may be vulnerable to misuse. What we propose is different: instead of focusing on the acceptability of the content, as the abovementioned Canadian offence does, we turn our attention to the acceptability of the method used to create or facilitate that content.

Option two: Focus predominantly on the method

Focusing predominantly on the method is one way to limit the scope of any offence while still making it effective in deterring exactly the secretive and invisible manipulation we are concerned about. We propose the creation of a new offence, rather than the introduction of a new tort. So how should we express such an offence? There are, of course, several possibilities, but to keep it simple, and to draw upon the wording used in other settings in the Criminal Code Act 1995 (Cth), we propose the following: ‘A person commits an offence if the person does anything with the intention of obtaining electoral gain from dishonest algorithmic manipulation.’

The terms used would need to be carefully defined in the law itself. Thus ‘electoral gain’ would here include undermining, or favouring, a political view, organisation or individual with a view to gaining an electoral advantage. Clear evidence of intent would be an essential element of the offence, and the intention requirement is one important filter restricting the application of the offence. For example, it is clear that major online platforms, such as Facebook and Google, need to increase, and are in fact increasing, their efforts to combat fake news. However, the offence is not aimed at the activities of such platforms, as they are not commonly said to intend to undermine, or favour, any specific political view, organisation or individual with a view to influencing the electoral process.
Defining ‘dishonest algorithmic manipulation’ goes to the heart of the proposed offence and will no doubt be both the most important, and most difficult, task. It would of course be premature to seek to provide a definitive definition of this key term in this short article. Rather, we note that the delineation of what amounts to ‘dishonest algorithmic manipulation’ has some core elements: there must be dishonest intent; the conduct must involve algorithms; and there must be an element of manipulation, in the sense of deliberate and hidden determination of the settings of an algorithm. The precise delineation of the offence must be decided after extensive consultation, and there will be many stakeholders. Here we make some preliminary observations that may guide any such consultation process.

We envisage several fundamentally different drafting options for delineating what amounts to ‘dishonest algorithmic manipulation’. First, it is possible to rely on a positive but non-exhaustive list outlining (examples of) what amounts to ‘algorithmic manipulation’, rather than having a broad and abstract definition. The means employed must concern the manipulation and deployment of algorithms, properly defined. The element of dishonest intent to influence voters (or the electoral process) should be separately established. It is not difficult to identify practices that would satisfy both elements. One example is where bots are ‘designed [and used] for the express purpose of disrupting free expression – for example, by hijacking Twitter hashtags and flooding them with contrary or irrelevant messages’.10 There are several bot tactics – such as hashtag spamming, creating artificial trends, smear campaigns, death-threat campaigns and political propaganda – that may be used to infringe on the free expression rights of others in a different way:

Hashtag spamming—the practice of affixing a specific hashtag to irrelevant content—renders the hashtag unusable. Artificial trends can bury real trends, thus keeping them off the public and media’s radar. Smear campaigns and death threats can both intimidate vocal political opponents and dissuade would-be speakers.11

All such bot tactics may fall within the definition of ‘dishonest algorithmic manipulation’ and be included on a positive list of offending conduct. They would also satisfy the separate requirement that the manipulation is hidden and deliberate.

A second option is to start with the assumption that any algorithmic manipulation is caught, and then couple that starting point with a negative list outlining practices that are not dishonest. Constructing such a list would take us into more treacherous terrain, as there are numerous forms of algorithmic manipulation we would not want to see affected by the offence, such as standard forms of search engine optimisation. Further, we would need to anticipate acceptable future forms of algorithmic manipulation. Thus, we see this option as less realistic.

We should also attempt to define the element of dishonest intent. This should be done by reference to the effects of such conduct – namely, the influencing of electoral choice.
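To give a concrete sense of what a tactic such as coordinated hashtag spamming looks like as a technical matter – and of the kind of fingerprint an investigator or court might look for in evidence – the following is a minimal, purely illustrative sketch. The record format, field names and thresholds are our own assumptions for illustration; they do not describe any platform’s actual data or tools.

```python
# Illustrative sketch only: a crude heuristic for spotting coordinated
# hashtag spamming of the kind described above. The record format and
# thresholds are hypothetical assumptions, not any platform's real data or API.

from collections import defaultdict

def flag_suspected_spam(posts, min_accounts=20, max_window_seconds=60):
    """Flag (hashtag, text) pairs pushed by many accounts in a short window.

    `posts` is a list of dicts: {"account", "hashtag", "text", "timestamp"},
    with timestamps in seconds. Near-identical messages on one hashtag from
    many accounts within a narrow window is one simple fingerprint of
    automated amplification.
    """
    groups = defaultdict(list)
    for p in posts:
        groups[(p["hashtag"], p["text"])].append(p)

    flagged = []
    for (hashtag, text), group in groups.items():
        accounts = {p["account"] for p in group}
        times = [p["timestamp"] for p in group]
        if len(accounts) >= min_accounts and max(times) - min(times) <= max_window_seconds:
            flagged.append({"hashtag": hashtag, "text": text, "accounts": len(accounts)})
    return flagged
```

Real investigations would of course rely on far richer signals (account creation dates, posting cadence, network structure), but even this crude heuristic illustrates that the manipulation we are concerned with leaves technical traces that can, in principle, be proved.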
Again, we could use a non-exhaustive list of instances of dishonest intent. For example, one such instance would be where algorithms are manipulated with the intent that the result unduly and invisibly prioritises one political message while drowning out competing views. Those behind the manipulation must favour a particular electoral outcome, and that preference must motivate their actions.

Finally, one can of course imagine hybrid structures – for example, combining a detailed abstract definition with a positive list of examples (the first option above) – a method used with success in other settings, such as Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts (mirrored in Part 2-3 of Australia’s ACL).

Obstacles and challenges

The above has already emphasised the considerable challenges associated with properly delineating the proposed offence so as to avoid the risk of interference with legitimate political speech. However, there are other obstacles and challenges that also need to be confronted.

As an example, we are not naïve as to the obvious difficulties involved in enforcing the offence outlined above. Where the dishonest algorithmic manipulation originates abroad, effective enforcement may require assistance from the very State that is behind the crime in the first place. If it is indeed the case that Russia has been engaging particularly actively in dishonest algorithmic manipulation, the enforcement of the proposed offence would often require cooperation from Russia – cooperation that, of course, is unlikely to be forthcoming. Having said that, recent arrests of Russian hackers holidaying outside Russia highlight that the prospect of enforcement nevertheless should not be underestimated.12

At any rate, computer forensics is a complex area, and even where the algorithmic manipulation is domestic, proving its existence and identifying the responsible parties will not be easy. But Australia’s capabilities are steadily increasing, and it may reasonably be assumed that, if an Australian election were subjected to serious algorithmic manipulation, addressing the situation would be made a priority.

We argue that these obstacles and challenges should not prevent us from trying to address dishonest algorithmic manipulation. After all, the law serves different functions, one being to communicate societal standards. And if the law is punchy enough, it will have a general deterrent effect. The goal of clearly articulating that dishonest algorithmic manipulation for electoral gain is unacceptable would be achieved the very day such activities are made a criminal offence.

Concluding remarks

An offence of ‘dishonest algorithmic manipulation for electoral gain’ will not, on its own, solve the problems associated with our ‘post-truth’ era: ‘FAKE news’, election manipulation and foreign powers interfering with the electoral process in sovereign States. However, we suggest that such an offence may serve as one useful tool in the toolbox required to address this complex matter.

Any restriction on political activity is a sensitive matter – it goes to the heart of a democratic society. And the last thing we want is to create a tool that may be used to repress healthy political debate. However, the question is whether we can retain healthy political debate in an era of dishonest algorithmic manipulation. Given recent developments, we fear that we cannot. While recognising the difficulty of what we are proposing, we do not believe that the option of doing nothing is preferable. We cannot do nothing.
As Professor Lawrence Lessig famously noted, now almost 20 years ago: ‘Left to itself, cyberspace will become a perfect tool of control ...’13 This seems a particularly apt observation when placed in the context of dishonest algorithmic manipulation for electoral gain.

We think it is possible to fashion a new offence that specifically and narrowly applies to hidden, dishonest manipulations of the Internet that confer unfair political gain in elections, while not deterring free speech and debate. However, we hasten to acknowledge that developing such an offence requires great care and extensive consultation. Our modest aim with this article is merely to start the conversation.

Notes
1. As documented, for instance, in Graeme Orr, The Law of Politics (Federation Press, 2010).
2. See, eg, Graeme Orr, ‘Government Communication and the Law’ in Sally Young (ed), Government Communication in Australia (Cambridge University Press, 2007) 19.
3. For an interesting discussion of big data in the context of government powers see, eg, Kate Galloway, ‘Big Data: A case study of disruption and government power’ (2017) 42(2) Alternative Law Journal 89.
4. Carole Cadwalladr, ‘Robert Mercer: The big data billionaire waging war on mainstream media’, The Guardian, 26 February 2017 https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage.
5. Ibid.
6. Amar Toor, ‘Germany grapples with fake news ahead of elections’, The Verge, 19 January 2017 https://www.theverge.com/2017/1/19/14314680/germany-fake-news-facebook-russia-election-merkel.
7. Criminal Code, RSC 1985, c C-46, s 181.
8. R v Zundel [1992] 2 SCR 731.
9. R v Zundel [1992] 2 SCR 731, per La Forest, L’Heureux-Dubé, Sopinka and McLachlin JJ.
10. Nathalie Maréchal, ‘When Bots Tweet: Toward a Normative Framework for Bots on Social Networking Sites’ (2016) 10 International Journal of Communication 5022–5031 http://ijoc.org/index.php/ijoc/article/view/6180/1811.
11. Ibid.
12. ‘Russian arrested in Spain “over mass hacking”’, BBC News Technology, 10 April 2017 http://www.bbc.com/news/technology-39553250.
13. Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999) 5–6.

Author note
This article builds, and expands, on the authors’ previously published work: Dan Jerker B Svantesson and William van Caenegem, ‘Faking it: We should make manipulating algorithms for political purposes a crime’, The Conversation, 10 March 2017 http://theconversation.com/faking-it-we-should-make-manipulating-algorithms-for-political-purposes-a-crime-73970.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Dan Jerker B Svantesson is Professor and Co-Director, Centre for Commercial Law, Faculty of Law, Bond University (Australia) and researcher, Swedish Law & Informatics Research Institute, Stockholm University (Sweden).

William van Caenegem is Professor, Centre for Commercial Law, Faculty of Law, Bond University (Australia).