© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
E. Pashentsev (ed.), The Palgrave Handbook of Malicious Use of AI and Psychological Security, https://doi.org/10.1007/978-3-031-22552-9_8

8 Malicious Use of Artificial Intelligence in Political Campaigns: Challenges for International Psychological Security for the Next Decades

Marius Vacarelu
M. Vacarelu (*) National School of Political Science and Public Administration, Bucharest, Romania

Introduction

In order to understand life, a young person must learn from an early age examples from two directions: good governance, on the one hand, and the political, economic, and legal abuses and errors made by national and local rulers, on the other. After all, adult life is not lived in a laboratory environment; it will contain both happy moments and unpleasant situations, and an important part of them will be shaped by leaders' decisions and the results of those decisions. Knowing history, a young person will understand that one of the most important drivers of humanity has been, and will be for many centuries to come, politics and the competition it creates locally, nationally, continentally, and globally. Life expectancy has increased, and the complexity of social and economic life has reached a level impossible to imagine two centuries ago. Here we need to note the central position of politics, an area that any person can claim to understand, perhaps because compulsory education in every country includes the discipline of history, which contains many interesting political facts. Because life expectancy has grown to over 70 years (World Health Organization, 2021), a person now has the opportunity to vote, to observe, and to be involved in political campaigns for a period of over 50 years. Because superior political positions are not held for life, the struggle to obtain them becomes eternal.
Many administrative positions have an important political meaning, but they are achieved only as a result of elections and only if the country's top hierarchy is changing. For these positions there is not only real open political competition and campaigning but also intrigue and "behind-doors" strategies; the latter involve only a few people. This chapter is dedicated to publicly elected positions, because their results have a stronger legitimacy and the interest in winning them is not only national but also international, as the Cambridge Analytica case and many others have shown. To win such elections it is necessary to have a good campaign staff, money, and a coherent plan of travel inside the country, but above everything it is important to have a strong record of activity in cyberspace. To understand why Internet political activities have become so important, it is useful to know that at the beginning of 2022, 5 billion people were using this technology worldwide, meaning 63% of the global population (Digital Around the World, 2022); 2.9 billion people are active Facebook users and 1.93 billion use it daily (Statista, 2021); around 2.5 quintillion bytes of data are produced each day from almost every sector of the economy, and the International Data Corporation predicts that, by 2025, the world will create and replicate 163 zettabytes (163 trillion gigabytes) of data every single year (Bartlett et al., 2018). These trends guide politicians, because the natural expansion of the population will increase the proportion of the world's Internet users. Politics has now become more online than "door to door," because online campaigns can function 24 hours a day and can reach any person, even if the voter is abroad. We cannot compare the cost dimensions of traditional campaigns in every country, but we do have an estimation of their costs in the US, because here we find the highest concentration of rich people, together with the highest level of innovation used by politicians.
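The two data-volume figures quoted above are consistent with each other; a quick unit check, a sketch using the standard decimal definitions of zettabyte and gigabyte, confirms the conversion:

```python
# Unit check for the IDC forecast cited above:
# 163 zettabytes per year = 163 trillion gigabytes per year.
ZETTABYTE = 10**21  # bytes (decimal definition)
GIGABYTE = 10**9    # bytes

forecast_bytes = 163 * ZETTABYTE
forecast_gigabytes = forecast_bytes // GIGABYTE

print(forecast_gigabytes)  # 163000000000000, i.e., 163 trillion
```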
In this case, we note that between the election years of 2000 and 2008 the cost of running an election increased by roughly a third, from 4.63 billion dollars to 6.27 billion dollars; during that time, the mobile phone became the main tool of electronic campaigns. With the progression of AI, the cost increased to 14.4 billion dollars for the 2020 election year, doubling the costs of 2016 (Cost of elections, 2021), when the potential of AI was comparatively weaker. Such costs have become very expensive, and not every politician or party is able to pay them. But all the costs shown by the US elections cover not only the electronic "battlefield" but also door-to-door campaigns, television spots, posters, and so on. Of course, there are differences among parties in their capacity to use these tools, as there are differences between countries in their dimensions and demographic densities, which modify strategic political campaigns. Considering these differences, but also the Internet tools that were then proliferating, it became necessary to direct a good part of the political campaign to these new tools in order to place what would appear in every voter's mind as a personalized political message. Personalized political communication on the large scale we have seen in recent elections requires resources that are well beyond those commanded by campaign organizations built around individual candidates. This type of communication is instead pursued by wider "campaign assemblages" that include not only staffers and consultants but also allied interest groups and civic associations, numerous individual volunteers and paid part-timers, and a party-provided technical infrastructure for targeting voters.
Close scrutiny of how such campaign assemblages engage in personalized political communication leads one to challenge the dominant view of political communication today, viz., that it is a tightly scripted, controlled, and professionalized set of practices that primarily represses turnout and turns people off politics in its cutthroat pursuit of victory (Nielsen, 2012). This chapter seeks to establish the role of malicious use of artificial intelligence (MUAI) in political campaigns, which poses a threat not only to national but also to international psychological security, because its unethical use can create new situations without predictable control or treatment. Because AI is a new electronic tool, we are still not fully conscious of all its force and all its potential to harm the human mind and, as a result of this harm, society as a whole. An ill man, a victim of MUAI, can today become a criminal, a terrorist, or a source of mental problems for his friends, and all these consequences can appear as an effect of non-ethical political use of AI.

Artificial Intelligence as Game Changer in Politics, Politicians and Political Behavior from a Psychological Perspective

Like any new tool created by the human mind, AI can have a genuine use, such as improving our quality of life and helping the economy to strengthen its potential to develop new instruments useful for the same purpose (providing a better life). At the same time, as many philosophers have underlined since antiquity, some tools can be used in unethical ways, provoking bad results for human life and social stability.
By its force, AI becomes a real game changer in all areas where the human psyche forms an attitude of acceptance or rejection, and here politics represents an effective site for the application of AI "games." For AI, politics represents a wide field of application covering politicians and voters, human psychology and political campaigns; it can impact elections and the formation of new opinions between voting days, which needs to be studied and deeply understood. It is not only a question of whether AI could influence people through explicit recommendation and persuasion, but also of whether AI can influence human decisions through more covert persuasion and manipulation techniques. Some studies show that AI can make use of human heuristics and biases in order to manipulate people's decisions in a subtle way. Heuristics are shortcuts of thought, deeply configured in the human mind, which often allow us to produce fast responses to the demands of the environment without the need for much thinking, data collection, or time and energy consumption. These default reactions are highly efficient most of the time, but they become biases when they guide decisions in situations where they are not safe or appropriate. These biases can be used to manipulate thinking and behavior, sometimes in the interest of third parties (Agudo, 2021). The emergence of AI was a unique moment for all politicians, regardless of country. All the technologies until then, as well as the political strategies used for hundreds of years, had a pronounced reactive temporal characteristic, because they developed at a time when the operators of the politician's tools had approximately the same knowledge and, fundamentally, almost the same physical abilities.
No matter how important a news item was, it could reach recipients only after protracted physical operations, which involved setting the text in a newspaper, transporting the journal to as many cities as possible, and the recipients buying and reading these accounts. From the perspective of the political environment, this huge expenditure of effort could be completely useless, because only the populations of very large cities, and especially those who lived in the countries' capitals, could revolt quickly in accordance with the interests of different political leaders. Provincial tumult was also not easy to capture in the countries' capitals, because local specifics differed; the inhabitants located hundreds of kilometers from the governmental headquarters felt a deep spatial gap, including in their different political manifestations. Time passed, the number of newspapers doubled and then was almost overshadowed by the power of radio and television, but even in these cases the political environment was not completely satisfied, first because governments maintained control as strict as possible over certain messages that could be offered to the public, either by delaying them or by blatantly banning their appearance. For decades, dictatorships and authoritarian regimes preferred an unequivocal solution, namely a ban on private radio and television operators, so that the most common alternative to internal news came through radio and TV sources from abroad. For a significant number of countries, the acquisition of high-performance radio and TV receivers had been a difficult process, either for economic reasons (because poverty plays a very important role) or because simple devices allow governments easier control of the population. Only the great changes of the 1980s and 1990s led to the explosion of radio and television stations globally, and the information landscape of the population changed decisively.
However, this way of communicating political messages also creates problems for the political environment, because it does not allow completely honest or balanced competition: the most important and richest media sources exert the greatest influence on public opinion. Sometimes the editorial policy of these important sources of information can be useful to a government or an opposition party, depending on certain options or amounts of money offered, and the citizen is usually influenced by the messages that come continuously from one direction or another. Authoritarian regimes after the 1990s had to partially simulate freedom and allow the emergence of press sources operated by private individuals, but the regimes kept them under control, and when they were considered too critical of governments, financial or even criminal sanctions were imposed on some directors or popular journalists. The advent of mobile phones suddenly complicated the lives of politicians, because news could now reach any point in a country, directly and sometimes instantly influencing the public political game. Thus, the first "mobile phone revolution" took place in 2009 in the Republic of Moldova (the "first SMS revolution"), and since then authoritarian governments have understood that blocking mobile phone networks becomes mandatory in very tense times, in order to prevent strong protests capable of throwing them out of power. However, the mobile phone has remained only one part of the great technological leap in communication, because newly created Internet technologies have merged with this device. Once the Internet could be accessed by mobile devices, the political game underwent a global reset, and political actors have been forced to support the new technologies' development, implementation, and legal requirements (Dowd, 2022).
This new technological framework provides greater strength to its operators and implicitly to the political messages given, in both directions: toward their own parties and toward society. Among the new technologies that have developed in this interval, the most important is AI, and in terms of shaping the future of political competition it will play an exceptional role, if not the most important one. As new generations are born and grow up in the mobile Internet paradigm (Vogels, 2021), it is clear that AI will most likely become the most important tool in political campaigns in every country. We must emphasize, however, that today AI plays a triple role, little understood by society. First, some scientists and politicians consider it either a panacea for the country's problems or the supreme political weapon. Second, there are those who only vaguely understand what this technology could be; they are largely aware of it only through science fiction movies and books, in which higher intelligences become dangerous for the human species and the planet. Finally, there are those who consider AI to be a very important technology for many spheres of political, social, economic, and private life, but who try to maintain a sense of reality, weighing its capabilities against all the other means and tools used today. From this perspective, AI must be studied rationally, without sentimentality, analyzing its inherent capabilities, as well as its possibility of doing good or bad, in relation to the interests of the operators who will use it. It must be understood that any politician is ecstatic about the ability of AI to reach any device at any place on the planet. After all, the costs of election campaigns are reduced substantially, because this new technology achieves three things that were impossible using traditional political methods.
First, few people today understand the informational capacity of Internet technologies and especially that of AI (Larson, 2021). I need to underline that electronic devices can now store amounts of information that the human mind will never be able to comprehend, thus becoming fantastic libraries in terms of the level of resources, while occupying minimal space. This yields two major consequences in the sphere of political competition, which must be taken into account before any electoral moment (obviously, without being limited to it). The depository characteristic of AI means that it "knows" the different strategies and political operations that have taken place in history, and what is new to the politician's mind will, for an AI-enabled device, generally be simply a repetition of an older technique. Basically, AI will help the politician not only by recognizing the political techniques used against him or her, but also by providing historical examples of counterattack. I predict that in the coming decades equal levels of AI use will lead to almost equal confrontations, such as world chess championship matches. In this perspective, only the strength of the AI and differences due to natural skills and the sums expended by one of the parties will be able to create effective advantages. The second consequence is related to the quality of political acts in the upcoming decades, which will be a direct effect of politicians' intellectual level. Specifically, the use of AI will lead to the danger of a lowering of the intellectual, legal, economic, and administrative level of political groups (Pashentsev, 2021). If AI provides examples of good political and administrative practices, as well as techniques and methods of attacking or countering political opponents, why should a politician exert greater intellectual effort?
The problem will also arise because political activity is the only one that can be performed without a standard of study, thus offering the possibility that any person, regardless of academic training, can obtain a superior position in a local or national community based only on party affiliation. Let us not forget that most professions that lead to superior positions in the national and local administrative hierarchy have clear and high standards of education, but leaders and dignitaries are not bound by them. The question arises for any politician: when you have an instrument with the power of AI, why not let it do as much of your public duties as possible? Obviously, from this question to the designation of AI as the strategist and main tool of action in political and electoral campaigns is only a small step, which most politicians will take without hesitation. Thus, if AI can do so much, why should the politician still learn and improve? Another fundamental aspect that we must be aware of is that AI never gets tired and can lead a debate on social media forums for hours, and often, just as with a boxer, the person who is still standing at the end of the fight wins. The endurance of living beings is limited, and competing with an Internet device or profile (which in reality you cannot always recognize as an electronic one) is frightening from the start, and no one would willingly engage in such a debate. It is not impossible to imagine that certain political debates carried out over hours are in the end only a competition between the AI devices of political competitors. For all these reasons, there are major consequences for the psychological security of every person who owns devices connected to the Internet. In fact, there is a double menace for everyone, but only the second is targeted by political operations.
The first problem is human biology, which is not adapted to the whole complex framework of the twenty-first century (from global warming to the global job market, from economic discrepancies to the omnipresence of electronic devices, and so on), and from this point of view a lot of people will be mentally affected by AI without any specific political pressure being put on them. AI is a new tool, developed fast, at a time when the neurosciences are not yet able to explain all of the brain's functioning, meaning that some of its actions are still not fully understandable by people with average scientific skills. Here we cannot have a real solution, because creating an adequate balance between biology, the Internet, and AI needs years or even decades. It is necessary to have infrastructure that can be used by a larger part of the community outside the electronic domain: sport, theater, painting, and so on. The lack of such infrastructure will force people to stay mostly connected to the Internet, becoming psychological prisoners of cyberspace, where targeted AI actions against their minds are just a step away. Thus, I predict that weak economies, with small sport, cultural, and entertainment infrastructures, will become easier victims of any kind of AI operations run by politicians. The second menace is posed by the improper use of AI, transforming its excellent potential for educational, economic, industrial, and medical purposes into a weapon in the hands of amoral and immoral politicians. This type of use is described, in a delicate manner, as malicious, but in fact it will represent one of the main forms of political AI use in the coming years.
MUAI is able to change election campaigns and their results, but it will affect nations' futures too, favoring some competitors and breaking the main characteristics of a coherent society: human trust, public morality, solidarity and cooperation, the rule of law, and moral values. Why this danger? Because the human mind is not always moral, but this is not the place to present all the philosophical discussions of human nature. Human nature is sometimes explained by two famous expressions: homo homini lupus (man is a wolf to man) and "everything is allowed in love and war." About the relationship between war and politics, we must remember a famous definition given by Clausewitz, who said: "War is nothing more than the continuation of politics by other means … For political aims are the end and war is the means, and the means can never be conceived without the end" (von Clausewitz, 1832). In this case, the use of AI in political campaigns is just a matter of war, because even the general semantics of politics present the competition for power and achievement as a "struggle" or "fight," words with the precise significance of confrontation. In such a paradigm, the shift from the moral use of AI to a malicious use of it is just a small step that is easily made. The more important the position in a country, the more probable it is that AI will be used in a non-moral way, because the prize is high and brings many advantages for the winners; this non-moral way is described in the scientific literature as "malicious use." At the same time, we need to remember the continuous geopolitical competition among countries, which is likewise eternal and is conducted with every imaginable tool. In this competition, "war," "struggle," and "fighting" are not just words, because our planet's history has had bloody events in almost every recorded year.
In such a context, AI is just one way to compete among others, but its force can create a real new global hierarchy, as a Brookings Institution study suggests: "Whoever leads in artificial intelligence in 2030 will rule the world until 2100" (Indermit, 2020). If we agree with this sentence, it is obvious that geopolitical competition will use AI without any hesitation and in its full dimensions, including the malicious one. Political campaigns are often run with limited resources and time, which makes them more brutal than other types of business venture. With such a lack of time, when different messages can change poll results in a few hours, AI can provide plenty of useful data for political campaign strategists: to identify and target specific voters who will most likely vote for a candidate; to track the number of ads that have been shown in a given area, as well as what time they were shown; to create personalized campaign messages tailored to individual voters with the help of data gathered from social media posts and other sources; to identify what people search for online or how they spend their time; to predict voter turnout; and so on (Kiran, 2020). For example, the Trump campaign hired the firm known as Cambridge Analytica, which had accessed the accounts and profiles of over 87 million Facebook users by using AI tools; in India, during the Lok Sabha election, Prime Minister Modi used a hologram of himself to hold virtual rallies in several places, targeting different sets of people simultaneously. Having such data, like never before in all the history of political campaigns, presents a danger of MUAI: it is enough for just one political strategist to use AI in a non-ethical way, which would force his competitors to develop methods against such behavior, and from this it is just a step to the broader use of MUAI in politics.
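The first targeting task listed above, scoring which voters will most likely support a candidate, can be sketched in a few lines. This is a minimal, hypothetical illustration rather than any campaign's actual system: the voter features, weights, and threshold below are invented for the example, whereas a real tool would learn its weights from historical data.

```python
import math

# Hypothetical voter records: features a campaign might derive from
# social media and public data (all values here are invented).
voters = [
    {"id": "v1", "engaged_with_candidate_posts": 0.9, "turnout_history": 0.8},
    {"id": "v2", "engaged_with_candidate_posts": 0.1, "turnout_history": 0.2},
    {"id": "v3", "engaged_with_candidate_posts": 0.6, "turnout_history": 0.9},
]

# Invented weights standing in for a model fitted on past elections.
WEIGHTS = {"engaged_with_candidate_posts": 3.0, "turnout_history": 2.0}
BIAS = -2.5

def support_score(voter):
    """Logistic score in [0, 1]: estimated likelihood of supporting the candidate."""
    z = BIAS + sum(w * voter[f] for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# Target only voters above a chosen threshold (the "persuadable" list).
targets = [v["id"] for v in voters if support_score(v) > 0.5]
print(targets)  # ['v1', 'v3']
```

The point of the sketch is how little machinery such targeting needs once the data exists: a handful of behavioral features per voter is enough to rank an entire electorate.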
The use of MUAI in politics will be continuous, because only in monarchies are there lifetime positions; all other positions are open to competition. MUAI will appear in two dimensions, both influenced by the quality of countries' science and by their financial strength. Basically, scientifically and financially stronger countries will achieve almost clear regional, continental, and global supremacy, but these capacities will not exclude failed operations and efficient counterattacks. The same model will be replicated at the national level, where stronger and richer parties will be able to defeat other competitors in long-term political campaigns. We must note two important differences between the dimensions of MUAI. First, at the national level, it is easier to defeat a rich party, because the selection of a candidate can easily fail under other parties' attacks, and any strong leadership crisis can consume a big part of a party's financial resources. The second difference is a result of a country's geopolitical position: for some, national politics remains just national, but for the main regional, continental, and global powers, any political campaign will attract international interest and, in some cases, MUAI from other governments. AI is changing the human mind by the simple pronunciation of its name. This force of change helps AI to occupy a strong position from which to influence the political game and the behavior of politicians and voters, in such ways that psychologists will be asked to explain some of the politicians' actions. In such a new "equation of forces," AI will start not just to influence the human psyche but also to replace some of the old instruments used in political campaigns, becoming one of the key factors in any political strategy of the coming decades.
Malicious Use of Artificial Intelligence in Political Campaigns: The Rising Threats

If today's elections and political campaigns are something natural, the ways they are conducted raise deep concerns in many countries. Too many examples of fraud are known from school history handbooks, and too many powerful new electronic tools have been created in recent decades, with a huge impact on human behavior. With this in mind, any voter, and any political strategist too, can try to answer this question with caution: can AI be used in political campaigns in a proper manner?

What are the purposes and values of a political campaign? Who does it serve, and what should we expect from it? These are not idle questions. Thanks to huge infusions of money and technology in the last three decades, a modern political campaign has become either an impressive juggernaut of optimized technology delivering relevant messages to citizens or a cold, clinical machine that has lost touch with the beating heart of citizen democracy. And in the wake of elections that saw a well-funded, well-organized, and well-planned campaign lose to an ad-hoc, upstart, chaotic outsider, every conventional wisdom about the correct way to organize and manage a campaign is rightly being questioned (McGuire, 2019, p. 5).

In such a paradigm, AI has its own strong position, and that position will strengthen over the coming years, no matter the endless pandemic and the conflicts in Europe or on other continents. This growth is possible because political strategists now need more data on consumer demographics, behavior, and attitudes, including health and location data given by smartphones, smartwatches, and computers of any kind (Bartlett et al., 2018). Elon Musk's purchase of Twitter can be understood as an expression of this trend, adding new questions about billionaires' role in national policy creation.
While Donald Trump's campaign during the 2016 US election received a lot of media coverage for its use of big data analytics, similar approaches have been used by a number of campaigns in recent years (Cadwalladr, 2017). During the EU referendum in the UK, Dominic Cummings estimated that Vote Leave ran around one billion targeted adverts in the run-up to the vote, mostly via Facebook. Like Trump's campaign, they sent out multiple different versions of messages, testing them in an interactive feedback loop. In the 2017 UK general election, the Labour Party used data modelling to identify potential Labour voters and then target them with messages. Through the use of an in-house tool called "Promote," which combined Facebook information with Labour voter data, senior Labour activists were able to send locally based messages to the right (i.e., persuadable) people (Waterson, 2017). Elections are becoming increasingly "datafied," with advertising and marketing techniques being offered by a network of private contractors and data analysts who supply cutting-edge methods for audience segmentation and targeting to political parties all over the world. Many of these techniques were first developed in the commercial sector; as pointed out in a brief analysis of digital marketing in political campaigns, "electoral politics has now become fully integrated into a growing, global commercial digital media and marketing ecosystem that has already transformed how corporations market their products and influence consumers" (Chester & Montgomery, 2017). In this new "media painting," elections could be seen as merely a competition over who owns more data, meaning that a good campaign script goes missing and political doctrines are replaced by simple information accumulation.
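The "interactive feedback loop" described above, sending out many message variants and shifting the budget toward whichever ones draw responses, is in essence a multi-armed bandit problem. A minimal epsilon-greedy sketch shows the mechanism; the response rates below are invented stand-ins for real voter reactions, and production systems use far more sophisticated variants of this loop.

```python
import random

random.seed(42)

# Invented response rates for three message variants (unknown to the loop).
TRUE_RATES = [0.02, 0.05, 0.11]
EPSILON = 0.1        # fraction of impressions kept for exploration
WARMUP = 60          # first impressions are spread round-robin over variants

shown = [0, 0, 0]    # impressions served per variant
clicked = [0, 0, 0]  # positive responses per variant

def pick_variant():
    """Epsilon-greedy choice: mostly the best variant so far, sometimes a random one."""
    if sum(shown) < WARMUP:
        return sum(shown) % 3          # guarantee every variant gets early data
    if random.random() < EPSILON:
        return random.randrange(3)     # keep exploring
    rates = [clicked[i] / shown[i] for i in range(3)]
    return rates.index(max(rates))     # exploit the current best

for _ in range(5000):
    v = pick_variant()
    shown[v] += 1
    if random.random() < TRUE_RATES[v]:
        clicked[v] += 1

print(shown)  # impressions drift toward the strongest variant over time
```

Run at the scale Cummings described, around a billion adverts, even tiny differences in response rate between variants are detected and exploited quickly, which is what makes the loop so effective.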
In such a new world, smartphones have come to play their own role in political campaigns, because their small dimensions, compared with a PC, simplify exposure to video and messages while offering less opportunity to check the accuracy of the data. In fact, the smartphone interface amplifies the document or link you currently see, keeping in the shadows the possibility of being distracted by other films, add-ons, links, documents, and so on. Being easier to keep in pockets, smartphones began to win the competition in device sales: since 2014, more than 1 billion smartphones have been bought every year (Statista, 2022), forcing political strategists to adapt their campaigns to them. For political campaigns, smartphones are very important, because the use of smartphones with an "always-on" Internet connection has increased the challenge of managing interactions, intended interpretations, mutuality of information, and eventual interpretations; users are always available for typing texts, recording audio files, or browsing the web even when other people are physically co-present (Yus, 2021). The most important thing for the complex relations between AI, smartphones, and political campaigns is the localization of the user. Every minute, a person can be located through metadata, and as a result AI has been used heavily in political campaigns to exploit this possibility and provide targeted messages: if someone is located in a dangerous area, it will provide messages about strong enforcement and new legal proposals to punish criminals; if someone is consulting his smartphone in a hospital, a political message about healthcare improvement will arrive; and so on.
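The location-driven targeting just described reduces to a simple rule table: a real system would infer the context from device metadata, but the mapping from context to message is straightforward. The contexts and message texts below are invented for illustration:

```python
# Hypothetical mapping from an inferred location context to a tailored
# political message (contexts and texts are invented for this sketch).
MESSAGES = {
    "high_crime_area": "Our program: stronger enforcement and new laws to punish criminals.",
    "hospital": "Our program: real healthcare improvement, starting this year.",
    "university": "Our program: lower tuition and better student housing.",
}
DEFAULT = "Our program: a better life for every citizen."

def select_message(location_context: str) -> str:
    """Return the ad text matched to the voter's inferred surroundings."""
    return MESSAGES.get(location_context, DEFAULT)

print(select_message("hospital"))       # the healthcare message
print(select_message("mountain_trail")) # falls back to the generic message
```

The psychological force of the technique comes not from the code, which is trivial, but from the metadata feeding it: the same voter sees a different party depending on where they happen to be standing.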
No matter where you are, a good AI tool used in a national political campaign can bring advantages to politicians; the distinction between local and national elections is very important here, because the legal domicile linked to one's vote is not always the same as the place of that person's daily activities. Smartphones have a complex effect on the human brain, increasing mental laziness, reducing available cognitive capacity (Ward et al., 2018), and affecting reading comprehension (Honma et al., 2022). All these characteristics are very important not only for daily working efficiency but also in politics, because the lack of comprehension creates strong opportunities for any kind of manipulation, the political kind included. In an article published in the MIT Technology Review, a former Google manager clearly affirmed that smartphones are weapons of mass manipulation (Metz, 2017), and it is easy to understand that this broad effect applies to politics too. In fact, smartphones are a very useful instrument for promoting fake news in politics. Their interface, as I mentioned before, favors the single-message view, and this characteristic is fundamental for political campaigns, because politicians want their messages to arrive alone and to be seen without any interference from an add-on, video, or document. In the case of the smartphone, all political tricks and any kind of malicious behavior find a good platform to act through the biology of eyesight, whose attention is much more concentrated when the main image is small. The size difference between PC and smartphone screens forces the human eye to concentrate on a different scale, changing the force of the message in the same proportion. In all honesty, we must admit that the smartphone is a blessing for commercials, marketing, and any kind of strong public message, with or without political meaning, but it is also a strong provocation for the human brain, because its natural capacities are not developed to face such an avalanche of messages. The lack of cognitive capacity "obtained" as a result of smartphone usage, as mentioned above, appears because the interface forces users to concentrate on a single message for many hours, contrary to natural human evolution, which was directly influenced by the capacity to maintain a wider span of awareness of one's environment. The smartphone-social media connection has built on this history. But smartphone politics has also catalyzed something new. Our constant digital connection and access to vast networks facilitate new ways of doing politics. For one thing, our phones allow us to bypass mainstream media, which is often selective in its coverage (Aschoff, 2021). In this paradigm, we must note that the technological progress of the smartphone also attracts the growth of deepfakes. False information used for the purpose of changing the human mind has become a daily occurrence, and in the coming years it will only increase in importance, as political campaigns will continue to exist. Today our world, from a legal, political, and economic perspective, is too complex for a single political campaign, forcing strategists to create more sub-strategies to address such social differences. All educational systems developed in the last century, and mainly after World War Two, accent the power of the individual, opposing his interest to that of others. This process has created more atomized societies, with more independent voices and more individual claims on governments, local authorities, and politicians. The huge number of independent citizen voices forces political strategists to adapt to all claims, which is objectively impossible.
In this way, politicians lose contact with people, and only the appearance of AI has helped them solve this problem. Before AI, a politician and his team traveled through cities, villages, and streets to convince people of their message, without any guarantee that the message would reach them. Internet and mobile technologies helped politicians change two main characteristics of political campaigns. Firstly, the dialogue becomes more personalized (one to one), with messages filtered by no other people. Secondly, politicians now have a clear image of political options and social needs without the intervention of lobbying groups, and so can develop more effective campaigns. If we look carefully at these two characteristics, we observe the need for AI in political campaigns, because the diversity of electors must be matched by an equal number of propagandists. Politicians' staffs are not that big, but the need to offer a diversified message to citizens is strong, so it is necessary to develop an electronic instrument to play this role. That instrument must be intelligent and able to adapt its messages and dialogue to any kind of answer or claim; being electronic, it is artificial, and if we add these characteristics together, we have AI. Today's political competitions are complicated by huge "today libraries"—digital and physical—which remember every action, declaration, or failure of politicians. A good "today library" offers easy access to its information, forcing politicians to be more cautious than ever, transforming politics into a continuous campaign and forcing the creation of new electoral strategies. Because of this, political competition is in fact uninterrupted, and AI will be used in virtually the same way.
Operation Cambridge Analytica was only an expression of a reality that any electoral strategist understands very well: because political options can change very quickly—sometimes within days or hours—it is necessary to develop a deep knowledge of potential voter activism. In this sense, any political party that wants to win elections in the coming years must invest in services that monitor the options and ideas of the population through AI. More important than gasoline prices for the cars of "door-to-door" political campaigns will be the strong access offered by AI to voters' psychological profiles and metadata, which create a better image of human behavior than a normal brain can imagine or accept (Duhigg, 2012). Getting good information about voters' mental profiles is very important, but a political strategist must launch a campaign that makes people vote, not just stay on the sofa criticizing political offers. Every person is different, and AI helps political strategists understand not just the big segments of voter communities but even the smallest differences among them. Thus, it is mandatory not just to know about these differences but to build messages able to make people trust ideas enough, and adhere to them enough, to go and vote. But here human psychology underlines that political messages cannot have the same meaning for every voter, and from this perspective it is also clear that the perfect politician does not exist, because his messages cannot be totally accepted by the whole voting group. Reaching a stronger result in voters' minds becomes more complicated than ever when a voter's psyche is strongly influenced by an informational ocean. It is not possible today to ignore the ever stronger human pride and its consequences for the formation of individual opinion, including in politics; more pride means much stronger opinions and more loyalty to them.
Given such strong mental relations between voters' ideas and their loyalty to those ideas, a political campaign must act differently, helping voters to feel important and to see the candidate's program as deeply connected with their own opinions. Objectively, this purpose can be achieved only by individually personalized campaigns, because in a public campaign politicians can give only general messages. To fulfill this goal, a politician needs a strong human presence—as in a traditional "door-to-door" campaign—and a lot of money to send campaign teams all over the country. Even with both—money and a human campaign infrastructure—it is not possible to occupy a voter's mind 24 hours a day, meaning that effectiveness is not fully assured by campaign staff visits and contact with voters. The Internet world is mainly the mobile Internet (i.e., the smartphone space), which has changed what constitutes an effective political campaign. Before the appearance of the smartphone, people spent some hours on television, some hours working, some hours reading newspapers, some hours doing sports, and so on, with clear limits among these activities. The creation of the smartphone brought the Internet's possibilities everywhere: from jogging to the daily walk to one's job, from watching the news to commenting on social media, and so on. Thus, the smartphone changed almost all human activities—even sleep is affected—making political strategists prisoners of this new reality: if they are not on the smartphone often enough, their candidate will disappear from the public space. Having a need, we also have the direction of action: politicians need an instrument or technology to reach voters' smartphones efficiently. To be "efficient" in this context means not just spreading political messages but sending targeted messages that are totally personalized.
But to provide totally personalized messages, you first need to know the target public. The result—for the needs of the political strategist—will be a tool able to answer two specific requests: (a) 24 hours/day information about voters' ideas and behavior and (b) the chance to enter into contact with voters in a persuasive way. Of course, a persuasive tool must be able to convince a voter for more than one day and in fact change voters' political options. Briefly and honestly, we need to underline that AI development is very dependent on political needs, at least in the twenty-first century. An eventual history of technology, to be written in the twenty-second century, will need to find the correct proportion of AI development between political needs and commercial interests, but, in my opinion, the political part will be almost equal to the commercial one. Speaking about the tasks AI must perform, we must also understand that money influences AI development. Stronger AI—paid for by some politicians or parties—will bring more accurate information to political strategists, and it will create a big gap in political competition. This financial difference will be very important at the international level, because the big technological powers will use their advantage against any competitors, favoring some politicians and blocking others, in a complex effort to influence political leadership abroad. More money, more AI political actions. Two important things underline the importance of money in political AI operations. Firstly, AI has not completed its development; its evolution is just at its beginning. Today we are used to many technological peaks, and the differences between producers are not so great—from airplanes to mobile phones, from medicine to agriculture, and so on—but AI is a different situation.
One area of AI still under development is machine learning, and rich countries are leading the race to apply it at almost every commercial and administrative level. So to speak, we are still expecting surprises in AI development, and the race is not even at its midpoint, offering more occasions for competition and alliances. Money will be very important here to pay the cost of development and experiments, with a sad conclusion: in the global AI race, the poor countries will remain weak, following the technological leaders very slowly, with a delay of more than a decade. This first characteristic will be important in geopolitical competitions and global hierarchies, reaching all human activities: politics, commerce, the military, sport, and so on. Secondly, an important consideration is the continuous character of today's political campaigns, because an ocean of information and news forces politicians to be present in voters' attention every day. This, again, costs a lot of money, because political strategists must create messages able to reach and influence the human mind. To "create" means paying someone to think, and to "produce" means paying, first, for AI instruments to collect information about voters' main concerns and, second, for spreading effective political messages after a while. Again, the money competition between parties and politicians creates clear expectations for national political competitions: a poor party will usually have low public reach and will hardly be able to influence the political arena. It is true that in national competitions there are specific topics able to quickly crush a whole party or group of leaders, but such substantial changes do not happen every day. Thus, a rich party will be able, first, to buy strong AI tools and use them against competitors and, second, to promote specific regulations.
AI use will favor the big parties, and in this case it can be imagined that some regulations for "social research"—a nickname for collecting data about national voters using AI—will be set in place. During political competition, MUAI becomes very important. Why? Because in a rule-of-law twenty-first century, many of the harsh differences between political parties have disappeared. The differences made by political platforms are a topic of deep analysis by specialists, but the general public knows only the basics of the parties' visions. No matter the differences in the parties' visions on taxes, money distribution, the social system, criminal justice policy, and so on, common people have their beliefs, and their vote is expressed according to them. In such a situation, AI becomes fundamental to knowing these beliefs and to changing them in a way that is controlled by the AI operator. A continuous campaign that merely underlines a political program is not sufficient, because human nature is not always rational. Political competition has almost no boundaries, and thus it is necessary to add both emotions and rationality to the same strategy. But there is an important difference between these parts of political strategy: the rational pillars of political programs have fewer adherents, while emotional messages reach more people in a shorter time and are best suited to mobilizing them in a partisan direction. As many researchers have proven—economists, in truth, have been more preoccupied with this topic (e.g., Ariely, 2012)—human behavior is far from predictable or totally pragmatic, and emotions have a strong position in human actions. Thus, if emotions are much stronger than average human thinking, politicians and their strategists must generate and control them (i.e., emotions) to achieve a specific political result, keeping in mind that such actions can in fact simply be manipulation.
Positive emotions are difficult to create, and, in any case, they must be personalized according to the AI operation's aims. But the political strategy handbook also underlines that positive emotions must appear very close to election day. Meanwhile, political campaigns will face a different typology of messages related to the main facts presented in the media. Because the media today is mainly interested in dysfunctional realities, the political campaign will follow the repair of these problems. And because a dysfunctional society has human causes, politics will follow that line, presenting and underlining the leaders who were not able to fulfill the legal standards. AI use will be dedicated to finding voters' opinions about facts, responsibilities, and ways to correct situations. After AI operators get the answers to these three questions, they will send them to political strategists to create messages for the politician who pays them. To create strong voter opinions, political strategists will need to use AI to create emotions and to implement an effective movement for the politicians who paid them. In fact, a recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviors and use them to influence human decision-making (Dezfouli et al., 2020). But strong voter opinions need strong messages, which are able to disrupt the human mind. AI will help political strategists create messages able to respond to any person's thoughts, leading him to love and/or hate a specific politician, party, or ideology. To succeed in such operations, it has become necessary to use AI in a malicious way, because a positive campaign is not able to create or increase emotions that last for a long time.
Political needs require a loyal voter, but today the informational ocean can quite easily change people's minds in any direction, provoking nightmares for political strategists. Thus, they need to solve this equation—loyal voters versus an abundance of opinions, ideas, and ideologies—in an effective way, and AI capability will be fundamental for this purpose. The advent of mobile Internet and smartphones will force political strategists to use AI in such a way as to occupy most of the smartphone user's time. To achieve this "screen occupation," it will be necessary to have strong messages able to hold the human mind captive. Such a result—holding a voter's mind captive—will need messages able to make a lasting impression, and this goal also encourages the use of MUAI, because the wish to strongly impress people is common not only to politics but also to commercial activities. This means that the human mind consumes more strong messages daily than ever before in mankind's history, affecting people's psychological security. Scientists would prefer to deal with highly educated people, able to recognize marketing techniques and other forms of manipulation, but the reality is far from this. The abundance of information, daily problems from job, school, family, and so on, and other stress factors distort the human mind and make it vulnerable to many political tricks. From such a perspective, psychological security becomes very important, and because of these situations it becomes a target of MUAI. Political campaigns have—after some years—a regular concluding moment: the elections. At that moment, all mistakes are either forgotten, in the case of a win, or remembered, in the case of a loss. Elections are not only the possible conclusion of political careers; they are mostly the result of political strategists' competition and rely on their techniques.
It is very difficult, from a legal point of view, to prove that a political campaign has been manipulative—and it is important to note that only the winner's techniques are in the public eye, while leading politicians will act with all their administrative tools against any investigation or campaign: MUAI is inevitable. The lack of legal sanctions will in fact favor MUAI to levels almost unimaginable today, but somehow understandable to citizens. For example, a study organized by the digital research section of the Australian National Science Agency reached some interesting conclusions: "Nearly half of respondents (45%) are unwilling to share their information or data with an AI system. Two in five (40%) are unwilling to rely on recommendations or other output of an AI system. Further, many Australians are not convinced about the trustworthiness of AI systems, generally accept (42%) or tolerate AI (28%), and only few approve (16%) or embrace (7%) it … Overwhelmingly (96%), Australians expect AI to be regulated and most expect external, independent oversight. Most Australians (over 68%) have moderate to high confidence in the federal government and regulatory agencies to regulate and govern AI in the best interests of the public. However, the current regulation and laws fall short of community expectations" (Lockey et al., 2020). People today are conscious of the need for AI regulation. The complexity of AI's development is globally understood, but every wise citizen is also conscious of the technological gaps between countries. The danger represented by uneven AI progress is real, and it will be expressed mainly in geopolitics, but also in internal political competitions and campaigns. In the geopolitical sphere, it is impossible to believe that Tanzania or Mozambique will be able to develop AI tools stronger than South Africa's, for example, just as Bulgaria and Albania will be in the same position in comparison with Germany.
AI is part of the most important human-industrial revolution—the fourth one (Schwab, WEF, 2016)—and every country will approach it with a different level of economic strength and a predictable result: the economic gaps of today will be conserved in the next decade, as a Brookings Institution study revealed. In fact, some geopoliticians today foresee a future of harsh competition between the USA and China (Wang & Chen, 2018; Hass & Balin, 2019; Allison & Schmidt, 2020; Hass, 2021; Sullivan, 2021; Haenle & Bresnick, 2022), meaning that AI tools will be used in various ways, from genuine to very malicious, with complicated consequences for the whole planet. More interesting will be to examine the use of AI—especially its malicious use—in national arenas. The pandemic and 5G technology represent a good topic for analyzing the MUAI perspective on national legislation and administrative practices. Many people believe that AI is used by governments against their own citizens (Pennycook & Rand, 2018; Roozenbeek & van der Linden, 2019; Evanega et al., 2020)—the worst case of MUAI—and that these two areas (the pandemic and 5G technology) are part of a "conspiracy of the rich against the poor," of "human selection," and so on. In this second case, the problem is represented by national rulers and the legal force they use to regulate society—they create the laws, and people are merely subject to them. We must admit that some truth exists in this last phrase, as I underlined before regarding the lack of legal control over political campaign tricks: national regulators must create laws able to punish political parties that entrench themselves in specific superior administrative positions, and here everyone who has legitimate doubts could benefit from remembering the meaning of "egoism".
Relations between national political leaders—who are interested in being reelected—and the constitutional power to create laws can be used to promote political parties' strength above citizens' strength, if ethical considerations are not strongly adhered to. When you, as a politician, have the right to decide people's position inside society, you can be a danger to all citizens, as many reports have revealed over the last few years. In fact, global rankings of freedoms—freedom of speech (Freedom House, 2021) and of the press (Reporters Without Borders, 2021)—and of rights (World Justice Project, 2021) have declined in several countries, and the quality of governance has declined too, even though the private sector has created and used technologies that have improved the population's living standards. Most of the rights and freedoms questioned and reduced in recent years have been victims of the executive branch, which has benefited from the decline of the representative institutions chosen by citizens (Mennicken & Salais, 2022). The need for ethics in the political space is now at its peak, because MUAI will harm psychological security and human rights, making it possible to create the strongest dictatorships ever. In the face of governmental MUAI, citizens have almost no chance to win, and the final result is a totalitarian society, as some wise writers warned us many decades ago. The rising threats of MUAI in political campaigns have started to be seen in many countries, indicating a general lack of trust in today's politicians. From such a perspective, people may come to refuse to see or accept even genuine AI use in any domain. The correct balance between good and bad in AI use in all spheres can be a real provocation for legislators and voters alike, but there is no guarantee of an easy and profitable result for all.
Regulating Artificial Intelligence Use: Preventing Malicious Use of Artificial Intelligence and Protecting Psychological Security

Regulating AI use is very complicated because AI's evolution is not finished. Moreover, AI has great psychological potential, while most legislation has the role of organizing human activities, not the human psyche. It is also a strong obstacle to regulation that geopolitical competition will keep some AI developments hidden from the public, reserved for the country's own interests. In such a case, the rules cannot be totally prescriptive, demanding great skill from legislators. As public law specialists (e.g., Balan, 2008) have underlined, creating rules for nations is complicated not only because of specific legislators' interests but also because of differences between legal systems: harmonizing these systems to produce a global rule in any area (social, political, economic, etc.) would require the most difficult cases to change some of their strong national peculiarities, even linguistic ones. The purpose of AI regulation must be the enforcement of ethics, because a strong pillar against MUAI's ability to affect the psychological security of people and countries is the moral and ethical dimension of life. However, it is necessary to underline that a very large number of ethical principles, codes, guidelines, and frameworks have been proposed over the past few years. There are currently more than 70 recommendations, published in the last 3 years, on the ethics of AI alone. This mushrooming of documents generates inconsistency and confusion among stakeholders regarding which one may be preferable. It also puts pressure on private and public actors—those who design, develop, or deploy digital solutions—to produce their own declarations for fear of appearing to be left behind, thus further contributing to the redundancy of information.
In this case, the main unethical risk is that all this hyperactivity creates a "market of principles and values," where private and public actors may shop for the kind of ethics that is best retrofitted to justify their current behaviors, rather than revising their behaviors to make them consistent with a socially accepted ethical framework (Floridi, 2021). Concentrating on ethics alone is not useful, because ethics has a big vulnerability: the lack of effective punishment in cases of ethical violation. Ethical sanctions are almost always moral, and politicians seem unimpressed by this; citizens also have the full right to ask for effective punishment of ethical transgressions, because social satisfaction appears when rule violators suffer for their behavior. In the case of psychological security, AI regulation must prevent MUAI—or, at least, diminish it to a controllable level—and, for this to take place, legislators must find the correct proportion between specific legal sanctions (prison, fines) and ethics. I shall examine a brief form of AI regulation to protect psychological security at the international and national levels, underlining its main characteristics and trying to present the basics of the main mandatory rules that must be included. This chapter's dimensions do not allow me to present a whole Code of AI, but it is necessary to underline the dual legal nature of such regulations—they must address public law and private law, which is not very common for national legislation. Firstly, international regulation of AI will be very difficult to adopt and even more difficult to implement. The lack of sanctions for international rules will result in non-enforcement—for example, if a country uses AI in a malicious way, what administrative force will come to apply sanctions against that government? This complicates the jurist's work.
Basically, the jurists who will create international rules on AI to protect psychological security—or, in fact, any other legal or moral value—will be forced to base their work on principles and ethics. The main principles that need to appear in international AI regulation include the transparency of AI use; the predictability of AI use; the security of AI use—both for people and for its direct users; the protection of human rights; citizen participation in national AI regulation; equality between citizens and institutions in AI use and AI applications; the impartial use of AI; the protection of humanity during AI use; and the self-restriction of AI use to genuine directions. For psychological security, it is very important to have principles that underline the need for AI users' self-control, because AI development can create more opportunities to harm the human psyche, and operators can always cross the line between legal and illegal, ethical and unethical. A complex international AI regulation might include a Higher Court of Justice just for such cases, but, realistically, I must admit that there is very little chance of seeing its creation soon; at least for this decade, the answer is negative. At the same time, no one can predict with high accuracy the legal tendencies of the next decades, and what today seems almost impossible might become a goal of the main technological powers. Some international regulations of AI use will need to establish some type of consultation between countries, precisely to prevent—or to try to resolve—MUAI that can harm psychological security. But because of the lack of sanctions—an issue for many branches of international law—AI regulations will not progress much in effectiveness, at least in this decade.
Only after AI use has established a stable international order will it be possible to have more precise and effective international regulations, with a specific "right of police" exercised by the superior technological powers over the smaller ones—but almost without any limits applied to the powers themselves. Regulation will be more effective at the national level of AI use, because governments have almost complete power to punish any rule violation. Keeping in mind the possibility that even a government may use AI maliciously against its citizens, I shall present here the main parts of national AI regulation, underlining the dimension of psychological security protection. First, such AI legislation should be adopted after a wide process of consultation with citizens, universities, lawyers, judges, prosecutors, companies, and civil society. Merely formal consultations would produce mistrust in governmental intentions, which might create more problems. Secondly, national legislation on AI use will be more effective and more administrative than international regulations. Here governments will be able to follow its implementation, and an effective partnership with society will bring benefits for all parties involved in regulating AI's creators. At this level, AI regulations can be created in deep connection with Administrative Procedure Codes and with any other law that organizes national administrative structures, because it will then be necessary to create a national administrative institution to regulate AI use. The principles of AI use at the national level will be much the same as those presented for international regulation. Here we must underline the necessity of creating and adding a "principle of proportional use of AI," because AI use must be adequate to its purpose, not putting more pressure on the human psyche or violating psychological security.
Because of the characteristics of national law, it will include two other principles: "the legality of the regulations," which needs to apply to any kind of AI use, and "the principle of public interest in political AI use." The commercial branch of AI regulations will include some principles related to the profit needs of business, but psychological security is regulated by public (administrative) law, and it requires a strong mention of the public interest in political and administrative AI use. National AI regulations will create a specific administrative structure with the purpose of monitoring AI users' practices. Its role must be close to that of an administrative structure that monitors concentrations of commercial firms and collaboration between companies. A specific regulation to prevent MUAI must also be included in national legislation on political parties and in election laws. The prohibition of MUAI must clearly mention the sanctions applicable to politicians and parties who violate these rules, from fines to prison and even the dissolution of the political party. Because the violation of psychological security by MUAI can provoke social explosions, even the fines for such behavior must be substantial, precisely to prevent even small cases. At the same time, there is also the potential problem that governments can use such big fines to eliminate strong opponents, and this administrative behavior is not just a theoretical possibility but a very real one. To address this, AI regulation must include some restrictions even on governments, preventing them from developing "the appetite for demolishing the rule of law and democracy." Of course, at the national level, a specific regulation must be created for justice claims against MUAI.
A specific functional competence (for example, sending such cases to the Supreme Court of Justice) can underline to any AI user that violating psychological security through MUAI in the electoral arena is a very serious crime, and it must be punished in a way that impresses the whole country. Court decisions should be published in official journals, underlining once more the importance of psychological security and the legal consequences of MUAI. Finally, it is necessary to teach law faculties, students, and prosecutors about MUAI in the political arena. For this, it is not enough merely to create new legislation; it is also necessary to create specific, independent courses in master's and postgraduate programs.

Conclusion

To many commentators, AI is the most exciting technology of our age, promising the development of intelligent machines that can surpass humans in various tasks, create new products, services, and capabilities, and even build machines that can improve themselves, perhaps eventually beyond all human capabilities. The last decade has witnessed rapid progress in AI based on the application of modern machine learning techniques. AI algorithms are now used by almost all online platforms and in industries ranging from manufacturing to health, finance, wholesale, and retail. Government agencies have also started relying on AI, especially in the criminal justice system and in customs and immigration control (Acemoglu et al., 2022). But AI is not available to all countries to the same degree, and this gap might create many problems in the coming decades; because geopolitical competition is eternal, its actors are not always willing to act according to the principles and rules of international law. 
The same problem is met at the national level, with some specific differences, because here governments can create more dangers for people through MUAI, violating their psychological security to such a degree that social explosions can appear or strong totalitarian regimes can be created. Within this paradigm, my text has tried to express some concerns about MUAI in today's political competitions. A political campaign today becomes a continuous struggle among politicians, and their strategists are forced to diversify their voter-attraction techniques. For such purposes, AI development appears as a gift from God for political strategists, and its benefits are huge, from access to voters' psychological profiles to a stronger message being spread to citizens. The future of political campaigns will be influenced by AI capacities, but at the same time, AI operators can become vulnerable to many temptations and even to less desirable behaviors. Having these facts and dangers in mind, I have tried to describe why we must create a legal system, at both the international and national levels, able to prevent MUAI and any violation of psychological security. For sure, today's ideas will change in time, and in 20 years some of my concerns will have eased or worsened. A legal response is important not only for legislators but also for AI operators, because they need to know not only lawyers' concerns but also what regulatory directions are being discussed today. This can prevent mistakes or behaviors able to harm the human psyche, affecting psychological security in the political campaign arena.

References

Acemoglu, D., Ozdaglar, A., & Siderius, J. (2022). A model of online misinformation. Retrieved May 19, 2022, from https://siderius.lids.mit.edu/wp-content/uploads/sites/36/2022/01/fake-news-revision-v10.pdf
Agudo, U. (2021). The influence of algorithms on political and dating decisions. 
Retrieved May 19, 2022, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8059858/
Allison, G., & Schmidt, E. (2020). Is China beating the U.S. to AI supremacy? Retrieved May 19, 2022, from https://www.belfercenter.org/sites/default/files/2020-08/AISupremacy.pdf
Ariely, D. (2012). The honest truth about dishonesty: How we lie to everyone—Especially ourselves. HarperCollins.
Aschoff, N. (2021). The smartphone society: Technology, power, and resistance in the new gilded age.
Balan, E. (2008). Institutii administrative (Administrative institutions). C.H. Beck.
Bartlett, J., Smith, J., & Acton, R. (2018). The future of political campaigning. Retrieved May 19, 2022, from https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf
Cadwalladr, C. (2017). British courts may unlock secrets of how Trump campaign profiled US voters. Legal mechanism may help academic expose how Big Data firms like Cambridge Analytica and Facebook get their information. Retrieved May 19, 2022, from https://www.theguardian.com/technology/2017/oct/01/cambridge-analytica-big-data-facebook-trump-voters
Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Retrieved May 19, 2022, from https://doi.org/10.14763/2017.4.773
Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Retrieved May 19, 2022, from https://doi.org/10.1073/pnas.2016921117
Digital around the world. (2022). Retrieved May 19, 2022, from https://datareportal.com/global-digital-overview
Dowd, R. (2022). The birth of digital human rights: Digitized data governance as a human rights issue in the EU. Palgrave Macmillan.
Duhigg, C. (2012). The power of habit: Why we do what we do in life and business. Random House.
Evanega, S., Lynas, M., Adams, J., & Smolenyak, K. (2020). Coronavirus misinformation: Quantifying sources and themes in the COVID-19 'infodemic'. 
Retrieved May 19, 2022, from https://allianceforscience.cornell.edu/wp-content/uploads/2020/10/Evanega-et-al-Coronavirus-misinformation-submitted_07_23_20-1.pdf
Floridi, L. (Ed.). (2021). Ethics, governance, and policies in artificial intelligence. Springer.
Freedom House. (2021). Freedom in the world, 2021: Democracy under siege. Retrieved May 19, 2022, from https://freedomhouse.org/report/freedom-world/2021/democracy-under-siege
Haenle, P., & Bresnick, S. (2022). Why U.S.-China relations are locked in a stalemate. Retrieved May 19, 2022, from https://carnegieendowment.org/2022/02/21/why-u.s.-china-relations-are-locked-in-stalemate-pub-86478
Hass, R., & Balin, Z. (2019). US-China relations in the age of artificial intelligence. Retrieved May 19, 2022, from https://www.brookings.edu/research/us-china-relations-in-the-age-of-artificial-intelligence/
Hass, R. (2021). How China is responding to escalating strategic competition with the US. Retrieved May 19, 2022, from https://www.brookings.edu/articles/how-china-is-responding-to-escalating-strategic-competition-with-the-us/
Honma, M., Masaoka, Y., Iizuka, N., Wada, S., Kamimura, S., Yoshikawa, A., Moriya, R., Kamijo, S., & Izumizaki, M. (2022). Reading on a smartphone affects sigh generation, brain activity, and comprehension. Retrieved May 19, 2022, from https://www.nature.com/articles/s41598-022-05605-0
Indermit, G. (2020). Whoever leads in artificial intelligence in 2030 will rule the world until 2100. Retrieved May 19, 2022, from https://www.brookings.edu/blog/future-development/2020/01/17/whoever-leads-in-artificial-intelligence-in-2030-will-rule-the-world-until-2100/
Kiran, V. (2020). Artificial intelligence in election campaign: Artificial intelligence and data for politics. Retrieved May 19, 2022, from https://politicalmarketer.com/artificial-intelligence-in-election-campaign/
Larson, E. J. (2021). The myth of artificial intelligence: Why computers can't think the way we do. 
The Belknap Press of Harvard University Press.
Lockey, S., Gillespie, N., & Curtis, C. (2020). Trust in artificial intelligence: Australian insights. Retrieved May 19, 2022, from https://doi.org/10.14264/b32f129
McGuire, B. (2019). Scaling the field program in modern political campaigns. Retrieved May 19, 2022, from https://www.hks.harvard.edu/sites/default/files/degree%20programs/MPP/files/Scaling%20the%20Field%20Organization%20in%20Modern%20Political%20Campaigns_Final.pdf
Mennicken, A., & Salais, R. (Eds.). (2022). The new politics of numbers: Utopia, evidence and democracy. Palgrave Macmillan.
Metz, R. (2017). Smartphones are weapons of mass manipulation, and this guy is declaring war on them. Retrieved May 19, 2022, from https://www.technologyreview.com/2017/10/19/148493/smartphones-are-weapons-of-mass-manipulation-and-this-guy-is-declaring-war-on-them/
Nielsen, R. K. (2012). Ground wars: Personalized communication in political campaigns. Princeton University Press.
Pashentsev, E. (2021). The malicious use of artificial intelligence through agenda setting: Challenges to political stability. In Proceedings of the 3rd European Conference on the Impact of Artificial Intelligence and Robotics ECIAIR 2021. Academic Conferences International Limited.
Pennycook, G., & Rand, D. G. (2018). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. https://doi.org/10.1016/j.cognition.2018.06.011
Reporters Without Borders. (2021). The world press freedom index 2021. Retrieved May 19, 2022, from https://rsf.org/en/world-press-freedom-index
Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Retrieved May 19, 2022, from https://www.nature.com/articles/s41599-019-0279-9
Schwab, K. (2016). The Fourth Industrial Revolution: What it means, how to respond. 
Retrieved May 19, 2022, from https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
Statista. (2022). Smartphones—Statistics & facts. Retrieved May 19, 2022, from https://www.statista.com/topics/840/smartphones/
Statista. (2021). Number of daily active Facebook users worldwide as of 4th quarter 2021 (in millions). Retrieved May 19, 2022, from https://www.statista.com/statistics/346167/facebook-global-dau/
Sullivan, R. (2021). The U.S., China, and artificial intelligence competition factors. Retrieved May 19, 2022, from https://www.airuniversity.af.edu/Portals/10/CASI/documents/Research/Cyber/2021-10-04%20US%20China%20AI%20Competition%20Factors.pdf?ver=KBcxNomlMXM86FnIuuvNEw%3D%3D
Vogels, E. (2021). Millennials stand out for their technology use, but older generations also embrace digital life. Retrieved May 19, 2022, from https://www.pewresearch.org/fact-tank/2019/09/09/us-generations-technology-use/
von Clausewitz, C. (1832). On war.
OpenSecrets. Cost of election. Retrieved May 19, 2022, from https://www.opensecrets.org/elections-overview/cost-of-election?cycle=2020&display=T&in%=Y
Wang, Y., & Chen, D. (2018). Rising Sino-U.S. competition in artificial intelligence. China Quarterly of International Strategic Studies, 4(2), 241–258. https://doi.org/10.1142/S2377740018500148
Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2018). Brain drain: The mere presence of one's own smartphone reduces available cognitive capacity. Retrieved May 19, 2022, from https://repositories.lib.utexas.edu/bitstream/handle/2152/64130/braindrain.pdf
Waterson, J. (2017). Here's how Labour ran an under-the-radar dark ads campaign during the general election. Retrieved May 29, 2022, from https://www.buzzfeed.com/jimwaterson/heres-how-labour-ran-an-under-the-radar-dark-ads-campaign
World Health Organization. (2021). GHE: Life expectancy and healthy life expectancy. 
Retrieved May 19, 2022, from https://www.who.int/data/gho/data/themes/mortality-and-global-health-estimates/ghe-life-expectancy-and-healthy-life-expectancy
World Justice Project. (2021). The world justice project rule of law index 2021. Retrieved May 19, 2022, from https://worldjusticeproject.org/our-work/research-and-data/wjp-rule-law-index-2021
Yus, F. (2021). Smartphone communication: Interactions in the app ecosystem. Routledge.