AI’s profound impact on elections and global power dynamics underscores the urgent need to tackle its challenges, ensuring both ethical use and the preservation of democratic values.
The advent of Artificial Intelligence (AI) has sparked debate over the ethical implications of its use and prompted deeper conversations about potential technological sentience. AI, however, is not a recent development but rather a technology that has become steadily more advanced and accessible over the years. ChatGPT, Gemini, Character.ai, and QuillBot are just a few of the increasingly popular AI tools available to anyone with access to a smart device. While such tools have improved operational efficiency in sectors such as finance, education, and tech, it is no surprise that the malicious misuse of AI has raised significant security concerns around the world.
AI software is often marketed as a tool that aids decision-making and reduces human error. But what logic does AI follow to reach its decisions, and who determines whether a given decision was appropriate? How does one define human error, and, more importantly, who gets to define it? AI-based automation is also linked to operational efficiency and the eventual reduction of overhead costs; yet is that efficiency worth more than the jobs automation displaces? Even if these trade-offs are somewhat justifiable, perhaps the most pressing concern of the digital age is AI's role in shaping human beliefs and institutions. What is AI's role in that arena, and should it have one at all?
The 2024 election year offered a major test of this concern. Roughly 70 countries were scheduled to hold national parliamentary or presidential elections in 2024. In the run-up to polling, however, several of them experienced AI-related incidents in which misinformation proliferated across social media platforms. From deepfake videos to other forms of biased AI-generated content, governments struggled to restrict and regulate the flow of such material. Rest of World, a non-profit publication, documented instances of AI-based misinformation in several countries holding elections, including India, Pakistan, Venezuela, and South Korea. A common thread across these countries was AI-generated content crafted to deliver a politically biased message and sway voters toward a particular party.
South Korea's government responded by amending the Public Official Election Act to ban the use of deepfakes in campaigning during the 90 days before election day. In India, by contrast, pop culture became a medium for reaching social media users: clips from Bollywood movies were altered to replace actors' faces with those of politicians. Though such clips are often created for entertainment, they shape a viewer's opinion of a party or candidate through the character the politician is made to inhabit. Given India's socio-cultural diversity and ethno-religious history, politics is deeply intertwined with caste, ethnicity, and religion, and there exists a threshold of political sensitivity which, if crossed, can spark widespread violence or deepen divisions between communities. Yet in a democracy experiencing rapid growth in smartphone and social media usage, media regulation can come at the cost of individual freedom of expression.
India's neighbor, Pakistan, faces a similar challenge, where misleading deepfakes falsely depicted opposition leaders calling for an election boycott. In one instance, Pakistan Tehreek-e-Insaf leader Imran Khan appeared to deliver a victory speech after the election despite being in jail; an AI-generated audio track of Khan's voice had been overlaid on older footage of him speaking. Similarly, Donald Trump appeared to endorse Khan in an old video paired with an audio track generated using Parrot AI. Although a fact-checking organization declared the video a deepfake, a more refined and realistic fake conveying Trump's false endorsement of Khan could have had serious repercussions in the United States, in Pakistan, and among their adversaries. The significance of these deepfakes ultimately lies in questions of authenticity and in the degree of trust the public places in official government communications.
If deepfake technology becomes virtually indistinguishable from authentic media, controlling the spread of misinformation and disinformation will demand equally advanced AI detection technology from governments, and even that is not a lasting solution. While AI-generated content can erode public trust in a government, the reverse manipulation is also possible. The Journal of Democracy, for example, has described how AI-generated letters sent to policymakers across the United States could convey a false consensus on particular issues, leading policymakers to treat non-existent grievances as the legitimate concerns of the general population. The result is an implicit barrier between government and citizens: genuine concerns are miscommunicated, and the electoral and lawmaking processes become less effective and efficient.
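To make the detection approach concrete, the following is a minimal, illustrative sketch in Python of how an AI-content classifier might be structured: a supervised model trained on labeled examples of human-written and machine-generated text. The tiny training corpus, model choice, and feature setup here are placeholder assumptions for illustration, not a description of any deployed detection system.

```python
# Minimal sketch of an AI-text detector: a binary classifier trained on
# labeled examples. The tiny corpus below is a placeholder; a real detector
# would require a large, carefully curated dataset and stronger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = AI-generated, 0 = human-written.
texts = [
    "It is important to note that there are many factors to consider.",
    "Furthermore, this multifaceted issue underscores several key aspects.",
    "honestly no clue what happened, the bus just never showed up",
    "saw the rally downtown yesterday, crowd was way smaller than reported",
]
labels = [1, 1, 0, 0]

# TF-IDF word and bigram features feeding a logistic regression classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a new piece of content: estimated probability it is AI-generated.
sample = "It is worth noting that numerous considerations remain relevant."
prob_ai = detector.predict_proba([sample])[0][1]
print(f"Estimated probability of AI generation: {prob_ai:.2f}")
```

The sketch also exposes the limitation noted above: such a classifier is only as good as its labeled examples, so as generative models improve, detectors trained on yesterday's output degrade, which is precisely why detection alone is not a lasting solution.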
This dynamic carries implications for the legitimacy of democratic institutions, not only in the United States but in democracies around the world. In a democracy, the right to free speech implies that individuals can voice their opinions; but if an opinion is artificial and inauthentic, should that protection extend to it? This returns us to the earlier paradox: regulating media content risks infringing on free speech, while leaving online content unrestricted allows inauthentic speech to flourish. Threats to democracy predate the public release of modern AI tools, as threat actors have long executed cyber campaigns in pursuit of strategic objectives. One of the clearest examples is Russian interference in the 2016 US elections.
Russian operatives employed disinformation campaigns, social media manipulation, data leaks, and cyberattacks on election-related infrastructure to skew the election in favor of Donald Trump. By spreading inflammatory content, manipulating online political discussions, and deploying APT groups such as Fancy Bear to steal and leak sensitive government information, Russia strategically undermined the democratic practices that underpin free and fair elections. Moreover, with AI software becoming widely accessible over the past two to three years, both internal and external actors with political, financial, or espionage-related motives can now mount more advanced and efficient cyber campaigns against a state's sovereignty and internal security.
As such campaigns grow more advanced and widespread, the integrity of democratic institutions continues to erode. Defending against the misuse of AI must therefore be a top priority for all democratic governments. Countermeasures include digital literacy campaigns, ethical AI development, stricter regulations and compliance standards, and advanced fact-checking tools. Digital literacy campaigns educate voters about biased AI-generated content and disinformation, helping them identify credible sources on which to base their decisions. Promoting ethical AI development can reduce bias in AI training models, curbing discriminatory practices such as manipulative voter targeting and the marginalization of particular groups.
Establishing stricter regulations and enforcing compliance with security standards can likewise ensure transparency, protect private information, and provide clearer incident response guidance in the event of a cyberattack. Finally, advanced fact-checking tools that filter AI-generated content can counteract the spread of false information by flagging inaccurate online material. Deployed together, these countermeasures can blunt the consequences of AI misuse while preserving, rather than curtailing, the right to free speech.
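As a rough illustration of the fact-checking countermeasure just described, the sketch below flags posts that closely resemble claims already rated false by fact-checkers. The claim store, similarity measure, and threshold are all hypothetical stand-ins; real pipelines combine semantic matching with human review rather than raw string similarity.

```python
# Minimal sketch of a fact-check flagging tool: incoming posts are compared
# against a store of previously debunked claims. The claim store and the
# similarity threshold are illustrative placeholders only.
from difflib import SequenceMatcher

# Hypothetical store of claims already rated false by fact-checkers.
DEBUNKED_CLAIMS = [
    "candidate x endorsed by foreign leader in new video",
    "party y has called for a nationwide boycott of the election",
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two normalized strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_post(post: str, threshold: float = 0.6) -> list[str]:
    """Return any debunked claims this post closely resembles."""
    return [c for c in DEBUNKED_CLAIMS if similarity(post, c) >= threshold]

matches = flag_post("New video shows candidate X endorsed by a foreign leader")
if matches:
    print("Flagged for review; resembles debunked claim(s):", matches)
```

The threshold embodies the policy trade-off discussed throughout this piece: set it too low and legitimate speech is flagged, too high and false claims slip through, which is why such tools complement rather than replace human judgment.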
A final concern relates to the balance of power. A core principle of international relations, the balance of power refers to a distribution of power among states that prevents any single state from dominating. In today's multipolar system that distribution is skewed, with Global North states wielding more influence than the Global South. While states like China and India have become prominent global powers, much of the Global South has yet to acquire comparable influence, and the ability to purchase, maintain, and develop AI technology remains limited to states with adequate power and resources.

Such an imbalance can create problems for global governance, as states with differing priorities disagree on how AI should be regulated. Decision-making could also become centralized among the conglomerate of states that develop and supply most of the world's AI technology, reducing transparency in how AI models are trained and producing biased regulation that benefits some states at the expense of others. Centralized control over AI could likewise exacerbate economic disparities by conferring on AI-equipped states specific advantages in productivity, market leadership, and opportunities for startups and innovation. Because these developments are still relatively recent, a cohesive, united effort by all states can mitigate AI's threat to democracy and protect the future of human autonomy.
Arushi Kaur is a Virginia Tech graduate with a background in International Studies and Cybersecurity. Her areas of expertise include international security and the MENA region.