AI in Elections: The Double-Edged Sword of Democracy’s Digital Future
Rapid advances in artificial intelligence (AI) present both promising opportunities and grave risks for the integrity of democratic institutions and processes.
The Story:
The rapid advancements in artificial intelligence (AI) technologies have the potential to significantly transform democratic governance and its execution, both positively and negatively. On one hand, AI could assist election officials and workers in their critical efforts to oversee the polls, making election administration processes more efficient, reliable, and secure. AI-powered tabulators, for instance, could scan paper ballots more quickly than poll workers, reducing the time necessary to report election results or conduct recounts, thereby helping to quell accusations of fraud during close and contentious races. However, the same technological capabilities that could strengthen democracy also pose significant risks.
Compared with the viral deepfakes that officials are watching for, AI-generated disinformation can be harder to detect and can disrupt elections in subtler ways. Techniques such as generating legions of slightly varied text posts or spreading misinformation through WhatsApp channels are more difficult to catch. Meta's efforts to address the problem, such as labeling AI-generated content, may not be enough; the company should also focus on improving content policies and enforcement on platforms like WhatsApp. As former Meta head of electoral risk Josh Lawson stated, "Now something like an Arabic language operation is in reach for as low sophistication as the Proud Boys."
AI's ability to locate and synthesize vast amounts of public data could be exploited to generate phishing attacks tailored to election officials, potentially jeopardizing the integrity of the elections they oversee. Furthermore, AI models could be designed to disseminate misinformation or disinformation, particularly targeting less-informed citizens who may be more vulnerable to baseless election fraud narratives. There is also the risk of partisan biases in the way voter rolls are "cleaned up" using AI, with minority voters being disproportionately targeted.
AI equips illiberal nonstate actors and autocracies with an array of comparatively low-cost, automated tools that adversaries can use to pry the electorate further apart, fueling caustic polarization and internal destabilization.
Beyond election administration, AI is also altering the way candidates for elected office conduct their campaigns, as well as how voters locate and consume information about candidates and issues. While AI tools could lower financial barriers to entry for first-time and underfunded candidates, and facilitate more effective targeted advertising to reach undecided voters, these same technologies could also worsen the flood of misinformation and disinformation typical of election season. Political bots, deepfakes, and other AI-generated visuals have already scrambled pre-election information ecosystems in democracies across the globe, and the risk of AI-fueled informational chaos is only expected to grow more acute as high-stakes elections approach in 2024. As researcher Renee DiResta notes, "It's ordinary people creating fan content. Do they mean to be deceptive? Who knows?"
Beyond elections and voting, other dimensions of democratic governance stand to be affected by the AI revolution. New technologies could further democratize the public comment process, making it easier for citizens to voice their opinions, organize with others, and act on their priorities beyond the ballot box. However, these same technologies could also allow bad actors to camouflage their machinations as genuine public sentiment, with AI-fueled programs like ChatGPT capable of fabricating letters to elected officials, public comments, and other written endorsements that may be difficult to distinguish from those written by actual constituents.
The View:
The stakes are undoubtedly high as we grapple with the emerging risks and potential rewards at the intersection of AI and democracy. The Biden administration's AI Executive Order is a promising starting point, but a more comprehensive and proactive approach is needed to address the myriad challenges posed by these transformative technologies.
One important consideration is that AI's impact on elections may not be as "mass" as some expect. Officials tend to watch for overt, dramatic forms of disruption, while the subtler, more diffuse effects of AI-powered disinformation are harder to detect yet can still sway election outcomes. Policymakers must be attuned to these quieter threats: smoking guns are rare with AI tools precisely because their effects are diffuse, and we should expect more noise than signal as synthetic content pours onto the internet.
Likewise, tech companies like Meta need to shift their focus beyond simply limiting viral content. They must also address the variety and scalability of AI-generated disinformation. This requires more proactive enforcement and clear content policies, especially on platforms like WhatsApp where the spread of misinformation can be harder to monitor and control.
Policymakers, election advocates, and the general public must remain vigilant and engage in a robust, ongoing dialogue to ensure that AI is leveraged as a force for strengthening and protecting democratic institutions rather than undermining them. Minimizing the AI revolution's disruptive effects while maximizing its positive democratic potential will require the concerted efforts of government, civil society, and the technology sector alike.
As the 2024 U.S. presidential election and other pivotal contests approach, the need for effective, transparent strategies and guidelines becomes all the more urgent. But this is not a challenge confined to the next election cycle; the impacts of AI-powered tools are likely to reach every corner and function of government, from data collection to election administration to citizen engagement. Policymakers, advocates, and citizens must be prepared to keep pace as these technologies continue to evolve, ensuring that AI is leveraged as a force for a better and more inclusive democracy.
TLDR:
AI technologies can bolster election administration by improving the efficiency, reliability, and security of voting processes, but they also introduce new vulnerabilities that malicious actors could exploit.
AI-powered tools are transforming political campaigns, enhancing voter engagement and education, but also enabling the widespread dissemination of misinformation and disinformation.
AI can broaden citizen participation and public influence on policymaking, but bad actors can also leverage these technologies to fabricate the illusion of public consensus.
Policymakers, election officials, and civil society must urgently develop robust, transparent strategies to harness the positive democratic potential of AI while mitigating its disruptive and destabilizing effects.
The stakes are high as crucial elections loom, with anti-democratic forces poised to exploit AI to undermine public trust in democratic institutions.
Decisive, proactive action is needed to ensure AI strengthens, rather than undermines, the foundations of free and fair governance in the digital age.
Insights From:
AI's Capacity to Spoil Elections Could Be Harder to Detect - Bloomberg