AI Misinformation Threatens 2024 US Elections

Dennis Hillemann
5 min read · Jan 6, 2024


As we approach the 2024 United States presidential elections, a new threat looms: the spread of misinformation generated by artificial intelligence (AI). This is not merely an evolution of the fake-news problem we have faced before; it marks a major shift in the political information landscape. The upcoming election cycle will be not only a competition for votes but also a battle over what is true and what is false.

The Unseen Adversary: AI and the Fabrication of Reality

In recent years, there has been a surge in the use and development of deepfake technology. Deepfakes are created using artificial intelligence, specifically deep learning algorithms, to manipulate or generate images, videos, and audio that appear to be authentic but are actually fabricated.

The ability to create highly realistic fake content has raised concerns about its potential impact on politics. Experts warn that deepfakes could be used to spread false information, manipulate public opinion, and even sway election outcomes.

One of the key concerns is that deepfakes can be used to create fake interviews or speeches by political candidates. These manipulated videos could be used to make it seem like a candidate said or did something they never actually did, potentially damaging their reputation and credibility.

Deepfakes can also be used to spread misinformation through social media. In the 2016 U.S. presidential election, there were already reports of bots spreading fake news and propaganda on platforms such as Twitter and Facebook. With the rise of deepfake technology, these efforts could become even more sophisticated and difficult to detect.

Furthermore, as AI continues to advance and improve, it is becoming increasingly difficult for humans to distinguish between real and fake content. This means that even if a deepfake is exposed as false later on, the damage may have already been done.

The potential for foreign interference is also a major concern when it comes to deepfakes in politics. With the ability to create convincing fake content, adversarial nations or groups could use this technology to sow discord and confusion among voters in another country.

Some experts have proposed solutions such as developing better detection tools for identifying deepfakes or implementing stricter regulations on the use of AI in creating political content. However, these solutions may not be enough to fully address the problem.

Ultimately, it will require a combination of technological advancements and human vigilance to combat the threat of deepfakes in politics. As AI continues to evolve, so too must our strategies for protecting the integrity of our information ecosystem.

The Public’s Concern: A Call for Vigilance

According to several recent surveys, including ones conducted by UChicago Harris/AP-NORC and Morning Consult-Axios, a majority of Americans from both political parties are worried about the impact of AI on spreading false information during elections. The concern is not only that AI could be used to spread misinformation, but also that it could amplify existing disinformation campaigns, leading to a loss of trust in the integrity of the electoral process and its outcomes.

Strategies for Mitigation: Preparing for the Inevitable

As the threat of deepfakes in politics looms over our information ecosystem, it is crucial that we adopt a multifaceted approach to combat their potential impact. This section explores some strategies that experts have proposed to mitigate the effects of deepfakes on political discourse and elections.

Pre-bunking Strategies

One approach that has gained popularity among experts is the use of “pre-bunking” strategies. These involve proactively informing the public about the possibility of encountering fake content, particularly during high-stakes events like elections.

Nicole Schneidman, a technology policy advocate at Protect Democracy, emphasizes the importance of educating people before they encounter deepfakes. She believes that this could help fortify individuals against potential deception and make them less susceptible to manipulation.

Pre-bunking strategies can involve various forms of outreach, such as public service announcements, educational campaigns, and workshops. By raising awareness and promoting critical thinking skills, individuals can learn to recognize signs of manipulated content and better protect themselves from being misled.

Verification Tools

Another proposed solution is the use of digital signatures or other verification tools. These tools offer a means to verify the authenticity of online content directly from authoritative sources.

For example, a political candidate could have their own digital signature that can be verified by users when they come across any media attributed to them. This would provide a level of trust and assurance that what they are seeing is genuine.
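To make the idea concrete, here is a minimal sketch of hash-based content verification. Real provenance systems (such as those based on public-key signatures embedded in media metadata) are far more sophisticated; this example only illustrates the core check, using hypothetical filenames and placeholder bytes in place of actual video data.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry: hashes a campaign might publish for each
# official video, so third parties can check downloads against them.
OFFICIAL_HASHES = {
    "town-hall-2024-01-05.mp4": fingerprint(b"...authentic video bytes..."),
}

def is_authentic(filename: str, data: bytes) -> bool:
    """Check a downloaded file against the published hash, if one exists."""
    expected = OFFICIAL_HASHES.get(filename)
    return expected is not None and fingerprint(data) == expected
```

In this sketch, any alteration to the file changes its hash, so a manipulated copy of the video would fail the check. A public-key signature scheme adds the further guarantee that only the campaign could have produced the published fingerprint.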

There are also efforts underway to develop AI-based tools that can detect and flag deepfake content in real time. Social media platforms and fact-checking organizations could use these tools to quickly identify and remove manipulated content.

Regulations and Policies

Some experts believe that stricter regulations on the use of AI in creating political content could help mitigate the spread of deepfakes. This could include requiring disclaimers on manipulated content and mandating that deepfakes be labeled as such.

The Role of Regulation: A Patchwork of Protections

Despite efforts by the US Congress and the Federal Election Commission to craft regulatory measures, specific rules and legislation are still lacking. As a result, some states have taken matters into their own hands, enacting their own restrictions on political AI deepfakes. However, a more comprehensive national framework is necessary to effectively tackle the challenges these technologies present.

The Imperative of Information Literacy

As the 2024 elections approach, it is evident that having knowledge about AI will be essential in differentiating between what is true and what is false. By actively engaging with AI technology and comprehending its capabilities, we can better recognize the potential dangers it presents. It’s not just about being ready; it’s about taking a proactive approach towards promoting transparency and authenticity in the information we both consume and distribute.

As AI continues to advance, we must ask ourselves: are we prepared to navigate the maze of falsehoods it may unleash? How can we safeguard our democratic process from the corrosive effects of synthetic media? It is imperative that we contemplate these questions and take action together. Join the discussion and share your ideas on how we can fortify ourselves against the looming threat of misinformation.



Dennis Hillemann

Lawyer and partner with a track record of successful litigation and a passion for innovation in the legal field