Rep. Kolodin Warns Arizona Is “Behind the Curve” as Lawmakers Confront AI Threats to Elections, Free Speech & Employment
Arizona Sounds the Alarm: Top Experts Rush In to Chart the Future of AI
The Arizona House of Representatives held a public hearing on November 14 to examine how artificial intelligence (AI) could disrupt democratic processes and undermine election integrity, with lawmakers from both parties warning the state is already lagging behind fast-moving technology.
The hearing, titled “The Implications of Artificial Intelligence for Democratic Governance and How to Preserve Meaningful Elections,” was convened by Chairman Alexander Kolodin (R-3) and livestreamed from the Arizona State Capitol. The session was explicitly described as exploratory in nature, but the stakes were high: Arizona lawmakers were among the first in the nation to hold formal hearings on the issue.
Representative Rachel Keshel (R-17) called it historic.
The hearing was designed to answer three core questions: whether Arizona should regulate artificial intelligence at all, whether now is the time to do so, and what such regulation might look like.
No legislation was debated or advanced during the hearing. Instead, the session placed Arizona among a growing number of states evaluating how AI could affect elections, free speech, and public trust.
As chair, Kolodin assembled a panel of national experts in constitutional law, technology policy, security analysis, and behavioral psychology to brief lawmakers on the risks and limitations of AI-driven systems. The hearing was investigative rather than legislative, aimed at assessing whether government action is warranted before lawmakers attempt to shape policy.
In an unusual opening, Kolodin’s first witness was not a person but an artificial intelligence model named Nova, which was presented as a demonstration of the technology lawmakers were there to examine.
Asked by State 48 News about the purpose of the hearing, Kolodin framed it in similar terms.
Concern about the pace of AI development was voiced openly by Republican and Democratic lawmakers alike.
Rep. Keshel said during the hearing:
“I feel that we are behind the curve honestly.”
Though typically opposed to regulation, Keshel said AI may require an exception:
“It is here and it is going to keep coming at us like a freight train.”
Rep. Betty Villegas (D-17) echoed that concern from across the aisle:
“Today’s hearing reinforced that artificial intelligence is already changing our democracy and Arizona has a responsibility to respond, but we must do it in a way that actually protects voters.”
Rep. John Gillette (R-30), who served as co-chair of the committee, said the issue reaches well beyond election administration:
“Every committee is going to have to figure out a policy on how to regulate its use.”
Gillette also shared further comments with State 48 News.
Lawmakers heard from a group of five expert witnesses on the risks and challenges posed by artificial intelligence to democratic governance and election integrity. The testimony made clear that there is no consensus among experts on either the severity of the threat or the appropriate regulatory response.
Diane Cooke, the first witness, is a nonresident AI Fellow in the International Security Program at the Center for Strategic and International Studies, with expertise in AI risks, generative-AI threats, human-machine teaming, and deepfakes related to elections and democratic discourse. During her testimony, she warned the committee that “we have reached an inflection point with AI technologies,” saying AI-generated audio, images, and video have become so realistic that in some studies distinguishing real from fake content was “tantamount to flipping a coin.” Cooke urged lawmakers to consider transparency measures such as digital watermarking or content identification for AI-generated media, even outside election cycles. She also recommended expanding public education efforts.
David Inserra, a Fellow for Free Expression and Technology at the Cato Institute, also testified. Inserra took a more skeptical view of the threat, stating that “many of the major concerns … have not materialized,” at least not yet. He noted that much of the visible AI-generated content has been parody or satire, forms of expression generally protected under the First Amendment. Inserra warned that overly broad regulation could stifle speech and technological development, and urged lawmakers instead to rely on existing laws addressing fraud, harassment, and defamation to deal with misconduct. He also endorsed public education as a key tool, arguing voters should be equipped to identify misleading content on their own rather than relying solely on government restrictions.
Connor Leahy, founder and CEO of the AI-alignment research firm Conjecture, also appeared before the committee. Leahy framed artificial intelligence not only as an election issue but as a long-term existential risk. He told lawmakers that as AI systems evolve toward greater autonomy and intelligence, it may become impossible to guarantee they can be safely controlled or aligned with human values. His testimony emphasized the need for stronger safety standards and ethical safeguards in AI development, warning that without them the societal risks could extend well beyond elections.
Nick Dranias, a constitutional law and public-policy attorney, also testified. Dranias is the author of a forthcoming book on AI ethics and has previously held senior litigation positions with the Arizona Attorney General’s Office and the Goldwater Institute. During the hearing, he criticized current AI-training practices, arguing they reward performance rather than honesty or ethical reasoning. He warned that without appropriate regulation and oversight, systems could be optimized for manipulation rather than responsibility. Dranias urged lawmakers to consider policies that impose higher ethical standards on AI development, with an emphasis on transparency, accountability, and governance built around long-term public interest.
Finally, Dr. Robert Epstein, Senior Research Psychologist at the American Institute for Behavioral Research and Technology, testified about the influence AI-driven platforms already exert on public opinion and elections. Epstein warned lawmakers that concentrated control over information platforms poses a risk to democratic governance by enabling the manipulation of perception and opinion at scale. In stark terms, he told legislators, “I think we’re screwed,” referring to the potential long-term consequences if artificial intelligence remains unchecked.
Together, the expert testimony ranged from immediate concerns about synthetic media and voter deception to broader warnings about AI alignment, ethics, and institutional stability. In opening remarks, Kolodin described the hearing’s goal as determining whether the state should regulate artificial intelligence and, if so, when and how that regulation should occur.
The entire hearing can be watched here.