As the reach of generative AI continues to stretch beyond society’s wildest imagination, lawmakers are worried about the role it could play in upcoming elections. With just a few clicks, artificially generated images could destroy an entire campaign.
In an attempt to get ahead of an issue that is almost certain to pop up during this year’s election, Arizona lawmakers are considering a bill that would give any Arizona resident, or candidate for public office who will be on the ballot, two years to take action against “digital impersonations” — or deep fakes — created without their consent.
“It has already popped up in a number of different states where generative artificial technology has led to reproductions of the human voice, and likeness in video, with such convincing clarity that it is hard to distinguish between the person themselves and the deep fake version of them,” Rep. Alexander Kolodin, R-Scottsdale, the bill’s sponsor, told the House Municipal Oversight & Elections Committee last week when it considered the legislation.
As a self-proclaimed “First Amendment absolutist,” Kolodin said he wants to approach the uncharted territory of AI regulation with a light touch, so as not to infringe on anyone’s freedom of speech.
He said his House Bill 2394 is “narrowly tailored” so that it applies only to deep fakes that do not disclose their artificial origins and whose falsity is not obvious to the average person. The House committee gave the bill unanimous approval.
“Artificial-generative technologies have a very legitimate role to play in our public discourse. They could be used for parody, they could be used for satire… (and for) artistic expression,” Kolodin said.
For this reason, the only remedy the bill offers public figures who can present sufficient evidence in court to debunk a deep fake is “declaratory relief” from a judge: a confirmation that the deep fake is false, rather than “injunctive relief,” which would require its publishers to delete it.
“At least if you have a piece of paper from a court saying, we’ve looked at the evidence, we’ve examined it, and at least preliminarily it doesn’t appear to really be you,” Kolodin said.
Everyday people, however, would be allowed to sue for damages if deep fakes depict them in the nude or engaging in sexual acts, but public figures, including candidates, could not. Kolodin said he doesn’t believe that deletion would do much for public figures, whose deep fakes would almost certainly live on forever online.
Considering the controversial nature of this year’s election, Kolodin said some sort of regulation is needed to protect candidates from slander disguised as self-confession.
“I am worried that, with this upcoming presidential election and how contentious it is, there’s going to be people and organizations — and let’s face it, countries — on both sides of that fight,” Kolodin said.
During the committee hearing, Democratic Rep. Laura Terech commended Kolodin’s bill while recommending that it pick up some of the language from legislation drafted in Wisconsin and Michigan, both of which require any election materials made wholly or partially by AI to include a disclosure saying so.
Similar legislation has popped up in statehouses across the nation, including in Florida, Colorado and Alaska, where lawmakers from both parties have introduced bills with disclosure requirements.
Meanwhile, a bill proposed by Kentucky Republicans would ban the circulation of deep fakes if the people depicted have not consented to the images, providing grounds for the victims to sue.
The Arizona bill contains an emergency clause, which means it would go into effect immediately if it is passed by two-thirds supermajorities in both chambers and signed into law by the governor. New laws typically go into effect 90 days after the legislative session ends.
The bill heads next to the full House of Representatives for debate and a possible vote.
This story first appeared in AZ Mirror.