It seems unlikely that Congress will pass any of the laws it has promised to protect this year’s elections from the threat of artificial intelligence.
Experts warned voters about the threat that AI-generated disinformation poses to democracy, and lawmakers said they needed to act, introducing several bills this year that would have banned deepfakes of candidates and mandated clear labeling of AI-generated content. Despite the stated urgency and bipartisan support for the bills, these efforts are stalling.
Interviews with lawmakers and Hill staff, along with a reading of the short remaining legislative schedule for the fall, suggest that time is already running out on what many called a top priority: protecting elections from a powerful new tool of deception.
“I would definitely like to see something on the floor. But I’m not sure we’re going to do that,” said Sen. Martin Heinrich (D-N.M.), one of four members of a bipartisan Senate panel tapped by Democratic Majority Leader Chuck Schumer to work on artificial intelligence last year.
It’s a dramatic setback from when the Senate began grappling with generative AI last year, devoting an entire closed-door meeting with experts to the electoral threat posed by the technology. Speaking at the end of that session last November, Schumer warned that the upcoming vote would be “the first national election with widely available artificial intelligence technologies that can accelerate the spread of falsehood and disinformation, so we need to act quickly.”
Sen. Todd Young, Republican of Indiana, who is part of Schumer’s handpicked bipartisan team to shepherd AI legislation through the Senate, told POLITICO in January that AI in elections “is probably one of the most important things that we’re looking at as a possibility.”
Schumer did not confirm whether the leading bills to regulate AI-generated election content are dead in this Congress, but in an emailed statement, he hinted that he has extended his timeline.
“[Work on] AI and elections can and should continue beyond the 2024 election,” Schumer wrote to POLITICO in response to questions about this story.
Young and his fellow Republican on Schumer’s AI team, Sen. Mike Rounds of South Dakota, did not immediately respond to questions about AI and the election.
Senate bills to regulate AI-generated content aimed at voters have already been blocked by Republicans, with no sign of anything changing before Nov. 5. The House of Representatives is even further behind, as a planned report containing legislative recommendations on several AI issues — including the use of generative AI in elections — has yet to be released.
In the remaining weeks of the 2024 campaign, that means the only safeguards against serious electoral fraud lie in states that have passed their own laws and in federal agencies that have stepped in, even as those agencies are bogged down by turf battles and partisan infighting.
“We’ve seen that deepfakes can be decisive in elections around the world,” argued Robert Weissman, co-president of the nonprofit organization Public Citizen. “It’s irresponsible to act as if it can’t happen.”
Americans first became widely aware of the malicious use of generative AI in elections in February, when robocalls deepfaking President Joe Biden’s voice targeted New Hampshire voters and urged them not to vote in the state’s primary.
By May, Sen. Amy Klobuchar (D-Minn.) had introduced three bills in response. One would prepare election administrators for artificial intelligence. Another would ban deepfakes of federal political candidates, and the third would require the disclosure of AI-manipulated political ads.
The Senate Rules Committee advanced all three in May, despite Republican fears of harming technological innovation and free speech. But the two bills aimed at AI-generated content targeting voters failed to win unanimous consent on the Senate floor in July. The Rules Committee’s ranking member, Sen. Deb Fischer (R-Neb.), whose objection blocked unanimous consent, did not immediately respond to a request for comment.
Klobuchar wrote in a statement to POLITICO in late August: “As I said on the Senate floor in July, this is a hair-on-fire moment and we need to take action.” Still, she appeared to be making backup plans. An aide wrote: “Senator Klobuchar plans to ask for unanimous consent again. And if the legislation doesn’t pass as a stand-alone bill or as part of a package, she will reintroduce it in the next Congress.”
Klobuchar’s bills were the most high-profile on AI in the election, garnering support from Republicans including Sens. Josh Hawley of Missouri, Susan Collins of Maine and Lisa Murkowski of Alaska. None of them immediately responded to requests for comment.
In the House, work on artificial intelligence has stalled even more. Bipartisan House leadership launched an AI task force in February. The task force has met regularly to discuss legislative proposals related to artificial intelligence, but it has yet to recommend any bills for passage. The task force’s mandated output, a report to guide the House’s action on artificial intelligence, is still being drafted with no set release date. Task force co-chairs Reps. Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) did not immediately respond to requests for comment.
Given the state of the proposals and the legislative pace in Washington, “it’s not realistic to wait to act” before the election, said Minnesota Secretary of State Steve Simon, a Democrat who has been watching Congress closely since his state passed its own law in 2023 criminalizing election deepfakes, then updated it the following year to bar candidates from the ballot or from office if they are found guilty of using deepfakes in elections.
Eighteen other states have passed laws restricting the use of deepfakes in elections, according to Public Citizen’s tracker. Weissman said the laws don’t replace federal legislation, but they show a willingness to act.
“The clock is running out, and what should be a common-sense consensus issue has been tainted by reflexive partisanship,” he said.
In the meantime, federal agencies have used their existing authorities to pursue AI abuses.
After the New Hampshire robocalls, the Federal Communications Commission issued a cease-and-desist order to a Texas telecommunications company that relayed the calls on its network. The agency later proposed a $2 million fine to Lingo Telecom (which was later settled at $1 million) and a $6 million fine to 55-year-old Steven Kramer of New Orleans for using call-spoofing technology.
While FCC Chairwoman Jessica Rosenworcel has said she wants to write new rules to regulate AI in campaign material on television and radio, that effort is colliding with the same Washington calendar. She released the proposal in July, and the FCC is collecting public comments on the rules until nearly mid-October. It is highly unlikely that the commission will vote to finalize the rule before Election Day, and almost impossible for the rule to take effect before the vote.
There has also been a turf battle between the FCC and the Federal Election Commission, with FEC Chairman Sean Cooksey, a Republican, insisting that only his agency has the authority to enforce election law. Writing to Rosenworcel about her proposal, he said, “I believe these would encroach on the jurisdiction of the FEC.” Cooksey laid out his view of the FEC’s role in an August op-ed, “The FEC Has No Business Regulating AI.”
FEC Democratic Vice Chair Ellen Weintraub told POLITICO she saw little chance the Republican-led commission would pass new AI rules on elections. At best, she said, the FEC could potentially decide individual cases of election fraud.
John Hendel contributed to this report.