Judge blocks California law targeting deepfake campaign ads
With the rise of deepfake video and audio in political campaigns, California enacted its toughest restrictions to date in September: a law banning political ads within 120 days of an election that contain misleading, digitally generated or altered content, unless the ads are labeled as "manipulated."

On Wednesday, a federal judge temporarily blocked the law, saying it violates the First Amendment.

Other laws against misleading campaign advertisements remain on the books in California, including one that requires candidates and political action committees to disclose when they use artificial intelligence to create or make major changes to ad content. But the preliminary injunction granted against Assembly Bill 2839 means there will be no blanket ban on using artificial intelligence to clone a candidate's image or voice and misrepresent them without disclosing that the images or words are fake.

The injunction was sought by Christopher Kohls, a conservative commentator who has created several deepfake videos satirizing Democrats, including the party's presidential nominee, Vice President Kamala Harris. Gov. Gavin Newsom had cited one of those videos — which paired a clip of Harris with a deepfake version of her voice claiming to be the "ultimate diversity hire" and professing both ignorance and incompetence — when he signed AB 2839, but the measure was actually introduced in February, long before Kohls' Harris video went viral on X.

Asked about the decision, Kohls said on X, "Freedom prevails! For now."

Deepfake videos satirizing politicians, including one targeting Vice President Kamala Harris, have gone viral on social media.

(Darko Vojinovic/Associated Press)

The ruling, written by U.S. District Judge John A. Mendez, reflects the tension between efforts to protect against AI-driven fraud that could influence elections and the Bill of Rights' strong safeguards for political speech.

In granting the preliminary injunction, Mendez wrote, "When political speech and electoral politics are at issue, the First Amendment almost unequivocally dictates that courts allow speech to flourish rather than uphold the state's attempt to suppress it. ... (M)ost of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange that is so vital to American democratic debate."

Public Citizen co-president Robert Weissman countered, "The First Amendment should not tie our hands in addressing a serious, foreseeable, real threat to our democracy."

Robert Weissman, co-president of Public Citizen, speaks at a press conference.

Robert Weissman of the consumer advocacy organization Public Citizen says 20 other states have adopted laws similar to AB 2839 — but with key differences.

(Nick Wass/Associated Press)

Weissman said 20 states have adopted laws following the same basic approach: requiring ads that use AI to manipulate content to be labeled as such. But AB 2839 had some unique elements that may have influenced Mendez's thinking, Weissman said, including a requirement that the disclosure be displayed in text as large as the largest text in the ad.

In his ruling, Mendez said the First Amendment also extends to false and misleading speech. Even on a topic as important as election security, lawmakers can regulate expression only in the least restrictive ways, he wrote.

AB 2839 — which required political videos to continuously display the mandated disclosures about manipulation — did not use the least restrictive means to protect election integrity, Mendez wrote. A less restrictive approach would be "counter speech," he wrote, though he did not specify what that would entail.

Weissman responded, "Counter speech is not an adequate remedy." The problem with deepfakes, he said, is not that they make false claims or insinuations about a candidate; "The problem is that they are making the candidate appear to say or do something that he or she actually did not do." Targeted candidates are left with the almost impossible task of disproving what they didn't actually do or say, he said, which is much more difficult than countering a false allegation made by an opponent or a political action committee.

Requiring disclosure of manipulation is not a perfect solution to the challenges posed by deepfake ads, he said. But it is the least restrictive measure available.

Liana Keesing of Issue One, a pro-democracy advocacy group, said the creation of deepfakes is not necessarily the problem. "The important thing is the spread of that false and misleading content," said Keesing, the group's campaigns manager.

Alix Fraser, Issue One's director of technology reform, said the most important thing lawmakers can do is address how technology platforms are designed. "What are the guardrails around that? Basically there are none," he said. "As we see it, that's the main problem."

