California moves quickly to tackle deepfakes ahead of election

Days after Vice President Kamala Harris announced her candidacy for president, a video created with the help of artificial intelligence went viral.

“I am … your Democratic nominee for president because Joe Biden finally exposed his old age in the debates,” said a voice resembling Harris in a fake audio track dubbed over footage from one of her campaign ads. “I was chosen because I am the best at diversity.”

Billionaire Elon Musk — who has backed Harris’ Republican opponent, former President Trump — shared the video on X, formerly Twitter, then clarified two days later that it was in fact meant as a parody. His initial post was viewed 136 million times. A follow-up calling the video a parody was viewed 26 million times.

The incident was no laughing matter to some, including Democrats such as California Gov. Gavin Newsom, fueling calls for greater regulation of AI-generated videos that carry political messages and sparking a fresh debate over the proper role of government in policing the emerging technology.

On Friday, California lawmakers gave final approval to a bill that would ban the distribution of deceptive campaign ads, or “electioneering communications,” within 120 days of an election. Assembly Bill 2839 targets manipulated content that could harm a candidate’s reputation or electoral prospects, or undermine confidence in an election’s outcome. It is intended to address videos like the one Musk shared about Harris, though it includes an exception for parody and satire.

“We’re entering our first election in California during which misinformation powered by generative AI is going to pollute our information ecosystem more than ever before and millions of voters won’t know which images, audio, or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”

Newsom has indicated he will sign the bill, which would take effect immediately, in time for the November election.

The bill updates an existing California law that bars people from distributing deceptive audio or visual media within 60 days of an election with the intent to harm a candidate’s reputation or deceive voters. State lawmakers say that law needs to be strengthened in an election cycle in which people are already flooding social media with digitally altered videos and photos known as deepfakes.

The use of deepfakes to spread misinformation during past election cycles has alarmed lawmakers and regulators. These fears have been further fueled by the release of new AI-powered tools such as chatbots that can rapidly create images and videos. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers.

Under AB 2839, a candidate, election committee, or election official could seek a court order to have a deepfake taken down. They could also sue the person who distributed or republished the deceptive content for damages.

The bill also applies to deceptive media posted within 60 days after the election, including content that falsely depicts voting machines, ballots, polling places, or other election-related property in a way that is likely to undermine confidence in the outcome of the election.

It does not apply to satire or parody that is labeled as such, or to broadcast stations that inform viewers that what is depicted does not accurately represent a speech or event.

Technology industry groups oppose AB 2839, along with other bills that target online platforms for failing to properly moderate deceptive election content or label AI-generated content.

“This would result in stifling and blocking constitutionally protected free speech,” said Carl Szabo, NetChoice’s vice president and general counsel. The group’s members include Google, X and Snap, as well as Facebook’s parent company Meta and other tech giants.

Online platforms have their own rules about manipulated media and political advertising, but their policies can vary.

Unlike Meta and X, TikTok does not allow political ads and says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity “when used for political or commercial endorsements.” Truth Social, the platform created by Trump, does not address manipulated media in its rules about what is not allowed on its platform.

Federal and state regulators are already cracking down on AI-generated content.

The Federal Communications Commission in May proposed a $6-million fine against Steve Kramer, the Democratic political consultant behind a robocall that used AI to mimic President Biden’s voice and discourage people from voting in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the call to draw attention to the dangers of AI in politics, also faces criminal charges of voter suppression and impersonating a candidate.

Szabo said existing laws are sufficient to address concerns about election deepfakes. NetChoice has sued various states to block certain laws aimed at protecting children on social media, alleging they violate free speech protections under the First Amendment.

“Just making new laws is not going to do much to stop bad behavior; you need to actually enforce the laws,” Szabo said.

More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on laws to regulate deepfakes, according to the consumer advocacy nonprofit Public Citizen.

California enacted a law aimed at combating manipulated media in 2019, after a video that made it appear as if House Speaker Nancy Pelosi was drunk went viral on social media. Enforcing that law has been a challenge.

“We had to narrow it down,” said Assemblymember Marc Berman (D-Menlo Park), who authored the bill. “It drew a lot of attention to the potential risks of this technology, but I was concerned that actually, at the end of the day, it didn’t do much.”

Rather than take legal action, political candidates might choose to debunk a deepfake, or simply ignore it to limit its spread, said Danielle Citron, a professor at the University of Virginia School of Law. By the time a case could make it through the court system, the content may already have gone viral.

“These laws are important because they send a message. They teach us something,” she said, adding that they let people who share deepfakes know there’s a price to pay.

This year, lawmakers worked closely with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills to combat political deepfakes.

Some target online platforms that are exempt under federal law from being held liable for content posted by users.

Berman also introduced AB 2655, a bill that would require online platforms with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. Platforms would also have to take action within 72 hours of a user reporting the post. The bill, which passed the Legislature on Wednesday, would also require platforms to have procedures for identifying, removing, and labeling fake content. It would not apply to parody or satire, or to news outlets that meet certain requirements.

Another bill, AB 3211, co-written by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI supports it, Reuters reported.

Both of those bills, though, would not take effect until after the election, underscoring the challenges of passing new laws while technology advances rapidly.

“Part of my hope in introducing this bill is that it will get people’s attention, and hopefully put some pressure on social media platforms to act in a timely manner,” Berman said.



