Judge blocks California law that targeted deepfake campaign ads
With deepfake video and audio making their way into political campaigns, California enacted its toughest restrictions yet in September: a law prohibiting political ads within 120 days of an election that include deceptive, digitally generated or altered content unless the ads are labeled as “manipulated.”
On Wednesday, a federal judge temporarily blocked the law, saying it violated the 1st Amendment.
Other laws against deceptive campaign ads remain on the books in California, including one that requires candidates and political action committees to disclose when ads are using artificial intelligence to create or substantially alter content. But the preliminary injunction granted against Assembly Bill 2839 means that there will be no broad prohibition against individuals using artificial intelligence to clone a candidate’s image or voice and portraying them falsely without revealing that the images or words are fake.
The injunction was sought by Christopher Kohls, a conservative commentator who has created a number of deepfake videos satirizing Democrats, including the party’s presidential nominee, Vice President Kamala Harris. Gov. Gavin Newsom cited one of those videos — which showed clips of Harris while a deepfake version of her voice talked about being the “ultimate diversity hire” and professing both ignorance and incompetence — when he signed AB 2839, but the measure actually was introduced in February, long before Kohls’ Harris video went viral on X.
When asked on X about the ruling, Kohls said, “Freedom prevails! For now.”
The ruling by U.S. District Judge John A. Mendez illustrates the tension between efforts to protect against AI-powered fakery that could sway elections and the strong safeguards in the Bill of Rights for political speech.
In granting a preliminary injunction, Mendez wrote, “When political speech and electoral politics are at issue, the 1st Amendment has almost unequivocally dictated that courts allow speech to flourish rather than uphold the state’s attempt to suffocate it…. [M]ost of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
Countered Robert Weissman, co-president of Public Citizen, “The 1st Amendment should not tie our hands in addressing a serious, foreseeable, real threat to our democracy.”
Weissman said that 20 states had adopted laws following the same core approach: requiring ads that use AI to manipulate content to be labeled as such. But AB 2839 had some unique elements that might have influenced Mendez’s thinking, Weissman said, including the requirement that the disclosure be displayed as large as the largest text seen in the ad.
In his ruling, Mendez noted that the 1st Amendment extends to false and misleading speech too. Even on a subject as important as safeguarding elections, he wrote, lawmakers can regulate expression only through the least restrictive means.
AB 2839 — which required political videos to continuously display the required disclosure about manipulation — did not use the least restrictive means to protect election integrity, Mendez wrote. A less restrictive approach would be “counter speech,” he wrote, although he did not explain what that would entail.
Responded Weissman, “Counter speech is not an adequate remedy.” The problem with deepfakes isn’t that they make false claims or insinuations about a candidate, he said; “the problem is that they are showing the candidate saying or doing something that in fact they didn’t.” The targeted candidates are left with the nearly impossible task of explaining that they didn’t actually do or say those things, he said, which is considerably harder than countering a false accusation uttered by an opponent or leveled by a political action committee.
Requiring disclosure of the manipulation isn’t a perfect solution to the challenges created by deepfake ads, Weissman said, but it is the least restrictive remedy available.
Liana Keesing of Issue One, a pro-democracy advocacy group, said the creation of deepfakes is not necessarily the problem. “What matters is the amplification of that false and deceptive content,” said Keesing, a campaign manager for the group.
Alix Fraser, director of tech reform for Issue One, said the most important thing lawmakers can do is address how tech platforms are designed. “What are the guardrails around that? There basically are none,” he said, adding, “That is the core problem as we see it.”