Fake Explicit Taylor Swift Images Swamp Social Media
Fake, sexually explicit images of Taylor Swift probably generated by artificial intelligence spread rapidly across social media platforms this week, disturbing fans who saw them and reigniting calls from lawmakers to protect women and crack down on the platforms and technology that spread such images.
One image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted the faked images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies’ efforts to remove them.
While X said it was working to remove the images, fans of the pop superstar flooded the platform in protest. They posted related keywords, along with the sentence “Protect Taylor Swift,” in an effort to drown out the explicit images and make them more difficult to find.
Reality Defender, a cybersecurity company focused on detecting A.I., determined that the images were probably created using a diffusion model, an A.I.-driven technology accessible through more than 100,000 apps and publicly available models, said Ben Colman, the company’s co-founder and chief executive.
As the A.I. industry has boomed, companies have raced to release tools that enable users to create images, videos, text and audio recordings with simple prompts. The A.I. tools are wildly popular but have made it easier and cheaper than ever to create so-called deepfakes, which portray people doing or saying things they have never done.
Researchers now fear that deepfakes are becoming a powerful disinformation force, enabling everyday internet users to create nonconsensual nude images or embarrassing portrayals of political candidates. Artificial intelligence was used to create fake robocalls of President Biden during the New Hampshire primary, and Ms. Swift was featured this month in deepfake ads hawking cookware.
“It’s always been a dark undercurrent of the internet, nonconsensual pornography of various sorts,” said Oren Etzioni, a computer science professor at the University of Washington who works on deepfake detection. “Now it’s a new strain of it that’s particularly noxious.”
“We are going to see a tsunami of these A.I.-generated explicit images. The people who generated this see this as a success,” Mr. Etzioni said.
X said it had a zero-tolerance policy toward the content. “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” a representative said in a statement. “We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”
Although many of the companies that produce generative A.I. tools ban their users from creating explicit imagery, people find ways to break the rules. “It’s an arms race, and it seems that whenever somebody comes up with a guardrail, someone else figures out how to jailbreak,” Mr. Etzioni said.
The images originated in a channel on the messaging app Telegram that is dedicated to producing such images, according to 404 Media, a technology news site. But the deepfakes garnered broad attention after being posted on X and other social media services, where they spread rapidly.
Some states have restricted pornographic and political deepfakes. But the restrictions have not had a strong impact, and there are no federal regulations of such deepfakes, Mr. Colman said. Platforms have tried to address deepfakes by asking users to report them, but that method has not worked, he added. By the time they are flagged, millions of users have already seen them.
“The toothpaste is already out of the tube,” he said.
Ms. Swift’s publicist, Tree Paine, did not immediately respond to requests for comment late Thursday.
The deepfakes of Ms. Swift prompted renewed calls for action from lawmakers. Representative Joe Morelle, a Democrat from New York who introduced a bill last year that would make sharing such images a federal crime, said on X that the spread of the images was “appalling,” adding: “It’s happening to women everywhere, every day.”
“I’ve repeatedly warned that AI could be used to generate non-consensual intimate imagery,” Senator Mark Warner, a Democrat from Virginia and chairman of the Senate Intelligence Committee, said of the images on X. “This is a deplorable situation.”
Representative Yvette D. Clarke, a Democrat from New York, said that advancements in artificial intelligence had made creating deepfakes easier and cheaper.
“What’s happened to Taylor Swift is nothing new,” she said.