California calls on Elon Musk’s xAI to halt creation of sexual deepfakes

California Attorney General Rob Bonta speaks during a news conference in 2023. (File photo courtesy of the California Attorney General’s office)

This story was originally published by CalMatters.

California Attorney General Rob Bonta sent a cease-and-desist letter to xAI ordering the Elon Musk-owned company to stop creating and distributing nonconsensual sexual images.

In the letter, Bonta cited “numerous examples of xAI taking ordinary, clothed images of women and children” and allowing users “to depict the people in suggestive and sexually explicit scenarios and ‘undress’ them, all without the subjects’ knowledge or consent.”

Providing such imagery, Bonta wrote, violates state public decency laws and a “deepfake” pornography law that took effect just two weeks ago. Bonta said he appreciates that xAI has taken steps to address the issue, including by blocking Grok from generating images of people in bikinis in places where such images are illegal, but he said the impact of those changes is unclear.

Bonta sent the letter Friday. Two days earlier, he had announced an investigation into how and whether xAI and its social platform X broke the law in recent weeks by enabling the spread of nude or sexual imagery without consent. That announcement followed an update to the company’s Grok artificial intelligence tool that allowed image editing.

“The avalanche of reports detailing the non-consensual sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta wrote Wednesday. “This material… has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further.”

Bonta urged Californians who want to report depictions of themselves or their children undressed or engaged in sexual acts to visit oag.ca.gov/report. In a response emailed Wednesday, xAI did not address questions about the investigation.

Research obtained by Bloomberg found that X users who used Grok posted more nonconsensual nude or sexual imagery than users of any other website. In a post on X, Musk promised “consequences” for people who made illegal content with the tool. On Friday, Grok limited image editing to paying subscribers.

X and xAI appear to be violating provisions of the pornographic deepfake ban, known as AB 621, said Sam Dordulian, who previously worked in the sex crimes unit of the Los Angeles District Attorney’s Office and now represents people in deepfake and revenge porn cases in private practice.

Assemblymember Rebecca Bauer-Kahan, the law’s author, told CalMatters in a statement last week that she had reached out to prosecutors, including the attorney general’s office and the San Francisco city attorney, to remind them that they can act under the law. What’s happening on X, Bauer-Kahan said, is what AB 621 was designed to address.

“Real women are having their images manipulated without consent, and the psychological and reputational harm is devastating,” the San Ramon Democrat wrote. “Underage children are having their images used to create child sexual abuse material, and these websites are knowingly facilitating it.”

A global concern

Bonta’s inquiry also comes shortly after Gov. Gavin Newsom called for an investigation, regulators in the European Union and India pushed back, and Malaysia, Indonesia and potentially the United Kingdom moved to ban X. As Grok app downloads rise in the Apple and Google app stores, lawmakers and advocates are calling on the two companies to remove the app from their stores.

Why xAI built the feature the way it did, and how it will respond to the controversy around it, is unclear. Answers may not be forthcoming: a recent analysis concluded that Grok is the least transparent of the major AI systems available today.

Future reforms

Every new state law gives lawyers like Dordulian another avenue to address harmful uses of deepfakes, but he said more needs to be done to help people protect themselves. His clients face challenges proving violations of existing laws, he said, because those laws require that explicit material be distributed, for example through a messaging app or social media platform, before protections kick in. In his experience, people who use nudify apps typically know their victims, so distribution doesn’t always take place, and when it does, it can be hard to prove.

For example, he said, he has a client, a nanny, who alleges that the father of the children she cares for made images of her using photos she posted on Instagram. The nanny found the images on his iPad. The discovery was disturbing and caused her emotional trauma, but because deepfake laws don’t apply, Dordulian has to sue under negligence and emotional distress claims, laws that were never written to address deepfakes. Similarly, victims told CNBC last year that the distinction between creating and distributing deepfakes leaves a gap in the law in a number of U.S. states.

“The law needs to keep up with what’s really happening on the ground and what women are experiencing, which is just the simple act of creation itself is the problem,” Dordulian said.

California is at the forefront of passing laws to protect people from deepfakes, but existing law isn’t meeting the moment, said Jennifer Gibson, cofounder and director of Psst, a group created a little over a year ago that provides pro bono legal services to tech and AI workers interested in whistleblowing. A California law that went into effect Jan. 1 protects whistleblowers inside AI companies, but only if they work on catastrophic risks, those that could kill more than 50 people or cause more than $1 billion in damages. If the law protected people who work on deepfakes, Gibson said, the former X employees who last year told Business Insider they had witnessed Grok generate illegal sexually explicit material would have had protection when sharing that information with authorities.

“There needs to be a lot more protection for exactly this kind of scenario in which an insider sees that this is foreseeable, knows that this is going to happen, and they need somewhere to go to report to both to keep the company accountable and protect the public,” Gibson said.

CalMatters is a nonpartisan and nonprofit news organization bringing Californians stories that probe, explain and explore solutions to quality of life issues while holding our leaders accountable.