X Files Lawsuit Against Minnesota’s Deepfake Election Law, Citing Free Speech and Safe Harbor

Adding to its list of ongoing lawsuits, social media company X (formerly Twitter) has filed a legal challenge against an anti-deepfake law in the U.S. state of Minnesota. The law in question regulates the use of deepfakes during elections.

X argues that this law violates its rights and its users’ rights under the First Amendment of the U.S. Constitution (which includes the right to free speech). It also claims that the law violates Section 230 of the U.S. Communications Decency Act, which protects platforms from liability for what their users post (safe harbor).

“While the law’s reference to banning ‘deepfakes’ might sound benign, in reality it would criminalize innocuous, election-related speech, including humor, and make social media platforms criminally liable for not censoring such speech,” the company said in a blog post explaining its concerns with the law.

What does the law include?

Under the law, anyone who disseminates a deepfake, or enters into a contract to disseminate one, is in violation, provided that:

  • The deepfake is made without the depicted person’s consent.
  • The deepfake is made with the intent to injure a candidate or influence the result of an election.
  • The deepfake dissemination takes place either within 90 days of an election or after the start of the absentee voting period.

The law states that posting deepfakes during the election period can result in a prison sentence of up to 90 days or a fine of up to US$1,000. For people with one or more prior convictions, the penalty rises to imprisonment for up to five years or a fine of up to US$3,000.

X’s ongoing lawsuits around free speech: 

This isn’t the first time X has challenged such a law in court. In November last year, the company challenged a similar law regulating election deepfakes in the U.S. state of California. As in the Minnesota case, X’s opposition centers on the law’s implications for free speech and safe harbor.

Similarly, in India, the company is legally challenging the government’s use of Section 79(3)(b) of the Information Technology Act, 2000, to issue content takedown requests to social media platforms. Section 79 deals with safe harbor, or protection from liability for user-generated content, similar to Section 230 of the U.S. Communications Decency Act. Part (3)(b) of Section 79 states that platforms can lose this protection if they fail to remove unlawful content after the government or its agencies notify them about said content.

Safe harbor in the age of AI:

In June last year, the U.S. Subcommittee on Science and Technology conducted a hearing to discuss a proposal to “sunset” safe harbor protections for platforms under Section 230. During the hearing, Congresswoman Cathy McMorris Rodgers said that U.S. courts have expanded the scope of safe harbor, making it harder to hold platforms responsible when they amplify illegal content. “As more and more companies integrate generative artificial intelligence technologies into their platforms, these harms will only get worse, and AI will redefine what it means to be a publisher, potentially creating new legal challenges for companies,” she argued.

In the Indian context, the question of safe harbor for AI-generated content came up in 2023, when the government sent an advisory to social media platforms warning that they could lose safe harbor protections if they failed to take down deepfake content. While governments, whether in the U.S. or India, may have reasons to act against deepfakes, X argues that people don’t use deepfakes only for harmful purposes but also for legitimate ones, such as AI-assisted social commentary. The company has reportedly presented the same perspective to the central committee currently conducting consultations on deepfake regulation.


