Can California Crack Down on Deepfakes Without Violating First Amendment Rights?

A California lawmaker says he knew something had to be done after watching a video of Barack Obama calling President Trump “a total and complete dipsh*t.”

Set in what appears to be the Oval Office, the video also depicts the former president speaking fondly of the militant anti-colonial villain of the “Black Panther” comic franchise and claiming that Housing Secretary Ben Carson is brainwashed.

The video was a fake, of course—a collaboration between the website BuzzFeed and filmmaker Jordan Peele. It’s Peele who speaks through Obama’s digitally re-rendered mouth to illustrate the dangers of A.I.-constructed “deepfake” videos.

With the exception of some discoloration around the jaw and a not-entirely-convincing voice, it’s a solid forgery. And the technology used to make it is only getting better.

“I immediately realized, ‘Wow, this is a technology that plays right into the hands of people who are trying to influence our elections like we saw in 2016,’” said Assemblyman Marc Berman, a Democrat whose district includes Silicon Valley.

So Berman, chair of the Assembly’s election committee, has introduced a bill that would make it illegal to “knowingly or recklessly” share “deceptive audio or visual media” of a political candidate within 60 days of an election “with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

The bill would apply to state-of-the-art deepfakes as well as to lower-tech fabrications. It also makes an exception if the video or audio carries a clear disclaimer that digital monkey business has been performed.

Libel and forgeries are hardly new phenomena in politics. But as technological developments make it increasingly difficult to sort fake from real news, and to crack down on the dissemination of false information once it finds its way online, lawmakers like Berman are struggling to find some way to fight back.

“I don’t want to wake up after the 2020 election, like we did in 2016, and say, ‘dang, we should have done more,’” said Berman.

But there is at least one limit on what can be done. The First Amendment of the U.S. Constitution guarantees the right to free speech—making it unclear whether a ban on convincing video forgeries would pass constitutional muster.

The American Civil Liberties Union of California, the California News Publishers Association and the California Broadcasters Association all oppose the bill on First Amendment grounds.

The bill cleared a hurdle last week by winning approval from a Senate committee. But at the hearing, Whitney Prout, staff attorney with the publishers’ association, called the bill “an ineffective and frankly unconstitutional solution that causes more problems than it solves.” She warned that, if enacted into law, it could discourage social media users from sharing any political content at all, lest it turn out to be fake and leave them legally liable. Another possible consequence: campaigns could plaster every attack ad with a deepfake disclosure to shield themselves from lawsuits, leaving the public even more confused.

“The law surrounding the First Amendment really has evolved in a pre-Internet world,” said Louis Tompros, a partner at the law firm WilmerHale and a lecturer at Harvard. Enacting laws such as the one Berman proposes would “force the courts to really reconcile the whole body of First Amendment law with these new phenomena.”

The method behind “deepfakery” is technically sophisticated, but its producers don’t need to be. These days, anyone with access to a YouTube tutorial and enough computing power can produce their own videographic forgery.
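The basic recipe, popularized by open-source face-swap tools, is a pair of autoencoders that share one encoder while keeping a separate decoder per identity. What follows is a minimal, illustrative sketch of that architecture in PyTorch, with random tensors standing in for aligned face crops; it is not any particular tool’s implementation.

    import torch
    import torch.nn as nn

    # Two identities share one encoder; each gets its own decoder. After
    # training, encoding a frame of person A and decoding it with B's decoder
    # yields B's face wearing A's pose and expression: the face-swap trick.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64x64 -> 32x32
        nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
        nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
    )

    def make_decoder():
        return nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    decoder_a, decoder_b = make_decoder(), make_decoder()
    params = (list(encoder.parameters())
              + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.L1Loss()

    # Random tensors stand in for batches of aligned 64x64 face crops.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)

    for step in range(100):
        opt.zero_grad()
        # Each decoder learns to reconstruct only its own identity.
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        loss.backward()
        opt.step()

    # The swap: person A's frame, rendered through person B's decoder.
    with torch.no_grad():
        swapped = decoder_b(encoder(faces_a))

Real tools wrap this core in face detection, alignment, and blending steps, but the shared-encoder swap above is the conceptual heart of the technique.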

Hence the proliferation of comedic and satirical deepfakes. Some strive to make a point, like the one created by Peele or a more recent depiction of Facebook founder Mark Zuckerberg bragging about stealing your data.

Others are just Internet-grade goofy. Consider the Q&A in which actress Jennifer Lawrence speaks to reporters with the face of Steve Buscemi. (When shown the fake on The Late Show with Stephen Colbert, Buscemi seemed remarkably unfazed: “I’ve never looked better,” he said.)

But the technology has, of course, been used for seedier purposes. The most popular application seems to be pornographic, with online forgers digitally grafting the faces of Hollywood celebrities onto the bodies of adult film actresses—without the knowledge or consent of either party.

In the case of Rana Ayyub, an Indian investigative journalist, the use was even more sinister. Last year, a fake sex video “starring” Ayyub was leaked online in apparent retribution for reporting sharply critical of Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party. As Ayyub told the Huffington Post, the harassment and humiliation that followed sent her to the hospital with heart palpitations and led her to withdraw from online life.

Earlier this year, Berman introduced another bill that would give anyone involuntarily depicted in a sexually explicit video—including a digital fake—the right to sue.

But it seems only a matter of time before someone attempts to use the method for political purposes, he said.

That conclusion was reinforced a few weeks ago, when an edited video of Nancy Pelosi went viral in which the Democratic Speaker of the House appeared to slur her words as if drunk or cognitively impaired.

The video wasn’t a deepfake. Rather than machine-learning algorithms, its producers used cruder methods: slowing down the footage and raising the pitch of the voice. But it still elicited a wave of bipartisan angst about the threat that forged video poses to our democratic institutions.
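Those edits take no special expertise to reproduce. A minimal sketch in Python, using the librosa and soundfile audio libraries and assuming the clip’s audio has been extracted to a WAV file (the file names are placeholders), shows the two operations involved:

    import librosa
    import soundfile as sf

    # Load the clip's audio track at its native sample rate.
    y, sr = librosa.load("speech.wav", sr=None)

    # Slow playback to roughly 75% speed, producing a drawling cadence.
    slowed = librosa.effects.time_stretch(y, rate=0.75)

    # Raise the pitch a couple of semitones so the slowed voice still
    # sits in a plausible register for the speaker.
    doctored = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=2)

    sf.write("doctored.wav", doctored, sr)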

Florida GOP Sen. Marco Rubio recently characterized “very realistic fake videos” as a national security threat akin to aircraft carriers and nuclear weapons. And at a House hearing last month, Democratic Congressman Adam Schiff from Burbank warned of the possible “nightmarish scenario” in which “a state-backed actor creates a deepfake video of a political candidate accepting a bribe.”

Just as worrisome, he said, the mere existence of deepfakes allows bad actors to more convincingly dismiss real information as fake.

But some civil liberties groups are concerned that lawmakers will overreact.

“Congress must tread carefully if it seeks to address the actual problem without censoring lawful and socially valuable speech—such as parodies and satires,” analysts with the Electronic Frontier Foundation wrote in response to the hearing. The foundation said it is still reviewing Berman’s bill and does not yet have a position.

Tompros said it would be very difficult to craft a law banning socially harmful deepfakes without sweeping up more traditional forms of political speech.

“Is it ‘deceptive audio or visual media’ if, for example, I take a ten-minute, very nuanced policy speech and I clip out five seconds in the middle where it sounds like the person is taking an extreme position?” he said.

Under that standard, a significant share of attack ads produced over the last half-century would be illegal. Still, Berman’s proposal is much narrower than past legislative attempts.

In 2017, Assemblyman Ed Chau, a Democrat from Monterey Park, introduced a bill that would have banned the online dissemination of any false information about a political candidate. Chau pulled the bill in the face of fierce pushback from civil liberties groups.

The focus on video and audio specifically could put this year’s proposal on firmer legal ground, said Eugene Volokh, a law professor at UCLA and the founder of the Volokh Conspiracy, a law blog hosted by the libertarian magazine Reason. Unlike a comment on climate change or the fiscal impact of tax legislation, where there is plenty of “dispute about what the actual truth is … with altered video or altered images, at least the person who is originating it will tend to know what’s true and what’s false,” he said.

He points to the 24 states that have criminal defamation laws that make it a punishable offense to knowingly or recklessly spread false information about a person. The U.S. Supreme Court has generally allowed these laws to remain on the books, although civil liberties organizations are fighting to change that.

Berman said he thinks his bill falls into that same category.

“There are restrictions around the First Amendment, including around the issue of fraud,” said Berman. “I don’t think the First Amendment applies to somebody’s ability to put fake words in my mouth.”

That might once have been a figure of speech, but no more. In the latest iteration of the technology, researchers at Adobe and at American and German universities produced an editing method that lets anyone type new words into a video’s transcript and have the person on screen appear to speak them.

The effect: using technology to literally put words into someone else’s mouth.

When the researchers showed their creations to viewers in a small survey, more than half mistook the fakes for the real thing.

CALmatters.org is a nonprofit, nonpartisan media venture explaining California policies and politics.
