Legislative gap leaves New Zealand exposed to deepfakes

Written by:
Diana Clement

The march of technology means almost anyone can create deepfakes, often in a matter of minutes

New Zealand’s law is woefully behind when it comes to deepfakes. But the best law in the world will not be enough to combat the digital manipulation of a person’s likeness unless it is bolstered by other enforcement measures, says Alex Sims, a professor in the Department of Commercial Law at the University of Auckland Business School.

Almost anyone can now create deepfakes, often in a matter of minutes. They are used in scams, sometimes in real-time video calls, and often the images or videos are pornographic. There is very little that victims in New Zealand can do, except ask social media platforms to take the content down.

American popstar Taylor Swift knows the impact firsthand. In late January 2024, sexually graphic AI-generated deepfake images of the musician spread across multiple social media platforms, including Facebook and Reddit. One deepfake post was viewed 47 million times within 17 hours before it was taken down. The White House condemned the incident, deeming the counterfeit images “alarming”, and legislators in other countries expressed concern.

Deepfakes – images, videos or audio that have been digitally altered using artificial intelligence (AI) to look or sound like a person, but which aren’t real – are not a new problem, and Swift is just one of many celebrities, public figures and teenagers who have fallen victim to sexualised deepfakes in recent years. Studies of online deepfake videos have consistently found that 90% to 95% of them are non-consensual pornography, and that almost all of those (90%) depict women.

The technology can, of course, be used for good, says Antonia Modkova, director of IP and innovation at AI company Soul Machines: “For example, to reduce the cost of film production or allow people with motor disabilities to express themselves with a synthetic voice.” Deepfakes can also be used in parody or satire, she says. But selling pornographic deepfakes, or scamming people with them, is becoming a lucrative enterprise. And in some cases, the intent is to harm.

First port of call

The first port of call for New Zealand lawyers is the Harmful Digital Communications Act 2015 (HDC Act). Few would argue that deepfake porn isn’t harmful, but the issue is that it doesn’t meet the definition of “intimate visual recording” under s 22A of the Act: deepfakes don’t actually feature a victim’s body. It’s all artificial, says Arran Hunt, a partner at McVeagh Fleming. “No matter how close it gets to looking real, the definition is focused on something that captures reality, a particular point in time, not something that has been created.”

“The way they worded [it] seemed to be focused on somebody’s actual body being in the picture rather than a representation of their body. The person [creating] the deepfake could argue it’s not their real body,” Hunt says, adding it would take a very adventurous bench to bring a deepfake within the interpretation.

Even if deepfakes were covered by the HDC Act, remedying the harm would depend on the nature of the breach. The statute has both criminal and civil remedies. For a breach to be criminal, there must be an intent to cause harm. If the motivation is instead financial, sexual gratification or notoriety, the breach falls under the Act’s civil remedies, which usually means an apology and removal of the content at best. The criminal pathway is also prohibitively expensive.

Hunt says another issue is that New Zealand police officers largely take the view that if it’s online, you can just ignore it. “It’s not reality. You can’t ignore these things because they’re everywhere. They’re massive. They spread.”

Many in the legal profession agree that legislators missed an opportunity when the Harmful Digital Communications (Unauthorised Posting of Intimate Visual Recording) Amendment Act was passed in 2022. Hunt says it didn’t cover deepfakes, despite multiple submissions, including his own, calling for their inclusion. Modkova says the failure to bring deepfake porn under the HDC Act when it was amended was a classic example of laws not keeping up with technology. “It’s not clear to me why Parliament didn’t clarify the wording to unambiguously include deepfakes, given that I saw most submissions in favour of covering deepfakes and the fact that a deepfake would have the same harmful effect on a victim as a recording.”

Other laws

Deepfakes can, in theory, be covered by many other laws. They are proliferating in politics, for example, and Modkova points to s 197 of the Electoral Act 1993 (interfering with or influencing voters) and s 199A (publishing false statements to influence voters) as potentially helpful provisions. The presence of an overseas perpetrator could, however, complicate matters. Another potential avenue for redress is the Crimes Act 1961, which could be used where scammers deploy deepfakes to defraud.

The Defamation Act 1992 and the Harassment Act 1997 could also be used, says Hunt. “But those are both old legislation, which is, again, very court-heavy, expensive, and mostly civil remedies.”

Bella Stuart, a recent law graduate from the University of Otago whose Honours dissertation focused on deepfakes, cites other laws, such as the Films, Videos, and Publications Classification Act 1993. It establishes New Zealand’s content censorship regime by criminalising, among other things, the making and distributing of objectionable publications. However, the Court of Appeal has restricted the definition of objectionable publications to those dealing with the activity of sex. “While paradigmatic deepfake pornography could be objectionable, deepfake imagery falling short of sexual activity (such as mere nudity) could not,” Stuart writes. There may also be issues establishing injury to the public good where only an individual is targeted. In general, current legislation isn’t necessarily fit for purpose, she says.

Overseas

Asked if New Zealand lags behind in legislating for deepfakes, Hunt says: “I think perhaps in the occasional sport or the occasional area of industry we do well, but when it comes to things like law, we are so far behind constantly.

“It used to be that we make laws when it starts to hit the fan, and [we think] ‘oh, yeah, quick, we’ve got to [look] like we’re acting’. It was very clear this [deepfake legislation] should have been sorted out when they did the amendment. And the original legislation was badly done in the first place. So, it’s just been a mess [added] to a mess,” he says.

Instead of waiting, New Zealand and other countries should have acted when the deepfakes of Swift and other high-profile young women started circulating. Stuart adds: “Swift’s experience is a timely reminder that New Zealand only has so long to take proactive action before we are left scrambling to respond. Parliament must heed this timely warning and act quickly to protect New Zealanders from this newest manifestation of image-based sexual abuse.”

MPs can take a leaf out of the books of Australia and other overseas jurisdictions. Australia’s Online Safety Act 2021 was legislated with deepfakes in mind and provides civil penalties for the non-consensual sharing of intimate images, including those that have been altered, says Hunt. He cites the s 15 definition of intimate images, which includes material that depicts, or appears to depict, certain body parts in circumstances in which an ordinary reasonable person would reasonably expect to be afforded privacy.

The UK has a very similar law, which took inspiration from the Australian statute, Hunt explains. The UK Online Safety Act 2023 added a new provision to the Sexual Offences Act 2003, spelling out that images created or altered by computer graphics, or in any other way, fall within the definition of photograph or film. In a provision concerned with sharing, or threatening to share, intimate photographs or film, it also added the words “appears to show” another person in an intimate state. That Act is also triggered if religious or cultural attire is removed. “They were thinking beyond just sexual, but about other things that could cause offence,” Hunt says. He goes further, saying that simply creating AI images of somebody could cause offence.

The US DEFIANCE Act 2024 – which stands for “Disrupt Explicit Forged Images and Non-Consensual Edits” – provides a civil remedy for digital forgeries depicting a victim in the nude, or engaged in sexually explicit conduct or sexual scenarios. A bipartisan bill called the AI Labeling Act of 2023 has also been introduced in the US; it would require clear labelling and disclosures on AI-generated content and chatbots.

Modkova highlights the Federal Communications Commission, which in February banned robocalls that feature voices generated by AI, a move aimed at stemming the tide of AI-generated scams and misinformation campaigns. Businesses must now obtain consent for automated telemarketing calls using AI-generated voices under the Telephone Consumer Protection Act.

The law might not protect

All the law in the world may not protect victims, says Professor Sims, who argues the answer may lie not in the law alone but in a combination of tools: social media takedown mechanisms, helping people become savvier about what they see online, and harnessing technology.

“Even if we have laws, enforcing those laws is the problem,” Sims says. “Even if a prosecution is successful, the harm is done.”

People need to change their behaviour, she says. “First of all, people need to learn not to trust photos, videos or voice recordings, even from reputable sources. There is a potential for using tech mechanisms, including blockchain, to authenticate images to see that (a) they were issued by the person claiming to have issued them and (b) that they haven’t been altered. But again, this requires that people do a bit of digging or verification before accepting something, which goes against human nature.”

Victims need to use social media takedown policies when they become aware of deepfakes circulating. “Social media has zero tolerance for posting non-consensual nudity; that actually works,” says Sims.

The phenomenon of people impersonating others in a believable way isn’t new, the professor says. Manipulated images have been used for a long time and scammers have often impersonated others. Deepfakes are simply a new way of doing it. Live deepfakes – people being created on live calls through AI – are more worrying in that they are on the cusp of becoming believable, says Sims. A widely reported case in early February involved a Hong Kong-based employee of a financial firm who was duped into transferring NZ$42m to scammers after joining a video call with what appeared to be the company’s chief financial officer and other colleagues – only the other parties to the call were fakes who resembled living, breathing people.

It’s not the only deepfake scam to surface recently. In one instance reported last year, a mother answered the phone to hear her daughter screaming and sobbing. The scammers had put the girl’s voice through an AI engine, and the scam would have succeeded had the daughter not been asleep in bed at home.
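To make the authentication idea Sims describes concrete, here is a minimal sketch of how an image could be cryptographically signed by its issuer and later verified. It is illustrative only: it uses Python’s third-party cryptography library and an Ed25519 key pair as assumed ingredients, and a real scheme (blockchain-anchored or otherwise) would also need a trusted way to publish and look up issuers’ public keys.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: generate a key pair and sign a digest of the image bytes.
# The public key would be published somewhere verifiers trust (a
# blockchain is one option Sims mentions for anchoring this).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image file contents..."  # stand-in for a real file
signature = private_key.sign(hashlib.sha256(image_bytes).digest())


# Verifier side: recompute the digest and check the signature.
# (a) A valid signature ties the image to the claimed issuer's key;
# (b) any altered byte changes the digest, so verification fails.
def is_authentic(image: bytes, sig: bytes, issuer_public_key) -> bool:
    try:
        issuer_public_key.verify(sig, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False


print(is_authentic(image_bytes, signature, public_key))        # True
print(is_authentic(image_bytes + b"x", signature, public_key))  # False
```

As the sketch suggests, the cryptography is the easy part; Sims’ caveat stands, because the scheme only helps if viewers actually check signatures before believing what they see.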

Real or fake faces

Sims points out that many New Zealanders in recent years have been scammed out of money in business redirection scams: hackers access a business’s email account, then send an email from that account asking a customer to change the payment details they hold. The new account, of course, belongs to the scammer. Deepfakes can be used in this type of scam because people can inadvertently trust AI-created faces and voices.

A study by Dr Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, asked participants to identify whether 800 faces were real or fake. It found that participants trusted the AI-created synthetic faces more than the real ones.

People may also struggle to tell real voices from fake ones. University College London academic Kimberly T Mai and colleagues published research in August 2023 suggesting humans cannot reliably detect speech deepfakes: listeners in both Mandarin and English correctly spotted the deepfakes only 73% of the time.

Tech solutions are being developed to counter deepfakes, although Sims warns that people need to be wary of them. Criminals are usually one step ahead of such software, and tech solutions aren’t failsafe. In 2023, the BBC put Intel’s deepfake detector, FakeCatcher, to the test. The technology, which detects changes in people’s blood flow, worked for most videos where someone’s mouth and voice had been altered, but FakeCatcher was less effective with real, authentic videos: some videos it said were fake were actually real, and the more pixelated a video was, the harder it was to pick up blood flow.

Article contributors (left to right): Bella Stuart, Antonia Modkova, Arran Hunt, Professor Alex Sims.

Credit: This article originally appeared in LawNews and is republished here with permission from the author and the publication.

If you would like to talk further about legal matters related to this article, then contact article contributor Arran Hunt: ahunt@mcveaghfleming.co.nz


----------------------------------------------------------------------------------------------------------

© McVeagh Fleming 2024

This article is published for general information purposes only. Legal content in this article is necessarily of a general nature and should not be relied upon as legal advice. If you require specific legal advice in respect of any legal issue, you should always engage a lawyer to provide that advice.
