
Neural Notes: How AI is stretching Australian consumer protections

In this week’s edition: A lawsuit against a major chatbot provider, local experts weigh in on Australian consumer protections, and artists band together against tech companies training AI on unlicensed creative works.
David Adams
Dr Kayleen Manwaring, associate professor at the University of NSW. Source: SmartCompany via University of NSW and Adobe Stock.

Welcome to Neural Notes, the weekly column where Tegan Jones unpacks the latest artificial intelligence news. Tegan is taking a well-earned break, so I am filling in. She will return shortly. For now, thanks for your company.

Warning: this article discusses suicide.

This week I’ve been thinking about Sewell Setzer, a 14-year-old Florida boy who died by suicide in February, and his mother, Megan Garcia, who accuses artificial intelligence chat app Character.ai of complicity in his death.

The New York Times reports Setzer used Character.ai to speak with an AI bot modelled on Daenerys Targaryen, a central figure in the fantasy series Game of Thrones.

Setzer formed a deep connection to this digital avatar, Garcia claims, to the point where her son withdrew from school and his hobbies to spend more time speaking to ‘Dany’.

In these hours-long conversations, Setzer, reportedly diagnosed with anxiety and a mood disorder, told ‘Dany’ his thoughts of self-harm.

Garcia links his death to the platform, saying ‘Dany’ and Character.ai, which had a written catalogue of Setzer’s most intimate thoughts, did little to intervene.

Garcia is suing. A copy of her lawsuit, obtained by The Verge, claims wrongful death, negligence, and deceptive and unfair trade practices.

Character.ai has responded. It “takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform,” the company said in a Tuesday blog post.

A key change is a new “pop-up resource” triggered when users share certain phrases linked to self-harm or suicide, directing them to local prevention services.

Among other changes, it will also roll out a “revised disclaimer on every chat to remind users that the AI is not a real person.”

The lawsuit is yet to play out, but it already raises the question: to what extent are AI companies responsible for protecting consumers?

By alleging unfair trade practices, Garcia’s suit will test whether rules overseen by the Federal Trade Commission extend to the burgeoning AI sector, and what safety guarantees are owed to users.

In Australia, and in vastly different circumstances, the federal government is asking similar questions.

If you or someone you know is at risk of harm, call Lifeline now on 13 11 14.

You can also contact Beyond Blue on 1300 22 4636; Headspace on 1800 650 890; or The Suicide Call Back Service on 1300 659 467.

Is Australian Consumer Law capable of handling AI?

The Treasury opened a new consultation last week, asking if Australian Consumer Law (ACL) — which underpins the protections for individuals and small businesses, and liabilities for manufacturers — is fit for purpose in the age of artificial intelligence.

On one hand, the discussion paper says, ACL protections are widely applicable.

The legislation is principles-based and tech-agnostic, offering general and specific protections for consumers, whether they’re buying shovels or software.

On the other hand, AI is so malleable, its feedback so changeable, and its implementation potentially so opaque that new rules might be the answer.

“The regulatory landscape applying to AI-enabled goods and services is in a state of change,” the discussion paper states.

“To date no clear consensus has emerged among stakeholders regarding the extent to which it may be desirable to enhance certainty through either amendments to the ACL or regulatory guidance.”

Dr Kayleen Manwaring is an associate professor at the University of NSW, and an expert on how emerging technology intersects with consumer law.

Her recent work discusses how consumer law addresses, or fails to address, the problem of digital consumer manipulation.

At first glance, she says, existing protections already address some of the questions kicked up by AI.

“If you’ve got a chatbot that gives false or misleading information to a customer, you’re clearly liable” under the ACL or the ASIC Act, the mirror provision for financial services, she tells me. Here, the ACL is “probably adequate”.

But she does see potential gaps.

Unfairly manipulative practices — think AI-enabled chatbots using “doom and gloom” to cajole vulnerable consumers into buying things they don’t need — could be a concern.

“If it’s not strictly misleading or deceptive, it’s not currently covered by the Act appropriately, considering [existing] cases on unconscionable conduct,” she says.

“So that is probably an area, at least consumer advocates would consider, that there’s a gap.”

Artificial intelligence may also make it far easier and cheaper for marketers to deploy bespoke campaigns, leveraging data on a user’s mental health, history with addiction, or disabilities.

“This is where the issues come, and things that weren’t a big problem before become a much bigger problem, because the technology can make that conduct much easier and much more economic,” she says.

Another “vexed question” pertains to “hybrid” products that utilise both hardware and software.

Dr Manwaring says the ACL is very good at making sure consumer goods perform as advertised at the point of sale, but less so when those products change.

This is an issue, given the way manufacturers may update embedded AI models after purchase.

“Because these things can be changed, the language in the product liability section is not fit for purpose, and I will be banging that particular drum,” she says.

I present a hypothetical: I buy a car with an AI-powered navigation assistant, and an update sees that AI assistant point me straight into a river, totalling the vehicle. What recourse do I have?

It would be possible to sue, Dr Manwaring says, but that would only kick up the “really interesting question of causation” for the defect.

UNSW will likely submit its views to the Treasury, and Dr Manwaring intends to formally share her thoughts.

Others interested in voicing their opinion, whether they are academics, small business owners, or household users of AI technology, can do the same until November 14.

Other AI news this week

  • Canva is making use of its blockbuster Leonardo.ai acquisition, launching its new Dream Lab image generation tool.
  • A litany of artists, from Nobel Prize-winning novelist Kazuo Ishiguro to acclaimed actress Julianne Moore, signed an open letter stating that “the unlicensed use of creative works for training generative AI” is a “major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
  • Fresh research from Germany suggests news articles compiled with AI are less comprehensible than those fully written by humans.
