
AI Chatbots Need to Tell You Their Model, Version, Training

Consumers risk being duped by the next generation of AI unless it offers complete transparency.

A bank customer may not know that the generative AI model giving them personal investment advice was inadequately trained on current financial products, or has no knowledge of the customer’s investment history.

Likewise, a female job applicant may not know she was rejected because the AI was trained mainly on data from earlier hiring rounds in which men were predominantly the successful candidates.

Speaking at the Tech Leaders’ Conference at Pokolbin, an expert in regulation and governance said consumers should be armed with the details of AI systems that assess or help them, should they need to question whether those systems are fit for purpose, or even take legal action.

Associate professor Rob Nicholls, from the University of New South Wales Business School, said consumers must have access to an AI model’s name, version, model type, details of training data used, and model limitations and bias.

An AI chatbot could be asked directly to provide its model details; otherwise, the AI developer should supply the information.

Rob Nicholls, UNSW Business School

And employment agencies, governments and others should be required to disclose to clients when they use AI.

Transparency was vital as society looked towards regulating both AI and the new generative AI large language models, Assoc Prof Nicholls said.

Despite the concerns about artificial intelligence and “unintended bias”, some people are prepared to put more trust in AI than their fellow humans.

The conference also heard from Stela Solar, the director of Australia’s Artificial Intelligence Centre. She quoted research from sapia.ai showing that 30 percent more women would apply for a job if they were told they would be assessed by AI rather than by a human.

“AI itself is neither good nor bad; it’s actually how we use it,” she told the conference.

She remained positive about AI technology despite its much discussed drawbacks.

Organisations that used AI, she said, saw better customer experiences, faster decision-making, improved products and services, and higher productivity.

“AI was fast becoming a leading indicator for the commercial competitiveness of a business, and in turn for an economy.”

She said Australia ranked “incredibly highly” in terms of AI research, but the community’s understanding of AI was below the global average.

A recent Ipsos survey of consumer sentiment found that Australia was “the most nervous country” about AI. Another analysis found Australia had the lowest trust in AI systems.

She said 30 percent of professionals had tried ChatGPT at work, but 68 percent were using AI tools without telling their organisation.

“This is a great sign that people are finding ways of doing things more productively and effectively.” But this brought risks that organisations were not aware of.

AI was being used to forecast, predict and model emissions, and could help deliver quality health care, as there were not enough medical professionals in the world to meet demand.

Stela Solar, director of Australia’s Artificial Intelligence Centre.

Ms Solar said Royal Perth Hospital ran a program called Health in Virtual Environments in which four medical professionals monitored 200 patients remotely, rather than relying on walk-by examinations.

Organisations also revealed their longer-term plans for generative AI.

Suncorp CIO Adam Bennett said chatbots would in future be used to discuss personal finances with customers.

AWS chief technologist for ANZ Rada Stanic spoke of more complex interactions with chatbots on e-commerce websites, with flight bookings, and in insurance claims processing. She indicated it might be possible in the future for the Amazon Store or other retailers to offer a shop-assistant-style interaction, where you would ask a chatbot questions as you would a staff member in-store.

There was the prospect of customer databases knowing more personal details about you, such as your footy team, and a chatbot starting a conversation by discussing highlights from last weekend’s game, gleaned from a published match report.

But businesses needed to embrace responsible AI standards.

Risk advisory partner at Deloitte Ela Wurth spoke of the need for a coherent set of Australian standards that would guide organisations implementing AI systems.

Dozens of countries have signed on to the OECD’s set of AI principles, and more than 190 have adopted a global agreement on the ethics of artificial intelligence. There are also AI principles governing international trade.

There were other revelations at the conference. Zachary Zeus from PyxGlobal declared blockchain a failure, citing its long-term inability to achieve mainstream adoption as his evidence.

Chris Griffith attended the Tech Leaders’ conference at Pokolbin courtesy of MediaConnect.


