
ChatGPT Under Investigation By Federal Trade Commission

OpenAI’s ChatGPT is under investigation by the Federal Trade Commission (FTC) following accusations that it has harmed people by publishing false information, posing a potential threat to the app.

A civil subpoena claims the investigation is focusing on whether ChatGPT “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.”

The company was asked to “describe in detail the extent to which you have taken steps to address or mitigate risks that your large language model products could generate statements about real individuals that are false, misleading or disparaging.”
The investigation, led by Chair Lina Khan, marks a significant escalation of the federal government’s role in policing AI technology.

Khan explained that the FTC is concerned ChatGPT and other AI apps are not vetted for the data they mine.

“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else. We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about.”

Adam Kovacevich, founder of the Chamber of Progress, said, “When ChatGPT says something wrong about somebody and might have caused damage to their reputation, is that a matter for the FTC’s jurisdiction? I don’t think that’s clear at all.”

He added that these matters “are more in the realm of speech and it becomes speech regulation, which is beyond their authority.”

Sam Altman, Chief Executive of OpenAI, tweeted that it’s “disappointing to see the FTC’s request start with a leak and does not help build trust,” adding that the company will work with the FTC.

Marc Rotenberg, who heads a group that filed a complaint over ChatGPT, said it was unclear whether the FTC has jurisdiction over defamation; however, “misleading advertising is clearly within the FTC’s purview. And disinformation relating to commercial practices is already, according to the FTC, an area within its authority.”

The complaint claimed ChatGPT was “biased, deceptive and a risk to privacy and public safety,” arguing it satisfied none of the FTC’s guidelines for AI use.

The FTC has broad authority when it comes to policing unfair and deceptive business practices that can harm consumers, along with unfair competition.

Khan has previously been accused of pushing the agency’s authority too far, coming under fire over its investigation into Twitter’s privacy protections for consumers.

It has been claimed that the probe was driven by liberals angry about the takeover by Elon Musk and his loosening of content moderation policies.

Twitter has asked for a 2022 settlement to be terminated, claiming it has been subjected to a “burdensome and vexatious enforcement investigation.”

Khan claimed the agency was only interested in protecting users’ privacy and that “we are doing everything to make sure Twitter is complying with the order.”

The FTC has asked OpenAI many detailed questions about its data security practices, citing a 2020 incident in which the company disclosed a bug that allowed users to see information about other users’ chats and some payment-related information.

Other topics covered include marketing efforts, practices for training AI models, and the handling of users’ personal information.

The Biden administration is looking into whether checks need to be placed on AI tools, and the White House’s Office of Science and Technology Policy is developing strategies to address the benefits of AI along with its harms.

Regulating AI has become a priority for the current Congress, amid worries that AI tools can be abused to manipulate voters, discriminate against minority groups, commit sophisticated financial crimes, and displace millions of workers, among other harms.

More specifically, lawmakers are very concerned about deepfake videos.

New legislation is likely still months away, but lawmakers are worried that significant action would risk slowing US innovation.

Altman has called on lawmakers to create licensing and safety standards for advanced AI systems.

“We understand that people are anxious about how it can change the way we live. We are too,” Altman said. “If this technology goes wrong, it can go quite wrong.”
