US regulator investigates ChatGPT maker over potential harm to consumers
The US Federal Trade Commission has opened an investigation into OpenAI, the maker of ChatGPT, on claims it has run afoul of consumer protection laws by putting personal reputations and data at risk, according to an FTC demand for information sent to the company.
The move marks the strongest regulatory threat to the Microsoft-backed startup that kicked off the frenzy in generative artificial intelligence, enthralling consumers and businesses while raising concerns about its potential risks.
The FTC this week sent a 20-page demand for records about how OpenAI addresses risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers.
One of the questions has to do with steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging.”
The Washington Post was first to report the probe. The FTC declined to comment. OpenAI did not immediately respond to a request for comment.
As the race to develop more powerful AI services accelerates, regulatory scrutiny is growing of a technology that could upend the way societies and businesses operate.
Global regulators are aiming to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, Reuters reported in May.
In the United States, Senate Majority Leader Chuck Schumer has called for “comprehensive legislation” to advance and ensure safeguards on AI and will hold a series of forums later this year.
OpenAI in March also ran into trouble in Italy, where the regulator had ChatGPT taken offline over accusations OpenAI violated the European Union’s GDPR – a wide-ranging privacy regime enacted in 2018.
ChatGPT was later reinstated after the US company agreed to install age-verification features and let European users block their information from being used to train the AI model.
Reuters – by Diane Bartz, Mrinmay Dey and Samrhitha Arunasalam