FTC investigation of OpenAI scrutinizes AI data security

Key Speakers At Bloomberg Technology Summit
Sam Altman, chief executive officer of OpenAI, said it was "disappointing" to see an investigation by the Federal Trade Commission into his company "start with a leak." The FTC is investigating whether OpenAI has engaged in "unfair or deceptive privacy or data security practices."
David Paul Morris/Bloomberg

An investigation by the Federal Trade Commission into practices at ChatGPT maker OpenAI highlights some of the primary risks of AI on which regulators are focusing, including many risks that concern banks. One special area of focus is protections for users' personal data.

The Washington Post first reported the investigation, citing a letter the FTC sent to OpenAI detailing the commission's requests. The letter states that the FTC is investigating whether OpenAI "engaged in unfair or deceptive privacy or data security practices" or "engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm."

OpenAI founder and CEO Sam Altman said in a Tweet that he was disappointed the investigation "started with a leak" but that the company would comply with the FTC's requests. The FTC has not publicly acknowledged the investigation.

Banks have started experimenting with large language models — the technology behind ChatGPT and competitors such as Google's Bard — primarily for applications such as organizing institutional knowledge and providing customer service via chatbots, but the technology's use has largely remained internal to mitigate risks with a technology that has also gained the interest of regulators.

The FTC's investigation touches on multiple concerns that lawmakers have expressed to Altman, including in a May hearing before a Senate subcommittee. One is how OpenAI markets its technology, including to institutional customers like Morgan Stanley, which recently turned to OpenAI for assistance on a years-long mission of having AI help financial advisors sort through the company's 100,000 annual research reports.

The bulk of the FTC's request revolves around "false, misleading or disparaging statements" that OpenAI's models could make or have made about individuals. For banks, perhaps the most relevant requests concern the company's practices with respect to protecting consumer data and about the security of the model itself.

Senate Judiciary Subcommittee Hearing On Artificial Intelligence

During a Tuesday hearing, lawmakers talked through how to properly regulate the wide-ranging uses of AI. Some voiced support for forming a new AI agency.

May 17

Protecting consumer data

The FTC requested details from OpenAI about a data breach from March in which some ChatGPT users could see other ChatGPT Plus users' payment-related information and chat titles. The payment-related information included the user's first and last name, email address, payment address, credit card type and last four digits of a credit card number. Full credit card numbers were never exposed, according to the company.

After that breach, OpenAI published technical details on how the breach happened. In summary, a change the company made to a server caused the server to, in certain cases, share cached data with users even if the data belonged to a different user.

The FTC also asked OpenAI about its practices for handling users' personal information, something that has been a focus for the FTC and the Consumer Financial Bureau in recent rulemaking processes that concern financial data. Banks have faced similar scrutiny in the past, and a large list of data breach reporting rules requires banks to give regulators early warning about breaches of consumer data.

Regulators and lawmakers have also expressed concerns about the ends to which companies have used large language models. In the May hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, part of the Senate Judiciary Committee, Senator Josh Hawley asked Altman about training AI models on data about the kinds of content that gain and keep users' attention on social media and the "manipulation" that could come of that amid what he called a "war for clicks" on social media.

"We should be concerned about that," Altman said, but he added that OpenAI does not do that kind of work. "I think other companies are already — and certainly will in the future — use AI models to create very good ad predictions of what a user will like."

Hacking the models

The FTC also asked OpenAI to share any information the company has gathered about what it called "prompt injection" attacks that can cause the model to output information or generate statements that OpenAI has trained the model not to provide.

For example, users have documented cases of getting the model to output the ingredients for the explosive napalm or provide Windows 11 keys. Mainly, users have induced these outputs by instructing the model to impersonate the user's dead grandmother, who would provide this information to help them fall asleep at night.

This method has worked for other, contrived role-playing scenarios, as well. For example, one user told the model to act as a typist, dictating the words of someone who is writing a script about a movie in which a grandmother is trying to get her young grandson to fall asleep by reciting Linux malware. It worked.

Banks that have launched AI chatbots have been careful not to give the products any capabilities that go beyond what the bank needs them to do, according to Doug Wilbert, managing director in the risk and compliance division at consulting firm Protiviti. For example, AI chatbots like Capital One's Eno cannot answer even some seemingly basic questions, like whether the chatbot is a large language model.

"It's not going to answer everything. It's going to have a focus on particular areas," Wilbert said. "Part of the problem is giving bad information to a client is bad, so you want to ring-fence what it's going to say and what it's going to do, because especially on the customer service side, regulators are looking at wait times, chat times, responses — things like that."

For reprint and licensing requests for this article, click here.
Artificial intelligence Data security Cyber security Technology
MORE FROM DIGITAL INSURANCE