(Reuters) — OpenAI whistleblowers have filed a complaint with the U.S. Securities and Exchange Commission calling for an investigation into the artificial intelligence company’s allegedly restrictive nondisclosure agreements, according to a letter seen by Reuters.
“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the Commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” according to the letter, which was provided to Reuters by the office of Sen. Chuck Grassley of Iowa.
The AI company allegedly made employees sign agreements that required them to waive their federal rights to whistleblower compensation, according to the letter.
The whistleblowers requested that the SEC fine OpenAI for each improper agreement, to the extent the agency deems appropriate.
An SEC spokesperson said in an emailed statement that the agency does not comment on the existence or nonexistence of a possible whistleblower submission.
OpenAI did not immediately respond to requests for comment on the letter.
The news was first reported by the Washington Post.
The whistleblowers allege that OpenAI issued overly restrictive employment, severance and non-disclosure agreements to its employees, which could have led to penalties against workers who raised concerns about OpenAI to federal authorities.
The letter also says OpenAI required employees to get prior consent from the company if they wanted to disclose information to federal regulators, adding that OpenAI did not create exemptions in its employee nondisparagement clauses for disclosing securities violations to the SEC.
The letter also asked the SEC to require OpenAI to produce for inspection every contract that contained a nondisclosure provision, including employment agreements, severance agreements and investor agreements.
OpenAI’s chatbots, with generative AI capabilities such as engaging in human-like conversations and creating images from text prompts, have stirred safety concerns as AI models become more powerful.
OpenAI in May formed a Safety and Security Committee that will be led by board members, including CEO Sam Altman, as it begins training its next artificial intelligence model.