Support Grows for AI Whistleblower Protection Act

WASHINGTON – Senate Judiciary Committee Chairman Chuck Grassley (R-Iowa) welcomed growing support for his AI Whistleblower Protection Act from leading whistleblower and AI groups. This week, 22 groups, including the National Whistleblower Center, sent a letter backing Grassley’s legislation to Health, Education, Labor and Pensions (HELP) Committee Chairman Bill Cassidy (R-La.), whose committee has jurisdiction over the legislation. 

Grassley’s bill provides explicit whistleblower protections to those developing and deploying AI. Currently, AI companies’ alleged use of restrictive severance and nondisclosure agreements (NDAs) creates a chilling effect on current and former employees looking to make whistleblower disclosures to the federal government, including Congress.

“Transparency brings accountability. Today, too many people working in AI feel they’re unable to speak up when they see something wrong. Whistleblowers are one of the best ways to ensure Congress keeps pace as the AI industry rapidly develops. We need to act to make these protections crystal clear, and I’m proud to see so many groups supporting my legislation to increase accountability and protect AI whistleblowers,” Grassley said. 

The groups highlight the importance of whistleblowers as increased use of AI brings potential misuse, ethical lapses and unintended consequences.  

“Employees and industry insiders—rather than regulators—have consistently been among the first to warn about risks of the technologies they’re building. In Silicon Valley, engineers have exposed powerful AI models released without proper safeguards, former staff have surfaced data on youth digital harms, and researchers have stepped forward when serious risks were ignored. Their disclosures—often about conduct that was dangerous but not yet illegal—gave the public and policymakers the evidence needed to act,” the groups wrote. 

In their letter, the groups state some employees may be deterred from reporting issues due to fear of retaliation or professional repercussions. In June 2024, over a dozen current and former employees from leading AI companies publicly stated that confidentiality agreements and fear of retaliation prevented them from raising legitimate safety concerns. 

“Congress has the opportunity to protect individuals who come forward in good faith and to reinforce the principle that safety, ethics, and accountability must accompany innovation … [t]he AI Whistleblower Protection Act helps ensure that those working to develop and deploy AI systems are not punished for acting in the best interest of the public. Strong whistleblower protections are a cornerstone of responsible governance and essential to guiding AI development in a way that upholds our shared democratic values,” the groups continued. 

In addition to the National Whistleblower Center, the letter was signed by Americans for Responsible Innovation, Center for Democracy & Technology, Center for Humane Technology, Center for Youth and AI, CoFund, Demand Progress, Design It For Us Coalition, Encode AI, Government Accountability Project, National Consumers League, National Decency Coalition, National Employment Law Project, NoSo November, Psst.org, Public Knowledge, Secure AI Project, The Anti-Fraud Coalition, The Tech Oversight Project, The Signals Network, Working Partnerships USA and Young People’s Alliance.

Download the groups’ letter HERE. Download text of the bill HERE.

Background:

Last year, Grassley sent a letter to OpenAI CEO Sam Altman raising concerns about the alleged use of illegally restrictive NDAs, as well as the company’s employment, severance and non-disparagement agreements. 

-30-