Skip to main content

CMU Supports NIST Guidelines on Red Teaming for Generative AI

Media Inquiries
Name
Peter Kerwin
Title
University Communications & Marketing

Carnegie Mellon University’s Block Center for Technology and Society(opens in new window) and K&L Gates Initiative in Ethics and Computational Technologies(opens in new window) released a white paper(opens in new window) that will support national efforts to ensure that AI systems are safe, secure and trustworthy. The white paper followed a workshop the groups hosted in late February on red teaming — strategic testing to identify flaws and vulnerabilities in AI systems. There, experts from academia and industry worked to gain a shared understanding of red teaming for generative AI.

The workshop was in response to an executive order(opens in new window) released by President Joe Biden that set his administration’s priorities related to artificial intelligence used by Americans. It called for the National Institute of Standards and Technology (NIST)(opens in new window) to develop tools and tests to help ensure that AI systems fit those standards. 

CMU frequently collaborates with NIST on AI issues, said Theresa Mayer(opens in new window), CMU’s vice president for research.

“Carnegie Mellon is proud to continue supporting this important work in providing the foundation of our nation's AI strategy as this technology continues to be implemented in the public sector. We've been deeply engaged with NIST and their ongoing work providing guidelines for this technology that will be vital in moving forward responsibly integrating AI tools and software into the federal government's everyday operations,” she said. 

Hoda Heidari(opens in new window), the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in CMU’s School of Computer Science(opens in new window), was a conference organizer. She said there are significant questions about how to best use red teaming.

“In response to a rising concern surrounding the safety, security and trustworthiness of generative AI models, practitioners and regulators alike have pointed to AI red teaming as a key strategy for identifying and mitigating societal risks of these models,” Heidari said. “However, despite AI red teaming retaining a central role in recent policy discussions and corporate messaging, significant questions remain about what precisely it means, how it relates to conventional red teaming practices and cybersecurity ... how it should be conducted and what role it can play in the future evaluation and regulation of generative AI.”

The workshop included discussions on research, industry practices and the policy and legal implications of AI red teaming. In addition to the white paper summary, video recordings of the event are available on the Block Center’s YouTube (opens in new window)channel. 

Hoda Heidari

Hoda Heidari

Key Points from the White Paper

  • A functional definition of red teaming, its components, scope and limitations, is necessary for effective red teaming. 
  • Generative AI research and practice communities must move toward standards and best practices around red teaming.
  • The composition of the red team (in terms of diversity of backgrounds and expertise) is an important consideration.
  • Red teaming efforts should address the broader system — as opposed to individual components.
  • The broader political economy (e.g., market forces, regulations) will influence the practice of red teaming.

More Information(opens in new window)

— Related Content —