CMU and NIST Team To Manage AI Risk
Working closely with the National Institute of Standards and Technology (NIST), the Responsible AI Initiative of Carnegie Mellon University’s Block Center for Technology and Society hosted a workshop this July with the goal of operationalizing the NIST AI Risk Management Framework (AI RMF).
The framework, the result of a broad collaboration with the public and private sectors, provides guidelines to better manage the potential risks of AI systems at all levels of society. Use of the framework is voluntary, and it can help integrate trustworthiness into AI by shaping how AI systems are designed, developed, used and evaluated.
For nearly 70 years, Carnegie Mellon University has advanced artificial intelligence (AI) to shape the future of society. From health care and robotics to data science and, occasionally, self-driving Zamboni machines, CMU researchers are at the forefront of the AI revolution.
To continue that mission, the Block Center will provide funds to CMU faculty teams pursuing research ideas, generated at the workshop, for operationalizing the AI RMF.
“Artificial intelligence isn’t a sector; it’s a tool that will be used in every sector,” said Steve Wray, executive director of the Block Center. “Carnegie Mellon is the right place for this effort because of the practical work we are doing on the ground. CMU can help users identify the issues they may be facing with AI and how to use that AI responsibly, because we know the incredible value that AI can bring. But if it’s not done well, it can be risky and dangerous.”
The event paired government officials and private sector leaders with CMU AI experts. Ramayya Krishnan, dean of the Heinz College of Information Systems and Public Policy and faculty director of the Block Center, who serves on the National Artificial Intelligence Advisory Committee, said, “Carnegie Mellon has served as the epicenter of AI, and our contribution to the field has only grown in recent decades. Our commitment to innovation, with a focus on responsible operationalization of AI technology in consequential societal systems, will inform both the policy and practice of important frameworks such as the AI RMF.”
Organizers included Rayid Ghani, a Distinguished Career Professor in the Machine Learning Department and the Heinz College; Jodi Forlizzi, the Herbert A. Simon Professor of Computer Science and Human-Computer Interaction in the School of Computer Science (SCS); and Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in the Machine Learning Department and the Software and Societal Systems Department. All three are among the co-leads of the Responsible AI Initiative.
Heidari said the seed funding distributed through the Responsible AI Initiative aims to support use-case-focused research projects that apply the NIST AI Risk Management Framework to address AI risks such as bias, data privacy violations and lack of transparency.
“Our goal is to ensure that the high-quality research our faculty does gets translated into positive impact on the policy and practice of AI,” Heidari said. “External partnerships are extremely important to this effort. They close the gap between our research and educational efforts and the needs of stakeholders on the ground.”
Martial Hebert, dean of the School of Computer Science, said that CMU has spent decades building a culture where people care about using technology to solve real problems.
“Building on the work that NIST has done and CMU’s knowledge of the NIST AI Risk Management Framework, we will work to ensure that we deploy this powerful technology in a way that acknowledges and manages the risks that accompany innovation and exploration. I am looking forward to participating in these conversations and to furthering this relationship going forward,” Hebert said.
As for mitigating the risks and exploring the full potential of AI, Wray pointed out that all tools are only as good as the people who build them.
“If we're talking about AI, a lot of it was invented here at CMU. We’re still inventing it. We have engineers working with computer scientists, public policy folks working with our business school and ethicists. That interdisciplinary approach is just part of our DNA at Carnegie Mellon,” Wray said. “We understand AI, and we bring a willingness to roll up our sleeves and get to work.”