panel discussion at 2025 K&L Gates–CMU Conference

Experts Tackle Generative AI Ethics and Governance at 2025 K&L Gates–CMU Conference

Media Inquiries: Cassia Crogan, University Communications & Marketing

Thought leaders from industry, academia, government and civil society gathered to discuss the ethics and governance of generative artificial intelligence (GenAI) March 10-11 at the K&L Gates-Carnegie Mellon University Conference in Ethics and Computational Technologies.

Sponsored by the K&L Gates Endowment for Ethics and Computational Technologies, the conference examined the new ethical considerations and societal implications of GenAI and weighed the strengths and weaknesses of existing approaches to the governance of the technology to ensure safe, responsible and ethical use.

“Many of us recognize AI as one of the most important and transformative intellectual developments of our time,” said CMU President Farnam Jahanian, as he welcomed conference attendees. “Today we’re truly at an inflection point because of the proliferation of these technologies. These innovations move at such a fast speed that our development of ethical and policy frameworks has to keep up. Our colleagues at Carnegie Mellon, and the scholars and researchers that collaborate with them, are deeply involved in helping to envision and build a future where people, policy and technology are better connected and better served. With that, conversations like these are vital to getting AI right and fully leveraging AI technologies for the benefit of humanity.”

The conference examined the impact of AI on many sectors, including education, health care, transportation and national security. 

Theresa Mayer and DJ Patil

DJ Patil, general partner of GreatPoint Ventures and former U.S. chief data scientist, delivered a keynote address and held a fireside chat with Theresa Mayer, CMU vice president for research, on the conference’s first day.

Atoosa Kasirzadeh

Atoosa Kasirzadeh, assistant professor of philosophy in the Dietrich College of Humanities and Social Sciences, spoke on a panel that explored the new ethical concerns that arise with GenAI compared to conventional predictive AI. “The question of what are the measurement challenges when we want to enforce and implement AI risk governance frameworks are quite important,” she said. “I hope that in the next year you are going to see much more mature analysis of this question.” She was joined on stage by David Danks of the University of California, San Diego; Francesca Rossi, IBM AI ethics global leader; and panel moderator Sina Fazelpour of Northeastern University.

Mike Doyle

Mike Doyle, former U.S. representative from Pennsylvania’s 18th congressional district and government affairs counselor at K&L Gates, moderated a panel on existing government policies and guidance on GenAI with Jesse Dunietz, AI standards, policy and international engagement lead for the National Institute of Standards and Technology; Marc Rotenberg, executive director of the Center for AI and Digital Policy; and Max Katz, policy adviser for U.S. Sen. Martin Heinrich. 

Lorrie Cranor

Lorrie Cranor, FORE Systems University Professor of Computer Science and Engineering and Public Policy and director and Bosch Distinguished Professor in Security and Privacy Technologies at CMU’s CyLab Security and Privacy Institute, moderated a debate exploring internal organizational governance structures in major AI firms.

“If we’re going to have systems that can replace what a person could do over a year, that opens up a mind-boggling array of possibilities in terms of risk,” observed Zico Kolter, professor and director of the machine learning department. “These questions still feel a little like science fiction. But these are genuine concerns and we don’t have as much time as we think before they come to pass.”

David Lehman, a partner at K&L Gates, said that CMU has been at the forefront of the emerging field of AI governance, shaping its trajectory through groundbreaking research and policy engagement.

“As we see extraordinary developments in AI and the unprecedented momentum attracted by generative AI technology and its application in society, we’ve all become aware of how critical it is to have a well-informed, clear public discourse around the impacts and opportunities of AI,” he said. 

The joint conference was first held at CMU in 2018 and is led by initiative leaders Hoda Heidari from the School of Computer Science and Alex John London from the Dietrich College of Humanities and Social Sciences. A summary of the content produced through the discussions will be published on the K&L Gates Initiative’s website.

James H. Garrett Jr.

Provost James H. Garrett Jr. opened day two of the conference, which focused on the practical implementation of AI governance and the concrete impacts of the technology that are already manifesting themselves in education, health care, the workforce and beyond.

“The theme of AI ethics and governance deeply aligns with Carnegie Mellon’s institutional mission,” he said. “We’re committed to not only advancing technological frontiers, but also ensuring these advancements serve humanity ethically and responsibly.” 

Carolyn Austin

Carolyn Austin, director of practice innovation at K&L Gates, said that while GenAI is being increasingly adopted by lawyers and law firms to enhance productivity, it does not absolve them of their legal and ethical responsibilities. “Lawyers have duties, including confidentiality, clear communication with clients, and partners have a duty to supervise lawyers and others reporting to them in their teams when they are carrying out the work, including appropriate use of generative AI technology. On the flip side, lawyers also have a duty of competence, which explicitly includes understanding the capacity and limitations of technology, including GenAI, and periodically updating that understanding. We can’t opt out. All of which underlines the need for thoughtful, internal AI governance policies and processes and for regular, thorough and targeted GenAI education.”

Richard Scheines

Richard Scheines, dean of Dietrich College, moderated a fireside chat with Natasha Crampton, chief responsible AI officer for Microsoft. “First, it is the case, and this is very much my experience, that good governance is actually just good business,” Crampton said in response to a question about the potential for financial incentives to lead to unethical behavior in the AI industry. “But this is why, you know, it should not just be an industry endeavor, right? This is where academia plays a very important role. Governments play a very important role. Civil society plays an important role. There has to be a coming together of incentives to drive action across all of those players.”

Jodi Forlizzi, Sarah Fox, Marsha Lovett and Shiv Rao

Moderator Jodi Forlizzi, professor in computer science and CMU’s Human-Computer Interaction Institute (HCII), led a conversation on the impacts of GenAI in education, employment, medicine and the environment with Sarah Fox, assistant professor in HCII; Marsha Lovett, vice provost for teaching and learning innovation at CMU; and Shiv Rao, founder and CEO of Abridge.

Forlizzi asked Lovett how best to incorporate AI when training new employees. Lovett said it is important to examine both performance and learning outcomes.

“A learner may be able to perform better with the aid of an AI tool. It's a totally different question whether they have learned more with that AI tool,” Lovett said. 
