Carnegie Mellon’s Kolter Joins OpenAI Board of Directors, Safety and Security Committee
OpenAI has announced the appointment of Carnegie Mellon University Professor Zico Kolter to its board of directors.
Kolter will also join the board’s Safety and Security Committee. The committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects.
“I'm excited to announce that I am joining the OpenAI board of directors,” said Kolter, the head of the Machine Learning Department in CMU’s School of Computer Science. “I'm looking forward to sharing my perspectives and expertise on AI safety and robustness to help guide the amazing work being done at OpenAI.”
Kolter’s work predominantly focuses on artificial intelligence safety, alignment and the robustness of machine learning classifiers. His research and expertise span new deep network architectures, innovative methods for understanding how data influences models, and automated approaches for evaluating AI model robustness, making him a director who brings deep technical expertise to OpenAI’s governance.
“Zico adds deep technical understanding and perspective in AI safety and robustness that will help us ensure artificial general intelligence benefits all of humanity,” said Bret Taylor, chair of OpenAI’s board.
Kolter has been a key figure at CMU for 12 years, with affiliations in the Computer Science Department, the Software and Societal Systems Department, the Robotics Institute, the CyLab Security and Privacy Institute, and the College of Engineering's Electrical and Computer Engineering Department. He completed his Ph.D. in computer science at Stanford University in 2010, followed by a postdoctoral fellowship at the Massachusetts Institute of Technology from 2010 to 2012. Throughout his career, he has made significant contributions to the field of machine learning, authoring numerous award-winning papers at leading conferences such as the Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML) and the International Conference on Artificial Intelligence and Statistics (AISTATS).
Kolter’s research includes developing the first methods for creating deep learning models with guaranteed robustness. He also pioneered techniques for embedding hard constraints into AI models using classical optimization within neural network layers. More recently, in 2023, his team developed innovative methods for automatically assessing the safety of large language models, demonstrating that existing model safeguards can be bypassed through automated optimization techniques. Alongside his academic work, Kolter has collaborated closely with industry throughout his career, formerly as chief data scientist at C3.ai, and currently as chief expert at Bosch and chief technical adviser at Gray Swan, a startup specializing in AI safety and security.