Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases

2025-11-02 21:21 | Last updated at 21:30

If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.

Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker's release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it would hurt people's mental health.

“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”

OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter's oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.

Safety has been central to OpenAI's mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company was accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought concerns that the company had strayed from its mission to a wider audience.

The San Francisco-based organization faced pushback — including a lawsuit from co-founder Elon Musk — when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.

Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.

At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.

Kolter will be a member of the nonprofit's board but not the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and access to the information that board receives about AI safety decisions, according to Bonta's memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.

Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board — one of them is former U.S. Army General Paul Nakasone, who was commander of the U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.

“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.

Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity – “Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?” – to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.

“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”

“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”

OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.

Kolter, director of Carnegie Mellon's machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.

“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”

Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn't expect how rapidly AI would advance.

“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.

AI safety advocates will be closely watching OpenAI's restructuring and Kolter's work. One of the company's sharpest critics says he's “cautiously optimistic,” particularly if Kolter's group “is actually able to hire staff and play a robust role.”

“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.

“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”

Carnegie Mellon University Head of Machine Learning, Zico Kolter delivers a keynote speech at AI Horizons Summit in Bakery Square on Thursday, Sept. 11, 2025 in Pittsburgh. (Sebastian Foltz/Pittsburgh Post-Gazette via AP)

DENVER (AP) — A Colorado woman suspected of killing two of her young children during a custody dispute with her ex-husband two years ago has been extradited from Britain to the United States to face charges in their deaths.

Kimberlee Singler was arrested in December 2023 in London just over a week after her 9-year-old daughter and 7-year-old son were found dead in their home in Colorado Springs. Her 11-year-old daughter was injured but survived. The girl eventually told an investigator that her mother said God made her do it, according to court documents in the United Kingdom.

Singler, who had superficial knife wounds, told police that a man who entered the apartment was responsible. She was initially considered a victim.

The surviving daughter backed up Singler’s claim at first, but police sought to arrest Singler on Dec. 26, 2023, after they said the girl changed her story. By then Singler was gone. She was found four days later in London’s upscale Chelsea neighborhood and arrested. It is not known why she ended up there.

Singler, 37, fought extradition and denied attacking her children. Her London defense attorney, Edward Fitzgerald, argued that Singler should not be extradited because, if convicted of first-degree murder in Colorado, she would face a mandatory sentence of life without parole, a sentence that violates European human rights law. Fitzgerald represented WikiLeaks founder Julian Assange in his long fight against extradition to face espionage charges in the U.S.

Singler does not yet have a U.S.-based attorney listed as representing her in court documents, according to the court clerk’s office.

A judge rejected Singler's challenge in January 2025, and her bid for an appeal was rejected in November.

Singler has been charged with two counts of first-degree murder in the deaths of the two children. The family had been staying with Singler's mother during the custody battle, but the mother was away at the time, according to court documents in the U.K.

Singler also faces one count of attempted murder, three counts of child abuse and one count of assault.

According to U.K. court documents, the children's bodies were found by police shortly after midnight on Dec. 19, 2023. Police said they found no footprints in the snow leading to a patio where Singler said an intruder entered through an unlocked door and attacked her, causing her to lose consciousness.

She told police that her ex-husband “had previously dreamt about killing his family, that the children’s father was always trying to ‘frame her’ and ‘get her arrested’ and to have the kids taken away from her,” Judge John Zani said in a January ruling against Singler in Westminster Magistrates' Court.

Police said GPS records showed that her ex-husband was driving a truck at the time of the killings about 80 miles (130 kilometers) away.

The day before the bodies of the children were found, a judge in Colorado ordered Singler to comply with a previous order to allow the father to take custody of them for the holidays, according to state court records. She was told to either give the children to her ex-husband on her own or bring them to a Dec. 20, 2023, court hearing to exchange custody of them there.

On the day of the hearing, Singler asked the judge to delay it, writing in a motion that she and her children had been attacked and that two of the children were murdered. She asked for time to grieve the loss of her children and “gain my bearings after this incident.”

FILE - Law enforcement respond to a location where children were found dead inside of a condo in the Palomino Ranch Point complex, Dec. 19, 2023, in Colorado Springs, Colo. (Parker Seibold/The Gazette via AP, File)
