OpenAI’s new boss looks just like the old boss. But the company, and the artificial intelligence industry, may have been profoundly changed by the past five days of high-stakes soap opera. OpenAI’s CEO, cofounder, and figurehead Sam Altman was fired by the board of directors on Friday. By Tuesday night, after a mass protest by most of the startup’s employees, Altman was on his way back, and most of the existing board was gone. But that board, largely independent of OpenAI’s operations and bound by a mission to act “for the benefit of humanity,” was critical to the company’s uniqueness.
As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology he was developing, he portrayed OpenAI’s unusual nonprofit structure as a bulwark against the reckless development of powerful AI. Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If, in the board’s view, he started acting dangerously or against humanity’s interests, it could eject him. “The board can fire me, and I think that’s important,” Altman told Bloomberg in June.
“It turns out they couldn’t fire him, and that’s bad,” says Toby Ord, a senior research fellow in philosophy at the University of Oxford.
OpenAI’s dramatic leadership reset has left the company with a reshuffled board of corporate figures, including former US Secretary of the Treasury Larry Summers. Two directors associated with the effective altruism movement, both women, were removed from the board. The episode has crystallized existing divisions over how the future of AI should be governed. The outcome is viewed very differently by doomers who worry that AI is going to destroy humanity; transhumanists who think the technology will hasten a utopian future; believers in freewheeling market capitalism; and advocates of tighter regulation to control tech giants that cannot be trusted to balance the potential harms of a powerful, disruptive technology against the desire to make money.
“To some extent, this is a collision course that has been in the works for a long time,” says Ord, who is also credited with cofounding the effective altruism movement, parts of which have become fixated on the doomer end of the AI risk spectrum. “If OpenAI’s nonprofit board was fundamentally powerless to actually influence the company’s behavior, then I think exposing that powerlessness is a good thing.”
Why OpenAI’s board decided to move against Altman remains a mystery. Its announcement that Altman was stepping down as CEO said he had “not been consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” An internal OpenAI memo later clarified that Altman’s ouster “was not made in response to malfeasance.” Emmett Shear, the second of two interim CEOs to run the company between Friday night and Wednesday morning, wrote after taking the role that he had asked why Altman was fired. “The board did not remove Sam over any specific disagreement on safety,” he wrote. “Their reasoning was completely different from that.” He promised an investigation into the reasons for Altman’s dismissal.
The vacuum left room for rumors, including that Altman was devoting too much time to side projects or was too deferential to Microsoft. It also fostered conspiracy theories, such as the notion that OpenAI had created artificial general intelligence (AGI) and the board had flipped the kill switch on the advice of chief scientist, cofounder, and board member Ilya Sutskever.
“What I know for sure is that we don’t have AGI,” says David Shrier, a professor of practice in AI and innovation at Imperial College Business School in London.