As the world turns, so do the days of AI…
Now that the dust is settling a bit, the behind-the-scenes story is beginning to come together.
As we all know, Sam Altman, CEO of OpenAI, was suddenly fired by the board, setting in motion an epic implosion that threatened the future of the company, investor stakes, and partner relationships.
This series of events also cast a bright light on the importance of trust, governance, and purpose. I’m thankful for this, at least.
Now it’s coming to light that there were two factions: 1) Chief Scientist Ilya Sutskever and board member Helen Toner, and 2) Sam Altman and Greg Brockman.
At the heart of the matter appears to be a battle for humanity. No, I’m serious.
The Unique Org Structure that Governs OpenAI’s For Profit and Nonprofit Investments
OpenAI started in 2015 as a nonprofit to build A.I. that was safe and beneficial to humanity. But its divine mission of building a superintelligent system that could rival the human brain required resources far beyond what donations alone could provide. What started as a private, donor-funded enterprise evolved into a commercial need and opportunity.
In 2019, the company created a for-profit subsidiary that raised billions, including $1 billion from Microsoft. This new subsidiary would be controlled by the nonprofit board, which is governed by a duty to “humanity, not OpenAI investors.”
With the unprecedented popularity of ChatGPT, the balance between serving humanity and serving OpenAI’s commercial interests went off-kilter, according to some board members.
In particular, a rift ensued between Altman and Helen Toner, a (former) board member and Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET).
Helen Toner’s association with Open Philanthropy, which pledged $30 million to OpenAI early on, may have helped her earn a seat on the board.
Seeing Open ‘A’ Eye-to-Eye
It would later emerge that Toner and Altman no longer saw (Open A) eye-to-eye.
It’s now being reported that, a few weeks ago, they met to discuss a paper she co-authored that appeared to criticize OpenAI while praising Anthropic, the company’s main rival. Anthropic was started by senior OpenAI scientists and researchers who left after a series of disagreements with Altman. They had asked the board in 2021 to oust Altman, and when that didn’t happen, they left the company. These past events would play a role in later developments…
Altman complained to Toner that the paper criticized OpenAI’s approach to safety and ethics. His point was that her words were dangerous to the company and its investors.
Altman later sent an email expressing that they weren’t “on the same page regarding the damage of all this.” He emphasized that “any amount of criticism from a board member carries a lot of weight.”
And, he’s right. It does. This is why it’s important to have an organizational and board structure that aligns with the company’s purpose, mission, and strategy.
So what exactly did Toner say?
Here’s an excerpt from her paper:
“Anthropic’s decision represents an alternate strategy for reducing ‘race-to-the-bottom’ dynamics on AI safety. Where the GPT-4 system card acted as a costly signal of OpenAI’s emphasis on building safe systems, Anthropic’s decision to keep their product off the market was instead a costly signal of restraint. By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”
Let’s read that last part again…”exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”
For an academic paper, this sentence is out of place. It’s a personal judgment tied to an accusation, presented as if it were a scientific research finding.
Altman discussed with Chief Scientist Ilya Sutskever whether Toner should be removed from the board. Sutskever, however, sided with Toner. The events that led to the creation of Anthropic likely contributed to his rationale.
Instead of Toner being ousted, it was Altman.
As we all know now, Altman’s sudden firing seemed more imprudent than strategic and methodical. Microsoft’s CEO, Satya Nadella, arguably OpenAI’s most valuable partner and investor, only received notice one minute before the news was announced.
Hours later, the board was confronted by employees who emphasized that their decision had put the company in grave danger.
But the board remained defiant. Toner reminded employees of its mission to create artificial intelligence that “benefits all of humanity.” And according to The New York Times, Toner went a drastic step further, stating if the company was destroyed, “that could be consistent with its mission.”
Record scratch. Freeze frame. Picture of Sam Altman looking shocked. Voiceover says, “Yep, that’s me. You’re probably wondering how I got here.”
It was a coup in two directions.
Some felt Altman was moving too fast, not playing the game by rules that benefit humanity, and not listening to people who voiced concerns or contrasting ideas.
Sutskever would later realize the damage caused to a company he cared deeply about only after co-founder Greg Brockman resigned, almost every one of its 800 employees threatened to quit, and Microsoft offered everyone roles in a new AI research division that would be created for them.
He would later tweet, or is it Xeet now? That’s another conversation we need to have. “I deeply regret my participation in the board’s action,” he confessed. “I never intended to harm OpenAI,” he continued.
He did harm the company, though. And it may have been at the bidding of a board that intended so.
Before the saga ended, the board appointed Emmett Shear as interim CEO. One of his first acts was to demand evidence supporting Altman’s firing, threatening to quit if he didn’t receive it. Narrator: “He never received the evidence.” Still, he deserves credit for helping to set the stage for a reunion. Not bad for a three-day stint.
But there’s more.
Reuters reported that several staff researchers sent a letter to the board warning of a powerful AI discovery codenamed Q* (Q-Star) that could threaten humanity.
On November 16, Altman had shared publicly that OpenAI had recently made a huge breakthrough, one that pushes “the veil of ignorance back and the frontier of discovery forward.” To add rolling thunder to the initial boom, he added that this was the fourth such breakthrough in the company’s 8-year history.
The Information reported that OpenAI made an AI breakthrough that stoked “excitement and concern.”
This is a story that will continue to unfold…
Return of the Alt’man
As we all know by now, Altman is back at OpenAI in the CEO role, for now, without a board seat. Brockman also returns, but, like Altman, without a board seat. The board also received a makeover, with Bret Taylor serving as Chair, joined by Larry Summers, and with Adam D’Angelo remaining from the original board. The Verge reported that the board now seeks to expand to up to nine people to reset the governance of OpenAI.
The damage is done. But the silver lining is that, while the board misfired, it did effectively, and expensively, shine a light on the incredible need for AI ethics, safety, and governance.
Now, the real work begins.
Trust must be re-earned, not just for the company, but for the entire AI industry and movement. Safeguards against existential threats must not succumb to unfettered capitalism or short-termism. Humanity needs its benefactors and protectors.
Every new feature and breakthrough requires careful analysis, outside voices, philosophical debate, and a board that empowers innovation balanced with ethics and safety.
There’s much to sort through and analyze. If anything, the importance of governance, trust, and purpose converge to represent the heart of the matter.
This is a time to learn from mistakes and to lean forward, open the door to a diversity of thoughtful perspectives, balance progress with humanity, and communicate transparently on what’s right and what’s not right to do.
And never forget, every successful company knows that it is nothing “without its people.”
Happy Thanksgiving, everyone!
Please subscribe to my newsletter, a Quantum of Solis.
Sources
The New York Times, Cade Metz, Tripp Mickle, Mike Isaac
Bloomberg, in particular, Emily Chang, Katie Roof, Ed Ludlow
The Information, Jessica Lessin, Amir Efrati
The Verge, Nilay Patel, Alex Heath
Siqi Chen (@blader)
Kara Swisher
reddit/OpenAI