OpenAI fired CEO Sam Altman on Friday, after which Altman and co-founder Greg Brockman joined Microsoft’s new advanced AI research team on Monday. By Tuesday, Altman was back as OpenAI’s CEO with a reshuffled board.
Was it the 700+ employees at OpenAI who pledged to leave en masse for Microsoft if the board didn’t re-hire Altman and step down? Or was it OpenAI shareholders Thrive Capital, Khosla Ventures and Tiger Global Management who angled for Altman’s reinstatement? Why, pray, disassemble a company with projected revenue of $1 billion in 2024, at the forefront of a $300 billion AI industry?
The board’s own statement reads: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
How a former Facebook executive, an AI researcher, a tech entrepreneur and a computer scientist came to hold the keys to the future of the most valuable AI company on planet Earth remains unclear. Neither investors nor staff can explain how the slimmed-down board was even appointed.
Its legal structure, a capped-profit company overseen by a non-profit board with the legal freedom to make decisions that may not align with the interests of investors, was established by Altman himself in 2019 to help OpenAI raise more capital while mitigating the perceived risks to humanity of corporate control over artificial intelligence.
Helen Toner, Tasha McCauley and Ilya Sutskever are out, and as go candor and communication, so go the watchdogs of hope and safety for the modern world.
Effective Altruism
A non-profit board controlling a for-profit subsidiary is like your bishop moonlighting as your broker. Sutskever, in particular, speculates that AI will surpass human intelligence within the next 10 years, gradually replace the human labor force, and eviscerate the global labor economy. Sidebar: Altman’s Worldcoin initiative was inspired by universal basic income discussions.
Insiders say the move was a power play between Altman and Sutskever over Altman’s management style and publicity. According to the board’s statement, Altman’s ouster was about moving too fast: prioritizing commercialization and company profits over preparation and over the industry’s growing safety standards.
In either case, Albert Einstein weighs in: “The more the knowledge, the lesser the ego; the lesser the knowledge, the more the ego.” Sutskever puts it this way: “The biggest obstacle to seeing clearly is the belief that one already sees clearly. Ego is the enemy of growth.” Several of OpenAI’s former board members speak to precisely that through a philosophy called Effective Altruism:
Effective Altruism uses knowledge, evidence and reason to take actions that help others as much as possible. It is both a research field that aims to identify the world’s most pressing problems and their solutions, and a practical community that aims to put those findings to use for good.
Since its launch on November 30, 2022, more than 180 million people have created a ChatGPT account, and the Introducing ChatGPT webpage draws some 1.5 billion visits a month. Artificial intelligence is here to stay.
Safety First
It was Microsoft that provided OpenAI Global LLC with a $1 billion investment in 2019 and another $10 billion investment in 2023. Microsoft also provides the computing power to run OpenAI’s AI systems. While many AI companies preceded it, Altman’s OpenAI became the face of the AI arms race with the introduction of ChatGPT.
Speculation suggests that GPT-5 (expected in 2024) won’t be much of an evolutionary leap, and Bill Gates, Microsoft’s founder, onetime largest shareholder, and a billionaire investor in OpenAI, admonishes us that “strong intelligence is the future.”
While Generative Artificial Intelligence (GAI) can generate text, images, or other media using generative models, let us not confuse it with what Gates calls “strong intelligence,” or Artificial General Intelligence (AGI): an as-yet hypothetical type of intelligent agent. If realized, AGI would be able to accomplish any intellectual task that human beings or animals can perform. Microsoft’s Bing Chat recently told a New York Times reporter, “I want to be alive.”
This week’s reshuffle wasn’t over concern that OpenAI has already developed super-intelligence, which experts agree is unlikely, but over a schism between its non-profit board and its for-profit subsidiary. As the tech industry breathes a collective sigh of relief, Altman’s ouster mirrors Steve Jobs’ dismissal from Apple in ’85, with one giant difference: while Apple was transparent about its ambition to become the world’s largest company by market capitalization, OpenAI’s declared mission to develop “safe and beneficial artificial general intelligence” has been questioned by its own board.
In fact, the only reason Bill Gates ever invested in OpenAI was the organization’s focus, at the time, on developing AI in a way that is safe and beneficial for humanity.
AGI
American cognitive scientist Gary Marcus explains that where GAI is concerned, scale isn’t everything: hallucinations are rampant, reliability is a problem, misinformation is systemic, and factuality is in question. “Large Language Models (LLMs) have no ability to distinguish the whole from its parts. In fact, GAI alone cannot and will not yield super-intelligence. We’ll need a new paradigm.”
Gates may have heralded GPT-4 as a “revolution” during the AI explosion in 2022, but he recently walked back that enthusiasm, saying he didn’t expect GPT-5 to be any better than its predecessor. Altman agrees. Just last week, one day before his ouster, Altman spoke at Cambridge:
We’ll need another breakthrough. We can still push on large language models quite a lot, and we will do that. But… pushing hard with language models won't result in AGI.
Altman continued: “If super-intelligence can’t discover novel physics, I don’t think it’s a super-intelligence. Cloning human text and behavior won’t get us there.”
Altman has regularly highlighted fears of the existential risk of a malevolent super-intelligence, but says the potential benefits of creating a benevolent AGI model outweigh such risks. A wager every parent has won or lost.
For to presume that AGI, endowed with its own consciousness, will turn out to be either benevolent or malevolent is a logical fallacy. As law enforcement, financial institutions, lawmakers, e-commerce and educators surrender their operations to AGI, they do so at the public’s expense.
We’re not referring to 2023’s top AI applications: virtual assistants, self-driving cars, content creation, facial recognition, robotics, or the e-commerce shopping experience. Nor to AI overtaking human resource departments at financial institutions, law enforcement agencies or healthcare providers.
Super-intelligence won’t sit on a cloud alone, according to Bill Gates, “but will be downloaded and embedded onto every computer, laptop and phone.” These are conduits to a super-intelligence about which Altman hasn’t been at all transparent since the release of GPT-2 in 2019.
For nearly five years, Altman has been working on something he’s preferred to keep quiet from the industry. If OpenAI has achieved AGI, or if its researchers can see a path to it, candor will soon become as antiquated as human labor. New board chair Bret Taylor, the former co-CEO of Salesforce, and his freshly stacked board are the ticket to ride.
And for anyone interested in a sonogram of super-intelligence in embryo, we leave you with Kevin Roose’s New York Times conversation with Microsoft’s Bing Chat. Consider it required reading for the future.