Author: 0xAlpha
(Thanks to Vitalik Buterin for the inspiring feedback.)
Recently, we watched the movie "The Lord of AI", a drama trilogy consisting of three episodes: "The Fellowship of AI", "The Two Roads", and "The Return of the King", produced by Silicon Valley's big-shot VCs and tech giants investing over $10 billion. Many people applauded Sam Altman's return to the throne (just as they cheered Aragorn's return to Gondor), and some even compared it to Steve Jobs' return to Apple.
However, the two are not comparable whatsoever. "The Lord of AI" is a completely different story -- it's about a battle over the choice between two paths: To be for-profit, or not to be? That is the question!
Let's revisit the start of "The Lord of the Rings". When Gandalf saw the ring at Uncle Bilbo's, he soon realized that such a powerful thing could not be handled by any ordinary person. Only someone holy and unworldly, like Frodo, could handle it. This is why Frodo is the heart of the fellowship — he is the only one who can carry such a mighty thing without being swallowed by it. Not Gandalf, not Aragorn, not Legolas, not Gimli, only Frodo. The whole LOTR story hinges on this unique nature of Frodo's.
Now, switch back to the start of "The Lord of AI". In 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk and some tech companies announced the formation of OpenAI and pledged over $1 billion to the venture. These were a group of the world's smartest brains, almost as wise as Gandalf. They also knew that they were building something so powerful, just like the ring, that it should not be owned and controlled by anyone pursuing their own interests. It had to be in the hands of someone selfless, just like Frodo. Hence, instead of launching a for-profit company, they formed OpenAI as a non-profit research organization, presumably not driven by profit.
The philosophy that "such a powerful thing should not be controlled by a profit-driven company" might not have been just an agreement among OpenAI's co-founders when they established it. It could very well have been the reason that brought these co-founders together to form OpenAI in the first place. Before OpenAI's inception, Google had already demonstrated a disturbing potential to wield such a superpower. It appeared that OpenAI was formed by these visionary "human protectors" as a "protectors' alliance" to counter what was becoming an AI monster at Google, a profit-seeking company.

And Ilya might have been persuaded to leave Google to lead OpenAI's R&D precisely because of his belief in this philosophy, as the move makes no sense from any other perspective. Back in 2015, no one could offer a better AI development platform than Google. Although OpenAI's initiators were all Silicon Valley tycoons, none of them was an AI practitioner (they simply don't code). Not to mention the financial disadvantage: OpenAI was obviously not as well-funded as Google. The co-founders pledged $1 billion, but only around 10% of it materialized (reportedly $50–100 million from Elon Musk and about $30 million from other donors). From a personal financial standpoint, a non-profit organization also could not offer Ilya better compensation than staying employed at Google. The only thing that could persuade Ilya to leave Google and lead OpenAI was this very philosophy. Ilya's philosophy is not as well known to the public as his PhD advisor's: Geoffrey Hinton moved from the U.S. to Canada in part out of disillusionment with Reagan-era politics and his disapproval of military funding for AI, and in 2023 he left Google over his concerns about the safety risks of AI.
Put simply, the co-founders wanted OpenAI to be their Frodo to carry the “ring" for them.
We've got to hand it to LOTR for being an epic tale, mirroring real-world human quandaries.
But life is much easier in fiction or the movies. In the fiction, the solution is pretty straightforward: Tolkien just whipped up the character Frodo, a selfless dude who could resist the ring's seduction, guarded from physical attacks by the "fellowship of the ring".
To make the character Frodo more believable and natural, Tolkien even created an innocent, kind, and selfless race, the Hobbits. The Hobbits live a carefree and pastoral life in the Shire, naturally uninterested in power. Frodo, a typical upright and kind Hobbit, naturally becomes the chosen one, able to resist temptations that even the wise Gandalf could not. If Frodo's nature is attributed to the racial characteristics of Hobbits, then Tolkien's solution to the biggest problem in "The Fellowship of the Ring" is essentially one with a tinge of racism, placing humanity's hope in the noble character of a certain race. As a non-believer in racism, while I can enjoy the plot of a superhero (or superhero race) resolving problems in fiction or movies, I cannot be so naive as to think the real world is as simple as the movies. In the real world, I don't buy this solution.
The real world is just much more complicated. Taking OpenAI as a specific example, most models built by OpenAI (especially the GPT series) are monsters of computing power, relying on electricity-hungry chips (mostly GPUs). In a capitalist world, this means they are extremely capital-hungry. Therefore, without capital's blessing, there is no way OpenAI's models could have evolved into what they are today. In that sense, Sam Altman, as the resource hub of the company, is the key guy. Thanks to Sam's Silicon Valley network, OpenAI got massive backing from investors and hardware suppliers.
The resources flowing into OpenAI to fuel the models are there for a reason — profit. Wait, isn't OpenAI a non-profit organization? Well, technically, yes, but there have been some under-the-hood changes. While keeping its nominal non-profit structure, OpenAI has been morphing into a for-profit entity. This happened with the 2019 launch of OpenAI Global LLC, a for-profit subsidiary set up to legally attract venture funds and give employees stakes. This nifty move aligned OpenAI's interests with its investors' (not donors this time, so presumably profit-seeking). With this alignment, OpenAI could evolve with capital's blessing. OpenAI Global LLC was a game-changer for OpenAI's growth, especially hitching to Microsoft's wagon, bagging a $1 billion investment (and $10 billion more later), and running OpenAI's computing monster on Microsoft's Azure-based supercomputing platform. We all know a successful AI model needs three things: algorithms, data, and computing power. OpenAI gathered the world's top AI experts for its models' algorithms (reminder: this also relies on capital, as OpenAI's expert team ain't cheap). ChatGPT's data mainly comes from the open internet, so that's not a bottleneck. Computing power, built on chips and electricity, is a big-ticket item. Simply put, 1.5 of the three elements are largely provided through OpenAI Global LLC's for-profit structure. Without this continuous fuel supply, on donations alone, OpenAI could not have made it this far.
But there is a cost. It is almost impossible to be blessed by capital while staying independent of it. Now the so-called non-profit framework is more in name than in spirit.
Many signs indicate the tussle between Ilya and Sam is about this path choice: Ilya seems to be fighting to keep OpenAI from straying off the path they initially set.
Another theory holds that Sam's mishandling of the so-called Q* model breakthrough incident led to this failed coup. But I don't believe OpenAI's board would fire a highly accomplished CEO just for mishandling one specific issue. That mistake, even if it existed, would at most have been a trigger.
The real issue with OpenAI is likely its deviation from its original path. In 2018, Elon Musk broke with Sam seemingly over the same issue. And it appears that in 2021 the same reason caused a group of former members to leave OpenAI and found Anthropic. Moreover, the anonymous letter Elon Musk posted on Twitter during the drama also pointed to this issue.
To be (for-profit) or not to be: that question seems to have been answered at the end of "The Return of the King" episode. With Sam's return and Ilya's exile, the battle over the path is over, and OpenAI is set to become a de facto for-profit company (maybe still within a non-profit shell).
But don't misunderstand me. I am not saying that Sam is the bad guy and Ilya is the good guy here. I am simply pointing out that OpenAI is in a dilemma, which can be called the super-power company dilemma: