I think this is one of the most interesting lines, as it directly implies that leadership thinks this won't be a winner-take-all market:
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
That is a very obvious thing for them to say, though, regardless of what they truly believe, because (a) it legitimizes removing the cap, making fundraising easier, and (b) it averts antitrust suspicion.
> "Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission."
One remarkable advantage of being a "Public Benefit Corporation" is that it:
> prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]
In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.
(b) is true, but not so much (a). If investors thought it would be winner-take-all and that ClosedAI would win, they'd invest in ClosedAI only and starve competitors of funding.
Actually, I'm thinking that in a winner-takes-all universe, the right strategy would be to spread your bets across as many likely winners as possible.
That's literally the premise of venture capital. This is a scenario where we're assuming ALL our bets will go to zero, except one which will be worth trillions. In that case you should bet on everything.
It's only in the opposite scenario (where every bet pays off with varying ROI) that it makes sense to go all-in on whichever bet seems most promising.
As a deeper issue on "justification", here is something I wrote related to this in 2001 on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:
"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"
"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."
That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.
Enlightening read, especially your last paragraph which touches on the nuance of the situation. It’s quite easy to end up on one side or the other when it comes to charity/nonprofits because the mission itself can be very motivating and galvanizing.
>"Self-dealing [...] convert some government supported PhD thesis work [...] the public (including me) never gets full access to the results of the publicly-funded work [...]
Your 2001 essay isn't a good parallel to OpenAI's situation.
OpenAI wasn't "publicly funded", i.e. funded with public donations or government grants.
The non-profit was started and privately funded by a small group of billionaires and other wealthy people (Elon Musk donating $44 million, Reid Hoffman, etc., collectively pledging $1 billion of their own money).
They miscalculated in thinking their charity donations would be enough to recruit the PhD machine learning researchers and pay the high GPU costs to create the AI alternative to Google DeepMind, etc. Their 2015 assumptions massively underestimated future AI development costs, and now they look bad for trying to convert it to a for-profit enterprise. Instead of a full conversion to for-profit, they will now settle for keeping a subsidiary that's for-profit, somewhat like other entities structured as a non-profit that owns for-profit subsidiaries, such as Mozilla, the Girl Scouts, Novo Nordisk, etc.
Obviously, with hindsight... if they had to do it all over, they would just create the reverse structure: the OpenAI for-profit company as the "parent entity" that pledges to donate money to charities. E.g. Amazon Inc is the for-profit that donates to the Housing Equity Fund for affordable housing.
>uncollected tax revenues for economically valuable activity.
Taxes are on profits, not revenue. The for-profit OpenAI LLC subsidiary created in 2019 would have been the entity that owes taxes, but it has been losing money and has never made any profits to tax.
Yesterday's news about switching from for-profit LLC to for-profit PBC still leaves a business entity that's liable for future taxes on profits.
The value investor Mohnish Pabrai once talked about his observation that most companies with a moat pretend they don't have one, and companies without one pretend they do.
A version of this is emphasized in the Thielverse as well. Companies in heavy competition try to intersect all their qualities to appear unique. Dominant companies talk about their portfolio of side projects to appear to be in heavy competition (space flight, ed tech, etc.).
Mohnish isn't a tech bro though, in my book. After selling his company, the guy retreated from the tech scene to get into Buffett-style value investing. And if you read his book, it's about glorifying the small businessmen running motels and garages, who invest bit by bit in the stock market.
There need to be regulations about deceptive, indirect, purposefully ambiguous, or vague public communication by corporations (or any entity). I'm not an expert in corporate law or finance, but the statement should be:
"Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"
followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."
AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity.
Not only is there infinite incentive to compete, but there are decreasing costs to doing so. The only world in which AGI is winner-take-all is a world in which it is extremely controlled, to the point at which the public can't query it.
> AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity
The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
> this has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling
To be precise, it assumes a low variability in cycle time and improvement per cycle. If everyone is subjected to the same limits, the first-mover advantage remains insurmountable. I’d also argue that whether there is a ceiling matters less than how high it is. If the first AGI won’t hit a ceiling for decades, it will have decades of fratricidal supremacy.
I find these assumptions curious. How so? What is the AGI going to do that captures markets? Even if it can take over all desk work, then what? Who is going to consume that? And furthermore (and perhaps more importantly), with it putting everyone out of work, who is going to pay for it?
I'm pretty sure today's models probably can be capable of self-improving. It's just that they are not yet as good at self-improving as the combination of programmers improving them with the help of the models.
I think the foundation model companies are actually poorly situated to reach the leading edge of AGI first, simply because their efforts are fragmented across multiple companies with different specializations—Claude is best at coding, OpenAI at reasoning, Gemini at large context, and so on.
The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.
I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.
Agreed and, if anything, you are too generous. They aren’t just not “close”, they aren’t even working in the same category as anything that might be construed as independently intelligent.
I agree with you, but that's kind of beside the point. OpenAI's thesis is that they will work towards AGI, and eventually succeed. In the context of that premise, OpenAI still doesn't believe AGI would be winner-takes-all. I think that's an interesting discussion whether you believe the premise or not.
Differentiating between AGI and non-AGI, if we ever get remotely close, would be challenging, but for now it's trivial. The defining feature of AGI is recursive self-improvement across any field. Without self-improvement, you're just regurgitating. Humanity started with no advanced knowledge or even a language. In what should practically be a heartbeat at the speed of distributed computing with perfect memory and computation power, we were landing a man on the Moon.
So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent. In fact, you would prefer to feed it as minimal a set of the most primitive first principles as possible, because it's certain that much of what we think is true is going to end up being not quite so -- the same as for humanity at any other given moment in time.
We could derive more basic principles, but this one is fundamental and already completely incompatible with our current direction. Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry, mistakes and all. It'd create a facsimile of impressive intelligence because no human would have a remotely comparable knowledge base, but it'd basically just be a glorified natural language search engine - frozen in time.
Your quote is a non sequitur to your question. The reason you want to avoid massive data dumps is that there are guaranteed to be errors and flaws. See things like AlphaGo vs AlphaGo Zero. The former was trained on the entirety of human knowledge, the latter entirely on itself.
The zero-training version not only ended up dramatically outperforming the 'expert' version, but reached higher levels of competence exponentially faster. And that should be entirely expected. There were obviously tremendous flaws in our understanding of the game, and training on those flaws resulted in software seemingly permanently handicapping itself.
Minimal expert training also has other benefits. The obvious one is that you don't require anywhere near as much material, and it also enables one to ensure you're on the right track. Seeing software 'invent' fundamental arithmetic is somewhat easier to verify and follow than it producing a hundred-page proof advancing, in a novel way, some esoteric edge theory of mathematics. Presumably it would also require orders of magnitude less operational time to achieve such breakthroughs, especially given the reduction in preexisting state.
I mostly agree with you. But if you think about it, mimicry is an aspect of intelligence. If I can copy you and do what you do reliably, regardless of the method used, it does capture an aspect of intelligence. The true game changer is a reflective AI that can automatically improve upon itself.
If you took the average human from birth and gave them only 'the most primitive first principles', the chance that they would have novel insights into medicine is doubtful.
I also disagree with your following statement:
> Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry
At worst it's complex mimicry! But I would also say that mimicry is part of intelligence in general and part of how humans discover. It's also easy to see that AI can learn things - you can teach an AI a novel language by feeding a fairly small amount of vocabulary, grammar, and example text into context.
I also disagree with this statement:
> One fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent
I don't think how something became intelligent should affect whether it is intelligent or not. These are two different questions.
> you can teach an AI a novel language by feeding a fairly small amount of vocabulary, grammar, and example text into context.
You didn't teach it; the model is still the same after you ran that. That is the same as a human following instructions without internalizing the knowledge: they forget it afterward and didn't learn what they performed. If that were all humans did, there would be no point in school, etc., but humans do so much more than that.
As long as LLMs are like a human with Alzheimer's, they will never become a general intelligence. And following instructions is not learning at all; learning is building an internal model for those instructions that is more efficient and general than the instructions themselves. Humans do that, and that is how we manage to advance science and knowledge.
Please, keep telling people that. For my sake. Keep the world asleep as I take advantage of this technology which is literally General Artificial Intelligence that I can apply towards increasing my power.
Why does the author choose to ignore the "General" in AGI?
Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video, etc. Maybe ChatGPT could pass a high school chemistry test, but it certainly couldn't complete the lab exercises. What we've built is a really cool "algorithm for indexing generalized data", so you can train that driving model very similarly to how you train the text model without needing to understand the underlying data that well.
The author asserts that because ChatGPT can generate text about so many topics, it's general; but it's really only doing one thing, and that's not very general.
Generally speaking, anyone can learn to use any tool. This isn't true of generative AI systems which can only learn through specialized training with meticulously curated data sets.
People physically unable to use the tool can't learn to use it. This isn't necessarily my view, but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
> but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
Of course they can. We already have computer-controlled car systems; the reason LLMs aren't used to drive them is that AI systems that specialize in text are a poor choice for driving - specialized driving models will always outperform them for a variety of technical reasons.
We have computer-controlled automobiles, not LLM-controlled automobiles.
That was my whole point. Maybe in theory an LLM could learn to drive a car, but they can't today because they don't physically have access to cars they could try to drive, just like a person who can't learn to use a tool because they're physically limited from using it.
This isn't true. A curated data set can greatly increase learning efficiency in some cases, but it's not strictly necessary and represents only a fraction of how people learn. Additionally, all curated data sets were created by humans in the first place, a feat that language models could never achieve if we did not program them to do so.
Generality is a continuous value, not a boolean; it turned out that "AGI" was poorly defined, and because of that most people were putting the cut-off threshold in different places.
Likewise for "intelligent", and even "artificial".
So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (GeoGuessr), road signs in every language, and how to design safe roads, than I'm ever likely to.
* It can also run Python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safely, anyway.
Text can be a carrier for any type of signal. The problem gets reduced to that of an interface definition. It's probably not going to be ideal for driving cars, but if the latency, signal quality, and accuracy are within acceptable constraints, what else is stopping it?
This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of driving general intelligence is incorrect in my view.
You can literally today prompt ChatGPT with API instructions to drive a car, then feed it images of a car's window outlooks and have it generate commands for the car (JSON-schema-restricted structured commands, if you like). Text can represent any data, thus yes, it is general.
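For illustration, a minimal sketch of that idea using the OpenAI Python SDK's structured-outputs feature; the driving-command schema, its field names, and the idea of wiring the output to a vehicle are purely hypothetical assumptions for the example, not anything OpenAI or ChatGPT ships:

```python
# Sketch: ask a vision-capable model for a JSON-schema-constrained "driving command".
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment.
# The schema and the idea of sending the result to a car are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()

schema = {
    "type": "object",
    "properties": {
        "steering_deg": {"type": "number"},  # negative = left, positive = right
        "throttle": {"type": "number"},      # 0.0 - 1.0
        "brake": {"type": "number"},         # 0.0 - 1.0
    },
    "required": ["steering_deg", "throttle", "brake"],
    "additionalProperties": False,
}

with open("windshield.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You control a car. Reply only with a driving command."},
        {"role": "user", "content": [
            {"type": "text", "text": "Here is the current windshield view."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ]},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "driving_command", "schema": schema, "strict": True},
    },
)

# The reply is guaranteed to match the schema, e.g.
# {"steering_deg": -2.5, "throttle": 0.2, "brake": 0.0}
print(response.choices[0].message.content)
```

Whether that would be a good idea for real-time control is another question entirely (latency and reliability alone argue against it); the point is only that schema-constrained text output is a generic interface.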
But it cannot think on its own! Billions of years of evolution couldn't bring human-level 'AGI' to many, many species, and we think a mere LLM company could do so. AGI isn't just a language model; there are tons of things baked into DNA (the way the brain functions, its structure as it grows, etc.). And it's not simply neuron interactions either. The complexity is mind-boggling.
Last time I checked, in an Anthropic paper, they asked the model to count something. They examined the logits and a graph showing how it arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?
There is no post factum rationalization here. If you ask a human to think about how they do something before they do it, there's no post factum rationalization. If you ask an LLM to do the same, it will give you a different answer. So, there is a difference. It's all about having knowledge of your internal state and being conscious of your actions and how you perform them, so you can learn from that knowledge. Without that, there is no real intelligence, just statistics.
Yes, humans can post-rationalize. But an LLM does nothing but post-rationalize: as you yourself admitted, humans can think it through beforehand and then actually do what they planned, while an LLM won't follow that plan mentally.
It is easy to see why, since the LLM doesn't communicate what it thinks; it communicates what it thinks a human would communicate. A human would explain their inner process, and then go through that inner process. An LLM would explain a human's inner process, and then generate a response using a totally different process.
So while it's true that humans don't have perfect introspection, the fact that we have introspection about our own thoughts at all is extremely impressive. An LLM has no part that analyzes its own thoughts the way humans do, meaning it has no clue how it thinks.
I have no idea how you would even build introspection into an AI. How are we able to analyze our own thoughts? What is even a thought? What would this introspection part of an LLM do, what would it look like, would it identify thoughts and talk about them the way we do? That would be so cool, but it is not even on the horizon. I doubt we will ever see it in our lifetime; it would need some massive insight changing the AI landscape at its core to get there.
But once you have that introspection, I think AGI will happen almost instantly. Currently we use dumb math to train the model; that introspection would let the model train itself in an intelligent way, just like humans do. I also think it will never fully replace humans without introspection; intelligent introspection seems like a fundamental part of general intelligence and of learning from chaos.
... that was written in mid-2023. So that opinion piece is trying to redefine two-year-old LLMs like GPT-4 (pre-4o) as AGI. Which can only be described as an absolutely herculean movement of goalposts.
I would argue that this is a fringe opinion that has been adopted by a mainstream scholar, not a mainstream opinion. That or, based on my reading of the article, this person is using a definition of AGI that is very different than the one that most people use when they say AGI.
AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, we don't wait for somebody to give us a task and go do it as if that is our sole existence. We live with our thoughts, blank out, watch TV, read books etc. What we currently have and possibly in the next century as well will be nothing close to an actual AGI.
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on their labor exchanged for money. I can guarantee for a fact that work will still be there, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because even with UBI people still want to work, as experiments have shown. But with that, the majority of work won't be there, especially paper-pushing mid-level BS like managers on top of managers, etc.
If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.
I think there's a useful distinction that's often missed between AGI and artificial consciousness. We could conceivably have some version of AI that reliably performs any task you throw at it consistently with peak human capabilities, given sufficient tools or hardware to complete whatever that task may be, but lacks subjective experience or independent agency; I would call that AGI.
The two concepts have historically been inextricably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.
Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.
It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.
Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.
One sci-fi example could be based on the replicators from Star Trek, which are able to synthesize any meal on demand.
It is not hard to imagine a "cooking robot" as a black box that — given the appropriate ingredients — would cook any dish for you. Press a button, say what you want, and out it comes.
Internally, the machine would need to perform lots of tasks that we usually associate with intelligence, from managing ingredients and planning cooking steps, to fine-grained perception and manipulation of the food as it is cooking. But it would not be conscious in any real way. Order comes in, dish comes out.
Would we use "intelligent" to describe such a machine? Or "magic"?
I immediately thought of Star Trek too, I think the ship's computer was another example of unconscious intelligence. It was incredibly capable and could answer just about any request that anyone made of it. But it had no initiative or motivation of its own.
Regarding "We could conceivably have some version of AI that reliably performs any task you throw at it consistently" - it is very clear to anyone who just looks at the recent work by Anthropic analyzing how their LLM "reasons" that such a thing will never come from LLMs without massive unknown changes - and definitely not from scale - so I guess the grandparent is absolute right that openai is nor really working on this.
I agree. AGI is meaningless as a term if it doesn't mean completely autonomous agentic intelligence capable of operating on long-term planning horizons.
Edit: because if "AGI" doesn't mean that... then what means that and only that!?
> Edit: because if "AGI" doesn't mean that... then what means that and only that!?
"Agentic AI" means that.
Well, to some people, anyway. And even then, people are already arguing about what counts as agency.
That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.
I wonder, did people argue about whether "horseless carriages" were really carriages? And for "aeroplane", how many argued that "plane" didn't suit either the Latin or Greek etymology for various reasons?
We never did rename "atoms" after we split them…
And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.
Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.
It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.
Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.
That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?
What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.
> fancy search engine/auto completer
That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not autocomplete.
It's really not. The Space Shuttle isn't an emerging interstellar spacecraft; it's just a spacecraft. Throwing "emerging" in front of a qualifier to dilute it is just bullshit.
> By the same reasoning, so is a person. They are just auto completing words when they speak.
We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.
Technically it is a refinement, as it distinguishes levels of performance.
The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero-shot and few-shot capabilities.
> We have no evidence of this.
That's my point. Humans are not "autocompleting words" when they speak.
> Technically it is a refinement, as it distinguishes levels of performance
No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten-free. As a result, "almost gluten-free" is bullshit.
> That's my point. Humans are not "autocompleting words" when they speak
Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.
LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.
Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.
> Humans are not. LLMs are.
My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.
> That's a binary classification. AGI systems can have degrees of performance and capability
The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.
> if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans
No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.
Why are you linking a Wikipedia page like it's ground zero for the term? Especially when neither of the articles the page links to in order to justify that definition sees the term as a binary accomplishment.
The g in AGI is General. I don't know what world you think generality isn't a spectrum in, but it sure as hell isn't this one.
That's right, and the Wikipedia page refers to the classification system:
"A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman"
In the second paragraph:
"Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved."
The entire article makes it clear that the definitions and classifications are still being debated and refined by researchers.
Then you are simply rejecting any attempts to refine the definition of AGI. I already linked to the Google DeepMind paper. The definition is being debated in the AI research community. I already explained that definition is too limited because it doesn't capture all of the intermediate stages. That definition may be the end goal, but obviously there will be stages in between.
> No, you can't, unless you're pre-supposing that LLMs work like human minds.
You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we can just say that people are auto-completing words when they speak.
> I already linked to the Google DeepMind paper. The definition is being debated in the AI research community
Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.
> obviously there will be stages in between
We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
> If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights
It is not a redefinition. It's a classification for AGI systems. It's a refinement.
Other researchers are also trying to classify AGI systems. It's not just Google. Also, there is no universally agreed definition of AGI.
> We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
Generalization is a formal concept in machine learning. There can be degrees of generalized learning performance. This is actually measurable. We can compare the performance of different systems.
While I also hold a peer comment's view that the Turing Test is meaningless, I would further add that even that has not been meaningfully beaten.
In particular, we redefined the test to make it passable. In Turing's original concept the competent investigator and participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to pass. Instead, modern takes have paired incompetent investigators with participants colluding with the machine, probably in an effort to be part of 'something historic'.
In "both" successes (probably more; referencing the two most high-profile - Eugene and the large LLMs), the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology?' etc. - and the participants not only made no effort to make their humanity clear, but were often actively adversarial, obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. And the tests are typically time-constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of a few words each.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that.
I mean, I am pretty sure that I won't be fooled by a bot, if I get the time to ask the right questions.
And I haven't looked into it (I also don't think the test has too much relevance), but fooling the average person sounds plausible by now.
Now, sounding plausible is what LLMs are optimized for, not being plausible. Still, I would not have thought ten years ago that we would get this far this quickly. So I am very hesitant about the future.
The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since the early 1990s.
It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.
I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.
> legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles
To be fair, there is a section of the population whose useful intelligence can roughly be summed up as that or worse.
I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:
- Processing visual data and classifying objects within their field of vision.
- Processing auditory data, identifying audio sources and filtering out noise.
- Maintaining an on-going and continuous stream of thoughts and emotions.
- Forming and maintaining complex memories on long-term and short-term scales.
- Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.
I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans could ever, but there is no integrated AGI product that has all these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."
> It conflates "intelligence" with fact-retention and communicative ability
No, I’m using useful problem solving as my benchmark. There are useless forms of intelligence. And that’s fine. But some people have no useful intelligence and show no evidence of the useless kind. They don’t hit any of the bullets you list, there just isn’t that curiosity and drive and—I suspect—capacity to comprehend.
I don’t think it’s intrinsic. I’ve seen pets show more curiosity than some folk. But due to nature and nurture, they just aren’t intelligent to any material stretch.
Remember, however, that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"
It does have some weasel words around "value-aligned" and "safety-conscious" which they can always argue over, but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do, in retrospect.
Who defines "value-aligned, safety-conscious project"?
"Instead of our current complex non-competing structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal competing structure where ..." is all it takes
AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...
How would an AGI prevent others from competing? Sincere question. That seems like something that ASI would be capable of. If another company released an AGI, how would the original stifle it? I get that the original can self-improve to try to stay ahead, but that doesn't necessarily mean it self-improves the best or most efficiently, right?
AGI used to be synonymous with ASI; it's still unclear to me that it's even possible to build a sufficiently general AI - that is, one as general as humans - without it being an ASI just by virtue of being in silico, and thus not being constrained in scale or efficiency like our brains are.
If it were first, it could have self-improved more, to the point that it has the capacity to prevent competition, while the competition does not have the capacity to defend itself against a superior AGI. This is all so hypothetical and frankly far from what we're seeing in the market now. Funny how we're all discussing dystopian sci-fi scenarios now.
Homo sapiens wiped out every other intelligent hominid, and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.
Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities directly to outsiders but uses it themselves and conquers every other field of endeavor.
That's always been pretty overtly the winner-take-all AGI scenario.
AGI might not be fungible. From the trends today it's more likely there will be multiple AGIs with different relative strengths and weakness, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.
OpenAI is winning in a similar way that Apple is winning in smartphones.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position, at least for the medium term, because of their name recognition/prominence and because they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
> I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Well, Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (~4-year-old or newer) Mac or gaming PC can run them.
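As a rough sketch of what "running a distill locally" can look like, assuming the Ollama runtime is installed and one of the distilled R1 variants has been pulled (the model tag and client usage below are assumptions about that setup, not a prescription):

```python
# Sketch: query a locally downloaded DeepSeek-R1 distill through the Ollama Python client.
# Assumes `pip install ollama`, the Ollama server running locally,
# and a distilled model already pulled, e.g. `ollama pull deepseek-r1:7b`.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",  # a distilled size small enough for a recent Mac or gaming PC
    messages=[{"role": "user", "content": "Summarize the trade-offs of running an LLM locally."}],
)

print(response["message"]["content"])
```

Nothing in that loop touches a foreign server, which is why a ban on providers does little against weights that have already been downloaded.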
If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
The bulk of the money comes from enterprise users. Just call the 500 CEOs on the S&P 500 list and enforce it via "cyber data safety" rules through the SEC, or something like that.
Everyone will roll over if all large public companies roll over (and they will).
IE once captured all of the value in browserland, with even much higher mindshare and market dominance than OpenAI has ever had. Comparing with Apple (= physical products) is Apples to oranges (heh).
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work, barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with, e.g., Anthropic? No chance.
MS, Google, Apple, and Meta have gigantic levers to pull and get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is (it's better) for anything besides image generation), people are going to start pressing it. And good luck to OpenAI fighting that.
Apple is not the right analogy. OpenAI has first mover advantage and they have a widely recognized brand name — ChatGPT — and that’s kind of it. Anyone (with very deep pockets) can buy Nvidia chips and go to town if they have a better or equivalent idea. There was a brief time (long before I was born) when “Univac” was synonymous with “computer.”
Companies that are contractors with the US government already aren't allowed to use DeepSeek, even if it's an airgapped R1 model running on our own hardware. Legal told us we can't run any distills of it or anything. I think this is very dumb.
To me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete in, which means Artificial Specialized Intelligence, NOT general intelligence!
And that makes complete sense if you don't have just a layperson's understanding of the tech. Language models were never going to bring about "AGI."
If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI - and the only kind anyone really cares about is the kind with exponential self-improvement - isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common in AI discussions.
If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.
I don't read it that way. It reads more like AGIs will be like very smart people and rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein although they were both generally intelligent.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue; we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made, a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
Probably true, but this statement would be true even if "when" is 2308, which would defeat the purpose of the statement. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'll have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near… I think saying "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, and you agree to do the same if wrong.
You can see a pattern of fairly steady progress in different areas: they matched humans at image recognition around 2015, but 'complex reasoning' is still much worse than humans, though rising.
Looking at the graph, I'd guess maybe five years before it can do all human skills, which is roughly AGI?
I've got a personal AGI test of being able to fix my plumbing, given a robot body. Which they are way off from just now.
It is already here, kinda. I mean, look at how it passes the bar exam, solves math-olympiad-level questions, and generates video, art, and music. What else are you looking for? It has already penetrated the job market, causing significant disruption in programming. We are not seeing flying cars, but we are witnessing things not even talked about around the campfire. Seriously, even 4 years ago, would you have thought all this would happen?
To begin with, systems that don't tell people to use Elmer's glue to keep the cheese from sliding off the pizza, displaying a fundamental lack of understanding of... everything. At minimum it needs to be able to reliably solve hard, unique, but well-defined problems like a group of the most cohesive, intelligent people could. It's certainly not AGI until it can do a better job than the most experienced, talented, and intelligent knowledge workers out there.
Every major advancement (which LLMs certainly are) has caused some disruption in the fields it affected, but that isn't a useful criterion for differentiating a "crude but useful tool" from "AGI".
The majority of people on Earth don't solve hard, unique, but well-defined problems, do we? I don't expect AGI to solve one of Hilbert's problems (yet). Your definition of AGI is a bit too imposing.
That said, I believe you would get better answers from an LLM than most of the answers you would get from an average human.
IMHO the trend is obvious, and we will see if it stalls or keeps its pace.
I think this is right but also missing a useful perspective.
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening" - surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime", but that has been the outcome of every example of this that I can think of.
I won't go too far out on this limb, because I kind of agree with you... but to be fair -- 1980s-1990s nanotech did not attract this level of investment, nor was it visible to ordinary people, nor was it useful to anyone except researchers and grant writers.
It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.
> Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more,
I thought this was a "we know we can't" thing rather than a "not with current technology" thing?
Specific cases are probably impossible, though there's always hope. After all, to use the example the nanotech people loved: there are literal assemblers all around you. Whether we can have a singular device that can build anything (probably not - energy limits and many, many other issues) or factories that can work at the atomic scale (maybe) is open, I think. The idea of little robots was kind of visibly silly even at the peak.
The idea of scaling up LLMs and hoping is... pretty silly.
Every consumer has very useful AI at their fingertips right now. It's eating the software engineering world rapidly. This is nothing like nanotech in the 80s.
Sure. But fancy autocomplete for a very limited industry (IT), plus graphics generation and a few more similar items, is indeed useful. Just like "nanotech" coatings on, say, optics, or in precision machinery, or all the other fancy nano films in many industries. Modern transistors are close to nano scale now, etc.
The problem is that the distance between a nano-thin film, or an interesting but ultimately rigid nano-scale transistor, and a programmable nano-sized robot is enormous, despite the similar sizes. The distance between an autocomplete heavily reliant on preexisting external validators (compilers, linters, static code analyzers, etc.) and a real AI capable of thinking is equally enormous.
Progress is not just a function of technical possibility (even if it exists); it is also economics.
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk are already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resource.
There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space-related: we cannot replicate in the next 5 years what we already achieved 50 years ago, say landing on the Moon, and so on.
From just an economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economic equation; it does happen (as with SpaceX), but it is also not guaranteed, as with fusion.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
I agree that LLMs are hurting the general population's capacity to think (assuming they use them often; I've certainly noticed a slight trend among students I've taught to put in less effort, and in myself to some extent).
I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.
Could you elaborate on the progress that has been made?
To me, it seems only small/incremental changes are made between models with all of them still hallucinating.
I can see no clear steps towards AGI.
"X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite uncapped growth in capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.
Yeah, it is a pretty good bet that any real process that produces something that looks like an exponential curve over time is the early phase of a sigmoid curve, because all real processes have constraints.
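To make that concrete, here is a minimal sketch in Python (purely illustrative, no real data) of how an exponential and a logistic (sigmoid) curve are nearly indistinguishable early on and only diverge once the constraint starts to bite:

    import math

    def exponential(t, rate=0.5):
        # Unbounded exponential growth starting at 1.
        return math.exp(rate * t)

    def logistic(t, rate=0.5, capacity=1000.0):
        # Sigmoid growth starting at 1 and saturating at `capacity`.
        return capacity / (1.0 + (capacity - 1.0) * math.exp(-rate * t))

    for t in range(0, 21, 4):
        e, s = exponential(t), logistic(t)
        print(f"t={t:2d}  exp={e:10.1f}  sigmoid={s:10.1f}  ratio={s / e:.2f}")
    # The ratio stays near 1.0 for a long while; only as the sigmoid
    # approaches its capacity does it flatten while the exponential keeps climbing.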
And if we apply the 80/20 rule, feels like we're at about 50-75% right now. So we're almost getting close to done with the easy parts. Then come the hard parts.
I don’t think that’s a safe foregone conclusion. What we’ve seen so far is very very powerful pattern matchers with emergent properties that frankly we don’t fully understand. It very well may be the road to AGI, or it may stop at the kind of things we can do in our subconscious—but not what it takes to produce truly novel solutions to never before seen problems. I don’t think we know.
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.
The intersection of the two seems to be quite hard to find.
At the state that we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs though. We need to adapt.
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
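For what it's worth, the arithmetic behind the asteroid analogy is just expected value; a tiny sketch using the 1-in-1000 figure from the comment above and a rough world population (both illustrative assumptions, not estimates):

    # Expected deaths from a low-probability, civilization-ending event.
    p_catastrophe = 1 / 1000          # the assumed 1-in-1000 chance from above
    world_population = 8_000_000_000  # roughly 8 billion people

    expected_deaths = p_catastrophe * world_population
    print(f"Expected deaths: {expected_deaths:,.0f}")  # 8,000,000

Even at odds most people would dismiss as negligible, the expected toll is in the millions, which is why the analogy leans on contingency planning.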
You could use the exact same argument to argue the opposite. Simply change the first premise to "Super intelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.
So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.
That’s not how logic works. The GP is applying the precautionary principle: when there's even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.
Changing the premise to "superintelligence is the only thing that can save us" doesn’t invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way, the real question is which scenario is more likely, not whether the risk-based logic is flawed.
Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US admin are an example of us doing the contrary currently.
Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.
Also, @tsimionescu's reasoning is spot on, and exactly how logic works.
Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger for example than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).
I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.
It literally isn't. Changing or reversing a premise while not addressing the point that was made is not a valid way to counter the initial argument logically.
Just like your proposition that any "small" chance justifies investing "everything" disregards the same argument regarding the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering the argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.
The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.
Isn't the question you're posing basically Pascal's wager?
I think the chance they're going to create a "superintelligence" is extremely small.
That said I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades or even ever.
> Predicting the future is famously difficult
That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"
We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.
> I think the chance they're going to create a "superintelligence" is extremely small.
I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that for some reason the human brain achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.
Most forecasters on prediction markets are predicting AGI within a decade.
Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?
> that progress on AI will just stop for some reason
Yeah it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.
Is it certain that every mystery will be solved eventually?
Aren't we already past 1/1000th of the performance we would classify as superintelligence?
There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.
We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.
I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".
> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.
Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.
>And we have no way of knowing the true chance of superintelligence arising from the current path of AI research
What makes people think that future advances in AI will continue to be linear instead of falling off and plateauing? Don't all breakthrough technologies develop quickly at the start and then fall off in improvements once all the 'easy' improvements have already been made? In my opinion, AI and AGI are like the car and the flying car. People saw continuous improvements in cars and thought that rate of progress would continue indefinitely, leading to cars that could not only drive but fly as well.
You bring up the example of an extinction-level asteroid hurling toward earth. Gee, I wonder if this superintelligence you’re deathly afraid of could help with that?
This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.
How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?
If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.
Since the internet's inception there have been a few wrong turns taken by the wrong people (and lizards, ofc) behind the wheel, leading to the sub-optimal, enshittified(tm) experience we have today. I think the GP just doesn't want to live through that again.
> Superintelligence poses an existential threat to humanity
I disagree at least on this one. I don't see any scenario where superintelligence comes into existence, but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine-intelligence would settle there. It's a vanishingly low chance event. It considerably changes the later 1-in-n part of your comment.
So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?
> There are almost no statements about the future which I'd assign this level of confidence to.
You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
Sounds a little too much like, "It's not AGI today ergo it will never become AGI"
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
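A minimal sketch of the compounding-advantage argument above, using the hypothetical 20%/25%/30% boosts from the comment (whether advantages really compound like this is exactly what the replies dispute):

    # Cumulative lead of a lab whose tooling improves each generation,
    # relative to a baseline lab whose productivity stays at 1.0.
    boosts = [0.20, 0.25, 0.30, 0.35, 0.40]  # hypothetical per-generation boosts

    cumulative_lead = 1.0
    for gen, boost in enumerate(boosts, start=1):
        cumulative_lead *= (1.0 + boost)
        print(f"gen {gen}: lead over baseline = {cumulative_lead:.2f}x")
    # Because each generation's tools help build the next generation,
    # the gap widens multiplicatively rather than additively.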
> Does the current AI give productivity benefits to writing code? Probably.
> If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
That’s a bit of a stretch, generative AI is least capable of helping with novel code such as needed to make AGI.
If anything I’d expect companies working on generative AI to be at a significant disadvantage when trying to make AGI because they’re trying to leverage what they are already working on. That’s fine for incremental improvement, but companies rarely ride one wave of technology to the forefront of the next. Analog > digital photography, ICE > EV, coal mining > oil, etc.
> At the state that we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
That was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix of the tragedy of the commons, the fact that despite better understanding with every passing day we still don't know exactly why LLMs perform so well emergently rather than by being engineered that way, and future progress.
Do you think you can find a way around access boundaries to masquerade your Create/Update requests as Read in the log system monitoring it, when you have super intelligence?
Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).
There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.
Hobbies, hanging out with friends, reading, etc. That's basically it.
Probably no international travel.
It will be like a simple retirement on a low income, because in a socialist system the resources must be rationed.
This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve. No purpose. Drug use, debauchery, depression, violence, degeneracy, gangs.
It will be a true idiocracy. No Darwinian selection pressures, unless the system enforces eugenics and population control.
Wait, wait, wait. Our society's gonna fall apart due to a lack of Darwinian selection pressure? What do you think we're selecting for right now?
Seems to me like our culture treats both survival and reproduction as an inalienable right. Most people would go so far as to say everyone deserves love, "there's a lid for every pot".
> This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve.
Maybe, if the only flavor of ambition you're aware of is that of SV types. Plenty of people found achievement and meaning before and alongside the digital revolution.
I mean common people will be affected just as badly as SV types. It will impact everyone.
Jobs, careers, real work, all replaced by machines which can do it all better, faster, cheaper than humans.
Young people with modest ambitions to learn and master a skill and contribute to society, and have a meaningful life. That can be blue collar stuff too.
How will children respond to the question - "What do you want to be when you grow up?"
They can join the Amish communities where humans still do the work.
> So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
This was the fear when the cotton gin was invented. It was the fear when cars were created. The same complaint happened with the introduction of electronic, automated telephone switchboards.
Jobs change. Societies change. Unemployment worldwide is near the lowest it has ever been. Work will change. Society will eventually move to a currency based on energy production, or something equally futuristic.
This doesn't mean that getting there will be without pain.
Where did all the work-horses go? Why is there barely a fraction of the population there once was? Why did they not adapt and find niches where they had a competitive advantage over cars and machines?
The horses weren't the market the economy is selling to, the people are. Ford figured out that people having both time and money is best for the economy. We'll figure out that having all the production capabilities but none of the market benefits nobody.
The goal for AGI/ASI is to create machines that can do any job much faster, better, and cheaper than humans. That's the ultimate end point of this progress.
The economic value of human labour will drop to zero. That would be an existential threat to our civilization.
> are just really useful input/output devices that respond to a stimulus
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
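To illustrate the "no need to train your own text classifier" point, here is a minimal sketch using the Hugging Face transformers zero-shot pipeline; the model choice, example sentence, and labels are arbitrary examples, not anything from this thread:

    # pip install transformers torch
    from transformers import pipeline

    # A pretrained model doing classification with zero task-specific training.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "The package arrived two weeks late and the box was crushed.",
        candidate_labels=["shipping problem", "billing problem", "product quality"],
    )
    print(result["labels"][0])  # highest-scoring label, e.g. "shipping problem"

A few years ago this would have meant collecting labeled data and training a dedicated model; now it's a few lines against a pretrained one.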
The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.
It might not be a direct US-govt project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US-govt has the muscle and legal authority to assert control over it.
A good deal for everyone involved really. These companies get to make bank and technology that furthers their market dominance, the US-govt gets potentially "Manhattan project"-level pivotal technology— it's elites helping elites.
Unless China handicaps their progress as well (which they won't; see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
What I meant is: Europe can choose to regulate as they do, and end up living in a Chinese dystopia because the Chinese will drastically benefit from non-regulated AI, or they can create their own AI dystopia.
If you are suggesting that China may use AI to attack Europe, they can invest in defense without unleashing AI domestically. And I don't think China will become a utopia with unregulated AI. My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have. But if things go sideways they may regret it too.
Not attack, just influence. Destabilize, if you want: advocate regime change, sabotage trust in institutions. Being on the defense in a propaganda war doesn't really work.
With the US having already lost the ideological war with Russia and China, Europe is very much next.
> If you are suggesting that China may use AI to attack Europe
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: It is a dystopia, and that dystopia is responsible for much of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).
Do Zambians currently live in an American dystopia? I think they just do their own thing and don't care much what America thinks as long as they don't get invaded.
We don't know whether pushing towards AGI is marching towards a dystopia.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
As with nuclear weapons, there is non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
This thought process is no different than it was with nuclear weapons.
The primary difference is observability: with satellites we had some confidence that other nations were respecting treaties, or at least that there was enough reaction time for mutual destruction, but with AI development we lack all of that.
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
US government is behind because Biden admin were pushing strongly for controls and regulations and told Andersen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.
Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.
To be fair, many of us arrived at the idea that AI was humanity's inevitable endpoint ahead of, and independently of, whether we would ever see it in our lifetimes. It's easy enough to see how people could independently converge on such an idea. I don't see that view as related to atheism in any way other than atheism creating space for the belief, in the same way it creates space for many others.
I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningful lives. However, I currently don't see how our current levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.
We'll be debating whether or not "AGI is here" in philosophical terms, in the same way people debate if God is real, for years to come. To say nothing of the untaxed "nonprofit" status these institutions share.
Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").
Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.
Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
the problem is, none of that needs to happen. If the AI can start coming up with novel math or physics, it's game over. Whether the AI is "sentient" or not, being able to break that barrier would send us into an advancement spiral.
None of my argument depends on the AI being sentient.
You are surely correct that there are weaker imaginable AIs than the strongly superhuman AI that OpenAI and I are talking about which would still be more powerful than nuclear weapons, but they are more debatable. For example, whether discovering new physics would permit the construction of new, more powerful weapons is debatable; it didn't help Archimedes or Tipu Sultan. So discussing such weak claims is likely to end up off in the weeds of logistics and speculation about exactly what kind of undiscovered physics and math would come to light. Instead, I focused on the most obviously correct ways that strongly superhuman AI would be more powerful than nuclear weapons.
These may not be the most practically important ways. Maybe any strongly superhuman AI would immediately discover a way to explode the sun, or to control people's minds, or to build diamondoid molecular nanotechnology, or to genetically engineer super-plagues, or to collapse the false vacuum. Any of those would make nuclear weapons seem insignificant. But claims like those are much more uncertain than the very simple question before us: whether what OpenAI is trying to develop would be more powerful than nuclear weapons. Obviously it would be, by my reasoning in the grandparent comment, even if this isn't a false vacuum, if the sticky fingers problem makes diamondoid nanotechnology impossible, if people's minds are inherently uncontrollable, etc. So we don't need to resolve those other, more difficult questions in order to do the much easier task of ranking OpenAI's objective relative to nuclear weapons.
I feel this. I had a very productive convo with an LLM today and realized that a huge part of the value of it was that it addressed my questions in a focused way, without trying to sell me anything or generate SEO rankings or register ad impressions. It just helped me. And that was incredibly refreshing in a digital world that generally feels adversarial.
Then the thought came, when will they start showing ads here.
I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!
The "good" thing is this is all way too expensive to be ad-supported. Maybe there will be some ad-supported products using very small/cheap models, but the leading edge stuff is always going to be at the leading-edge of compute usage too, and someone has to pay the bill. Even with investors subsidizing a lot of the costs, it's still very expensive to use the best models heavily for real work.
Subscription services can sell ads too. See Hulu, or Netflix. Spotify might not play "radio ads" if you pay, but it will still advertise artists on your home screen.
These models being expensive leads me to think they will look at all methods of monetization possible when seeking profitability. Rather than ads being off the table, it could feasibly make ads be on the table sooner.
Maybe it could happen, but the revenue that can be made per user from ads is basically insignificant compared to the compute costs. They’d be pissing off their users for a very marginal benefit.
There's no such thing as too expensive to be ad-supported. There might be too expensive to be ONLY ad-supported, but as a revenue stream ads can be layered on top of other sources. For example, see that the ads shown on a $100/mo cable package!
It is guaranteed that the models will become salespeople in disguise with time. This is just how the world works. Hopefully competition can stave it off but I doubt it.
It's also why totalitarian regimes love it, they can simply train it to regurgitate a modified version of reality.
I'm hoping there will always be a good LLM option, for the following reasons:
1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research/development, combined with techniques like distillation will keep the best open LLMs pretty good, if not the best.
2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself.
These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
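For anyone curious what "a good open LLM alternative" looks like in practice, here is a minimal sketch using llama-cpp-python with a quantized GGUF model; the file path is a placeholder for whichever open-weights model you have downloaded:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Load a locally downloaded, quantized open-weights model (path is a placeholder).
    llm = Llama(model_path="./models/some-open-model-q4_k_m.gguf", n_ctx=4096)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize the trade-offs of quantization."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])

Everything runs on your own hardware, so there is no ad layer and no ulterior motive beyond whatever went into the model's training.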
For all of the skepticism I've seen of Sam Altman, listening to interviews with him (eg by Ben Thompson) he says he really does not want to create an ad tier for OpenAI.
Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)
Ads intermixed into llm responses is so clearly evil that openai will never do it so long as the nonprofit has a controlling stake (which it currently still has), because the nonprofit would never allow it.
The insidious part is it doesn't have to be so blatant as adverts, you can achieve a lot by just slight biases in text output.
Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
>LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle
replace LLMs with TV, or smartphones, or maybe even mcdonald's, and you've got the same idea.
through TV, corporations got to control a lot of the social world and people's behavior.
Ads / SEO but with AI responses was so obviously the endgame given how much human attention it controls and the fact that people aren't really willing to pay what it costs (when decent free, open-weights alternatives exist)
In the future AI will be commoditized. You'll be able to buy an inference server for your home in the form factor like a wi-fi router now. They will be cheap and there will be a huge selection of different models, both open-source and proprietary. You'll be able to download a model with a click of a button. (Or just torrent them.)
The smaller models are becoming even more capable now. Add that with a suite of tools and integrations and you can do most of what you do online within the infra at home.
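If that home inference box materializes, talking to it will probably look something like this; a sketch assuming a local server that exposes the now-common OpenAI-compatible chat endpoint, with the host, port, and model name as placeholders:

    import requests

    # Query a hypothetical inference box on the home network.
    resp = requests.post(
        "http://192.168.1.50:8080/v1/chat/completions",
        json={
            "model": "local-model",
            "messages": [{"role": "user", "content": "What's on my calendar today?"}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])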
how many times must we repeat that AGI is whatever will sell the project. it means nothing. even philosophers don't have a good definition of "intelligence"
AGI just refers roughly to the intelligence it would take to replace most if not all white collar workers. There is no precise definition, but it's not meaningless.
Isn't this already the case? Perhaps you mean in a non-transient fashion, i.e. internalizes the in-context learning into the model itself, sort of an ongoing training, that isn't sort of a "hack" like writing notes or adding to a RAG database or whatever.
I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of Open AI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
That world never existed. Yes, pockets did - IT professionals with broadband lines and spare kit hosting IRC servers and phpBB forums from their homes free of charge, a few VC-funded companies offering idealistic visions of the net until funding ran dry (RIP CoHost) - but once the web became privatized, it was all in service of the bottom line by companies. Web 2.0 onwards was all about centralization, surveillance, advertising, and manipulation of the populace at scale - and that intent was never really a secret to those who bothered to pay attention. While the world was reeling from Cambridge Analytica, us pre-1.0 farts who cut our teeth on Telnet and Mosaic were just kind of flabbergasted that ya'll were surprised by overtly obvious intentions.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
I don’t think the parent was saying that everyone’s intentions were pure until recently, but rather that naked greed wasn’t cool before, but now it is.
The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.
What we are observing is the effects of profit maximization when the core value to the user is already fulfilled. It's a type of pathological optimization that is useful at the beginning but eventually pathologizes.
When we already have efficient food production that drove down costs and increased profits (a good thing), what else is there for companies to optimize for, if not loading it with sugar, putting it in cheap plastic, bamboozling us with ads?
This same dynamic plays out in every industry. Markets are a great thing when the low hanging fruit hasn't been picked, because the low hanging fruit is usually "cut the waste, develop basic tech, be efficient". But eventually the low hanging fruit becomes "game human's primitive reward circuits".
I think it did and still does today - every single time an engineer sees a problem an starts an open-source project to solve it - not out of any profit motive and without any monetization strategy in mind, but just because they can, and they think the world would be better off.
I have to agree. That's one of the dangers of today's world; the risk of believing that we never had a better one. Yes, the altruism of yesteryear was partially born of convenience, but it still existed. And I remember people actually believing it was important and acting as such. Today's cynicism and selfishness seem a lot more arbitrary to me. There's absolutely no reason things have to be this way. Collectively, we have access to more wealth and power now than we ever did previously. By all accounts, things ought to be great. It seems we just need the current generation of leaders to re-learn a few lessons from history.
You and I are on the same path, just at different points in the journey. Your response is very similar to my own tone and position a decade ago, trying to celebrate what we had before in an attempt to shepherd others towards a better future together. Time wore down that naivety into the cynicism of today, because I’ve come to realize that those celebrations simply coddle those who do not wish to put in the effort for change and yearn for a return to past glories.
We should acknowledge the past flatly and objectively for what it was and spend more time building that future, than listening to the victors of the past brag and boast, content to wallow in their accomplishments instead of rejoining contributors to tomorrow. The good leaders of yesteryear have stepped aside in lieu of championing newer, younger visionaries; those still demanding respect for what they did fifty years ago in circumstances we can only dream about, are part of the problem.
Sure it has. For every Woz, there was a Jobs; for every Linus, a Bill (Gates). For every starry-eyed engineer or developer who just wants to help people, there are business people who will pervert it into an empire and jettison them as soon as practical. For every TED, there’s a Davos; for every DEFCON, there’s a glut of vendor-specific conferences.
We should champion the good people who did the good things and managed to resist the temptations of the poisoned apple, but we shouldn’t hold an entire city on a pedestal because of nostalgia alone. Nobody, and no entity, is that deserving.
I would argue that cynicism is born of attempting to assert accountability and finding repeated harm from said attempts, rather than some intrinsic pre-existing apathy or laziness.
I think most people will snitch on bad behavior as children. However, our systems often allow other children to discipline the snitch, rather than correct the negative behavior the snitch raised. We see it in adult systems as well: whistleblowers often end up with substantially shorter and poorer lives for attempting to assert accountability or consequences on those who committed them, while the perpetrators often enjoy lives of immense wealth and reward regardless of the whistleblower's actions.
If you want people to stop being "lazy" and "cynical", then you have to support them when systems turn against them. In my experience, none of ya'll actually want to also walk out of work when layoffs happen following a profitable quarter for no other reason than to juice the share price, none of ya'll also want to walk off the job because your employer is taking contracts from authoritarian regimes, none of ya'll also want to put yourselves in the line of fire and risk harm over your purported values.
Don't blame us cynics when we have the battle scars showing our commitment to a better tomorrow. What have you done to prevent cynicism?
Coincidentally, and as another pre-1.0 fart myself :-) -- one who remembers when Ted Nelson's "Computer Lib / Dream Machines" was still just a wild hope -- I was thinking of something similar the other day (not USPS-specific for hosting, but I like that).
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public:
https://newpublic.org/
"Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war.
https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War
"According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads:
https://www.urban.org/policy-centers/cross-center-initiative...
"In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's about US$700 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, like perhaps US$70 per person per year (or US$20 billion per year)?
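A quick back-of-the-envelope check on those per-person figures (population rounded to 330 million; the $20 billion is the assumption from the paragraph above):

    us_population = 330_000_000      # rough 2021 figure

    road_spending = 154e9 + 52e9     # state/local + federal, from the Urban Institute numbers above
    print(f"Roads: ~${road_spending / us_population:,.0f} per person per year")           # ~$624
    social_media_budget = 20e9       # the hypothetical public social-media infrastructure
    print(f"Proposal: ~${social_media_budget / us_population:,.0f} per person per year")  # ~$61

Same ballpark as the rough $700 and $70 figures above.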
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big-government suggestion by me from 2010 -- also mostly ignored (and now, 15 years later, the USA is in a political crisis over supply-chain dependency and still isn't doing anything much related to it):
"Build 21000 flexible fabrication facilities across the USA"
https://web.archive.org/web/20100708160738/http://pcast.idea...
"Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
They deeply believe in the Ayn Rand mindset that the system that brings them the most individual wealth is also the best system for humanity as a whole.
The problem with that mindset is that money is a proxy for the Marxist idea of inherent value. The distinction does not matter when you are just an average dude, doubling your money doubles the amount of material wealth you have access to.
But once you control a significant enough chunk of money, it becomes clear the pie doesn't get any bigger the more shiny coins you have, you only have more relative purchasing power, automatically making everyone else poorer.
When people that wealthy are that delusional... With few checks or balances from politics, media, or even social media... I don't think humanity as a whole is in for a great time.
They are roughly as delusional as everyone else. There is an innate human bias to convince yourself that what benefits you is also best for everyone else.
It’s just that their biases have much more capacity to cause damage as their wealth gives them so much power.
Yes, people are generally delusional; importantly though, some people are much less so (and some more so). Being connected to reality, being grounded, are learnable traits (but not very valuable to CEOs and narcissists).
> They are roughly as delusional as everyone else.
I would bet serious money that people who believe in Ayn Rand are generally more delusional than others, and the same goes for the ultra-wealthy living in a bubble of sycophants.
And their wealth gives them much more capacity - and motive - to cause damage.
Like everything, it's projection. Those who loudly scream against something are almost always the ones engaging in it.
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
Oh, I'm not saying they ever believed more than their self-centered views, but that in a world that leaned more liberal there was value in trying to frame their work in those terms. Now there's no need to pretend.
And to those who say "at least now they're honest," I say "WHY?!" Unconditionally being "good" would be better than disguising selfishness as good. But that's not really a thing. Having to maintain the pretense of doing good puts significant boundaries on what you can get away with, and increases the consequences when people uncover some shit.
Condoning "honest liars" enables a whole other level of open and unrestricted criminality.
Why are you changing the subject? The “War on Terror” was never intended to spread democracy as far as I know; democracy was a means by which to achieve the objective of safety from terrorism.
Is it reasonable to assign the descriptor “authoritarian” to anyone who simply does not subscribe to the common orthodoxy of one faction in the american culture war? That is what it seems to me is happening here, though I would love to be wrong.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
Donating millions to a fascist president (in Altman’s case) seems pretty authoritarian to me. And he seems happy enough hanging out with Thiel and other Yarvin groupies.
I think this is more a symptom of the level of commonplace corruption in the American regulatory environment than any indication of the political views of the person directing such donations.
Tim Apple did it too, and we don’t assume he’s an authoritarian now too, do we? I imagine they would probably have done similarly regardless of who won the election.
It sure seems like an endorsement, but I think it’s simply modern corporate strategy in the American regulatory environment, same as when foreign dignitaries stay in overpriced suites in the Trump hotel in DC.
Those who don’t kiss the ring are clearly and obviously punished. It’s not in the interest of your shareholders (or your launch partners) to be the tall poppy.
I do feel that way about every CEO in those cheery inauguration day photos (https://apnews.com/article/trump-inauguration-tech-billionai...). Zuckerberg, Bezos, Pichai, Cook, Altman, Musk, Thiel: enablers of fascism, every one. However, it should be noted that Cook donated from his own name and not Apple. Guess he didn't want his shittiness to rub off on his company.
As far as “enablers” of fascism - would we have the same amount of fascism if they didn’t participate? I posit that the answer is yes.
Furthermore, you are dead wrong on the last point. The “dispute” between the FBI and Apple is a fiction designed to restore public trust in Apple’s privacy stance following the Snowden revelations about FAA702 (aka PRISM) that shows that companies allow the USG warrantless access to their data in realtime via special APIs or portals.
The tech executives came to DC to meet with Obama in the wake of the whole Snowden thing to discuss it, though it was widely reported as being a consult on fixing healthcare.gov (lol) a few outlets reported it correctly. There are photos of the meeting kicking around.
I imagine the Apple-vs-the-FBI narrative (which is widely regarded as true and has resulted in mainstream false belief, such as yours demonstrated here) was borne directly out of these meetings.
Apple intentionally maintains access to the majority of their users’ data by the USG and the CCP (in their respective zones). It is required for them to continue operating in their current fashion. Every iMessage and (basically) every file
in iCloud (photos included) is readable by Apple and the government. Apple has the technical capability to prevent this by migrating their userbase to e2ee systems, and they do not.
I firmly believe that this is by design, and that they would be very severely punished, legally or extralegally, if they changed the status quo.
I’m not sure exactly what they meant by “liberal” in this case, but since they put it in contrast with authoritarianism, I assume they meant it in the conventional definition of the word (where it is the polar opposite of authoritarianism). Instead of the American politics-as-sports definition that makes it a synonym for “team blue.”
correct. "liberal" as in the general ideas that ie expanding the franchise is important, press freedoms are good, that government can do good things for people and for capital etc. Wikipedia's intro paragraph does a good job of describing what I was getting at (below). In prior decades Republicans in the US would have been categorized as "liberal" under this definition; in recent years, not so much.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
No, "authoritarian" is a word with a specific meaning. I'm not sure about applying it to Sam Altman, but Marc Andreessen has expressed views that I consider authoritarian in his victory lap tour since last year's presidential election.
No I don't think it is. I DO think those two people want to be in charge (along with other billionaires) and they want the rest of us to follow along, which is in my book an authoritarian POV. pmarca's recent "VC is the only job that can't be done by AI" is a good example of that; the rest of us are to be managed and controlled by VCs and robots.
It is opt-in until they manage to convince some government to let them be the contracted provider of "humanness verification", which is then made a prerequisite for accessing services.
Comcast is also opt-in. Except, in many areas there are no real alternatives.
I doubt Worldcoin will actually manage to corner the market. But the point is, if it did, bad things would happen. Though, that’s probably true of most products.
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
Yes and no. It sounds like the capped profit PPU holders will get to have their units convert 1:1 with unlimited profit equity shares, which are obviously way more valuable. So the nonprofit loses insanely in this move and all current investors and employees make a huge amount.
A PBC is just a for-profit company that has _some_ sort of specific mandate to benefit the "public good" - however it chooses to define that. It's generally meant to provide some balance toward societal good over the more common, strictly shareholder profit-maximizing alternative.
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
The charter of a public-benefit corporation gives the company's board and management a bit of legal cover for making decisions that don't serve to maximize, or may even limit, financial returns to shareholders, when those decisions are made for the benefit of the public.
Reality: It is the same as any other for-profit with a better-sounding name. It confuses a lot of people into thinking it's a non-profit without being one.
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
There are a lot of good points here, from multiple vantage points, on the question of how imminent AGI is, and whether it is even viable at all, metaphysically or logistically.
I personally think the conversation, including obviously in the post itself, has swung too far in the direction of how AGI can or will potentially affect the ethical landscape regarding AI, however. I think we really ought to concern ourselves with addressing and mitigating effects that it already HAS brought - both good and bad - rather than engaging in any excessive speculation.
SamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something now.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office as a first-class feature, which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and just like MS they'll do it in-house or have the providers bid against each other.
The only way OpenAI David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
It's doubtful if there even is a race anymore. The last significant AI advancement in the consumer LLM space was fluent human language synthesis around 2020, with its following assistant/chat interface. Since then, everything has been incremental — larger models, new ways to prompt them, cheaper ways to run them, more human feedback, and gaming evaluations.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Saying LLMs have only incrementally improved is like saying my 13 year old has only incrementally improved over the last 5 years. Sure, it's been a set of continuous improvements, but that has taken it from a toy to genuinely insanely useful.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users, and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from Open AI at a later date.
> any company sitting this out risks being unable to capture users back from Open AI at a later date.
Why? I paid for Claude for a while, but with Deepseek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that Deepseek and Gemini can basically do all you need for free.
I doubt it. Google is shoving Gemini in everyone’s face through search, and Meta AI is embedded in every Meta product. Heck, Instagram created a bot marketplace.
They might not “know” the brand as well as ChatGPT, but the average consumer has definitely been exposed to those at the very least.
DeepSeek also made a lot of noise, to the point that, anecdotally, I’ve seen a lot of people outside of tech using it.
I can't square how OpenAI can capture users and presumably retain them when the incumbents have been capturing users for multiple decades. Why wouldn't the incumbents be able to retain theirs?
If every major player has an AI option, I'm just not understanding how, because OpenAI moved first or got big first, the hugely successful companies that did the same thing for multiple decades don't have the same advantage.
Who knows how this will play out, but user behavior is always somewhat sticky and OpenAI now has 400M+ weekly active users. Currently, I'm not sure there is much of a moat, as many would jump if, say, Google released a model that is 10x better. However, there are myriad ways that OpenAI could slowly try to make their userbase even stickier:
1. OpenAI is apparently in the process of building a social network.
2. OpenAI is apparently working with Jonny Ive on some sort of hardware.
3. OpenAI is increasingly working on "memory" as a LLM feature. Users may be less likely to switch as an LLM increasingly feels like a person that knows you, understands you, has a history with you, etc.
4. Google and MSFT are leveraging their existing strengths. Perhaps you will stick with Gemini given deep integration with Android, Google Drive, Sheets, Docs, etc.
5. LLMs, as depressing as this sounds, will increasingly be used for romantic/friend purposes. These users may not want to switch, as it would be like breaking up and finding a new partner.
6. Your chat history, if it can't be easily exported/imported, may be a sticky feature, especially if it can be improved (e.g. easily search and cross-reference chats, like a supercharged interconnected note app with brains).
I could list 100 more of these. Perhaps none of the above will happen, but again, they have 400M weekly users and they will find ways to keep them. It's a lot easier to keep users that have a habit of showing up than to get them in the first place. There's a reason that Google is treating this like an emergency; they are at serious risk of having their search cash cow permanently disrupted if they don't act fast to win.
6 (can’t export/import chat history) is already a wrap, since every user is prohibited from using ChatGPT chat logs to “develop models that compete with OpenAI.” If you export your chats and give them to Gemini or Claude, or post them on X and Grok reads them, then you just violated the OpenAI terms. That’s grounds for a permaban or a lawsuit for breach of contract (lol) … maybe your companies accept this risk but I’m in malicious compliance mode.
Google is alright, but they have similar stupid noncompete vendor lock in rule, and no way to opt out of training, so there’s no real reason to trust Google. Yeah they could ship tool use in reasoning to catch up to o3, but it’ll just be catching up and not passing unless they fix the stupid legal terms.
Claude IDK how to trust, they train on feedback and everything is feedback, and they have the noncompete rule written even more broadly, dumb to use that.
Grok has a noncompete rule but also has a way to opt out of training, so it’s on the same tier as ClosedAI. I use it sometimes for jokey toy image generation crap, but there’s no way to use it for anything serious since it has a copy-pasted ClosedAI prohibition.
Mistral needs better models and simpler legalese, it’s so complicated and impossible to know which of the million legal contracts applies
IMHO meta is the only player, but they shot themselves in the foot by making Llama 4 too big for the local llama community to even use, super dumb, killed their most valuable thing which was the community.
That means the best models we can use for work without needing to worry about a lawsuit are Qwen and DeepSeek distills; no American AI is even in the same ballpark, and Gemma 3 is the refusal king if you even hint at something controversial. Basically, America is getting actively stomped by China in AI right now, because their stuff is open and interoperable, and ours is closed and has legal noncompete bullshit. What can we actually build that doesn’t compete with these companies? Nothing.
Very thought provoking reply. #3 sounds the most sticky to me, in the product sense that you'd build "your own LLM/agent" and plug it other services. I heard this on a product podcast [1], think of it like Okta SSO integration: access controls for your personal/sensitive LLM stuff vs all other services trying to get you to use their LLM.
#5 stands out as well as a substantial barrier.
The rest to me are sticky, but no more uniquely sticky than any other service that retains data. Like the switching cost of email or a browser. It does stick, but it's not insurmountable, and once the switch is made, it's like: why did I wait so long? (I'm a Safari user!)
No, it's still just a toy. Until they can make the models actually consistently good at things, they aren't going to be useful. Right now they still BS you far too much to trust them, and because you have to double-check their work every time, they are worse than no tool at all.
To extend your illustration, 5 years ago no one could train an LLM with the capabilities of a 13 year old human; now many companies can both train LLMs and integrate them into products.
> taken it from a toy to genuinely insanely useful.
It's been five years. There is no AI killer app. Agentic coding is still hot garbage. Normal people don't want to use AI tools despite them being shoved into every SaaS under the sun. LLMs are most famous among non-tech users for telling you to put glue into pizza. No one has been able to scale their chatbots into something profitable, and no one can put a date on when they'll be profitable.
Why are you still pretending anything is going to come out of this?
Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
>In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
We do appear to be hitting a cap on the current generation of auto-regressive LLMs, but this isn't a surprise to anyone on the frontier. The leaked conversations between Ilya, Sam and Elon from the early OpenAI days acknowledge they didn't have a clue as to architecture, only that scale was the key to making experiments even possible. No one expected this generation of LLMs to make it nearly this far. There's a general feeling of "quiet before the storm" in the industry, in anticipation of an architecture/training breakthrough, with a focus on more agentic, RL-centric training methods. But it's going to take a while for anyone to prove out an architecture sufficiently, train it at scale to be competitive with SOTA LLMs, and perform enough post-training, validation and red-teaming to be comfortable releasing it to the public.
Current LLMs are years and hundreds of millions of dollars of training in. That's a very high bar for a new architecture, even if it significantly improves on LLMs.
ChatGPT was not released to the general public until November 2022, and the mobile apps were not released until May 2023. For most of the world LLM's did not exist before those dates.
This site and many others were littered with OpenAI stories calling it the next Bell Labs or Xerox PARC and other such nonsense going back to 2016.
And GPT stories kicked into high gear all over the web and TV in 2019 in the lead-up to GPT-2 when OpenAI was telling the world it was too dangerous to release.
Certainly by 2021 and early 2022, LLM AI was being reported on all over the place.
>For most of the world LLM's did not exist before those dates.
Just because people don't use something doesn't mean they don't know about it. Plenty of people were hearing about the existential threat of (LLM) AI long before ChatGPT. Fox News and CNN had stories on GPT-2 years before ChatGPT was even a thing. Exposure doesn't get much more mainstream than that.
As another proxy, compare Nvidia revenues - $26.91bln in 2022, $26.97bln in 2023, $60bln 2024, $130bln 2025. I think it's clear the hype didn't start until 2023.
You're welcome to point out articles and stories before this time period "hyping" LLM's, but what I remember is that before ChatGPT there was very little conversation around LLM's.
If you're in this space and follow it closely, it can be difficult to notice the scale. It just feels like the hype was always big. 15 years ago it was all big data and sentiment analysis and NLP, machine translation buzz. In 2016 Google Translate switched to neural nets (LSTM) which was relatively big news. The king+woman-man=queen stuff with word2vec. Transformer in 2017. BERT and ELMo. GPT2 was a meme in techie culture, there was even a joke subreddit where GPT2 models were posting comments. GPT3 was also big news in the techie circles. But it was only after ChatGPT that the average person on the street would know about it.
Image generation was also a continuous slope of hype all the way from the original GAN, then thispersondoesnotexist, the sketch-to-photo toys by Nvidia and others, the avocado sofa of DallE. Then DallE2, etc.
The hype can continue to grow beyond our limit of perception. For people who follow such news their hype sensor can be maxed out earlier, and they don't see how ridiculously broadly it has spread in society now, because they didn't notice how niche it was before, even though it seemed to be "everywhere".
There's a canyon of a difference between excitement and buzz vs. hype. There was buzz in 2022, there was hype in 2023. No one was spending billions in this space until a public demarcation point that, not coincidentally, happened right after ChatGPT.
I'd say Chain-of-Thought has massively improved LLM output. Is that "incremental"? Why is that more incremental than the move from GPT-2 to GPT-3? Sure, you can say that this is when LLMs first passed some sort of Turing test, but fundamentally there was no technological difference from GPT-3 to GPT-4. In fact, I would say the quality of GPT-4 unlocked thousands (millions?) more use-cases that were not very viable with the quality delivered by GPT-3. I don't see any reason why further LLM improvements won't keep unlocking more use-cases.
Yes. But they have also improved a lot. Incremental just means that the function is going up without breaking points. We haven't seen anything revolutionary, just evolutionary in the last 3 years. But the models do provide 2 or 3 times more value. So their pace of advancement is not slow.
The better you know a field, the more it looks incremental. In other words, incrementalness is more a function of how much attention you pay or how deeply you research it. Relativity and quantum mechanics were also incremental. Copernicus and Kepler were incremental. Deep learning itself was incremental: based on almost identical networks from the 90s (CNN), which were using methods from the 80s (backprop) on architectures from the 70s (neocognitron) using activation functions from the 60s and the basic neuron model from the 40s (McCulloch and Pitts), which was just a mathematization of observations in biology made via microscopy, integrated with mathematical logic and the electrical logic gates developed around the same time (Shannon), so it's just logic as formalized by Gödel and others, and it goes back to Hilbert's program, which can be extrapolated from Leibniz etc. etc. It's not hard to say that "it's really just previous thing X plus previous thing Y, nothing new under the sun" about literally anything.
"It just suddenly appeared out of nowhere" is just a perception based on missing info. Many average people think ChatGPT was a sudden innovation specifically by OpenAI seemingly out of nowhere. Because they didn't follow it.
Well I think you’re correct that they know the jig is up, but I would say they know the AI bubble is about to burst so they want to cash out before that happens.
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients. I know it’s fun to imagine AI is some big scam like crypto, but you’d have to be ignoring a lot of genuine non hype economic movement at this point to assume GAI isn’t making any money.
Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
> AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
Doctors were using Google to diagnose patients before. The thing is, it's still the doctor delivering the diagnosis, the doctor writing the prescription, and the doctor billing insurance. Unless and until patients or hospitals are willing and legally able to use ChatGPT as a replacement for a doctor (unwise), ChatGPT is not about to eat any doctor's lunch.
Not OP, but I think this makes the point, not argues against it. Something has come along that can supplant Google for a wide range of things. And it comes without ads (for now). It’s an opportunity to try a different business model, and if they succeed at that then it’s off to the races indeed.
When the Wright brothers made their plane, they didn't expect that today there would be thousands of planes in the air at any given time.
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular people still thought there would be those who are going to stick with horses.
I think you're right on AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was roughly the equivalent of 500 laborers' worth of work, now automated. One AI computer with some video cards is now worth x number of knowledge workers that never stop working as long as the electricity keeps flowing.
They did actually imagine the World Wide Web at the time of developing the first computer networks. This is one of the most obvious outcomes of a system of networked devices.
Even five years into this "AI revolution," the boosters haven't been able to paint a coherent picture of what AI could reasonably deliver – and they've delivered even less.
Lol, they are not using ChatGPT for the full diagnosis. It's used in steps like double-checking knowledge, such as drug interactions. If you're going to speak on something like this in a vague manner, I'd suggest you google this stuff first. I can tell you for certain that that part in particular is a highly inaccurate statement.
The article you posted describes a patient using ChatGPT to get a second opinion from what their doctor told them, not the doctor themself using ChatGPT.
The article could just as easily be about “Delayed diagnosis of a transient ischemic attack caused by talking to some rando on Reddit” and it would be just as (non) newsworthy.
People aren't saying that AI as a tool is going to go bust. Instead, people are saying that this practice of spending 100s of millions, or even billions of dollars on training massive models is going bust.
AI isn't going to be the world changing, AGI, that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
They made $4 billion last year, not really "little to no money". I agree it's not clear they can justify their valuation but it's certainly not a bubble.
But didn't they spend $9 billion? If I have a machine that magically turns $9 billion of investor money into $4 billion in revenue, I need to have a pretty awesome story for how in the future I am going to be making enormous piles of money to pay back that investment. If it looks like frontier models are going to be a commodity and it is not going to be winner-take-all... that's a lot harder story to tell.
There is a pretty significant difference between “buy $9 for $4” and selling a service that costs $9 to build and run per year for $4 per year. Especially when some people think that service could be an absolute game changer for the species.
It’s ok to not buy into the vision or think it’s impossible. But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of Apple’s time when they should be making music players.
It's a commodity technology and VCs are investing as if this were still a winner-takes-all play. It's obviously not, if there were any doubt about that, Deepseek's R1 release should have made it obvious.
> But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
You're acting as-if OpenAI is still the only player in this space. OpenAI has plenty of competitors who can deliver similar models for cheaper. Gemini 2.5 is an excellent and affordable model and Google has a substantially better capacity to scale because of a multi-year investment in its TPUs.
Whatever first mover advantage OpenAI had has been quickly eliminated, they've lost a lot of their talent, and the chief hypothesis they used to attract the capital they've raised so far is utterly wrong. VCs would be mad to be continuing to pump money into OpenAI just to extend their runway -- at 5 Bln losses per year they need to actually consider cost, especially when their frontier releases are only marginal improvements over competitors.
... this is a bubble despite the promise of the technology and anyone paying attention can see it. For all of the dumb money employed in this space to make it out alive, we'll have to at least see a fairly strong form of AGI developed, and by that point the tech will be threatening the general economic stability of the US consumer.
> When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of Apple’s time when they should be making music players.
This comparison is always used when people are trying to hype something. For every "iPhone" there are thousands of failures
> I started a business that would give people back $9 if they gave me $4
I feel like people overuse this criticism. That's not the only way that companies with a lot of revenue lose money. And this isn't at all what OpenAI is doing, at least from their customers' perspective. It's not like customers are subscribing to ChatGPT simply because it gives them something they were going to buy anyway for cheaper.
Facebook had immense network effects working for it back then.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
OpenAI (or, more specifically, Chat GPT) is CocaCola, not Facebook.
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
> Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
This will happen, and it won't be another model which Open AI can't copy, it'll be products.
I don't doubt OpenAI can create the better models, but better models are no moat if they're not in better products. Right now the main product is chat, which is easy enough to build, but as integrations get deeper how can OpenAI actually ensure it keeps traffic?
Case in point, Siri. Apple allows you to use ChatGPT with Siri right now. If Apple chooses so, they could easily remove that setting. On most devices ChatGPT lives within the confines of an app or the browser. A phone with deep AI integration is arguably a fantastic product— much better than having to open an app and chat with a model. How quickly could Open AI build a phone that's as good as those of the big phone companies today?
To draw a parallel— Google Assistant has long been better than Siri, but to use Siri you don't have to install an app. I've used both Android and iOS, and every time I'm on iPhone I switch back to Siri because in spite of being a worse assistant, it's overall a better product. It integrates well with the rest of the phone, because Apple has chosen to not allow any other voice assistant integrate deeply with the rest of the phone.
Does Google not have brand recognition and consumer goodwill? We might read all sorts of deep opinions of Google on HN, but I think Search and Chrome market share speak for themselves. For the average consumer, I'm skeptical that OpenAI carries much weight.
> For the average consumer, I'm skeptical that OpenAI carries much weight.
My friend teaches at a Catholic girls’ high school and based on what he tells me, everyone knows about ChatGPT, both staff and students. He just had to fail an entire class on an assignment because they all used it to write a book summary (which many of them royally screwed up because there’s another book with a nearly identical title).
It’s all anecdotal and whatnot but I don’t think many of them even know about Claude or Gemini, while ChatGPT has broad adoption within education. (I’m far less clear on how much mindshare it has within the general population though)
Coca Cola does insane amounts of advertising to maintain their position in the mind of the consumer. I don't think it is as sticky as you say it is for OpenAI.
> who will soon go into the professional world and bring that goodwill with them.
...Until their employer forces them to use Microsoft Copilot, or Google Gemini, or whatever, because that's what they pay for and what integrates into their enterprise stack. And the new employee shrugs and accepts it.
> Just like people are forced to use web Office and Microsoft Teams, and start prefering them over Google Docs and Slack? I don't think so
...yes. Office is the market leader. Slack has between a fifth and a fourth of the market. Coca-Cola's products have like 70% market share in the American carbonated soft-drink market [1].
Yep, I mostly interact with these AIs through Cursor. When I want to ask it a question, there's a little dropdown box and I can select openai/anthropic/deepseek whatever model. It's as easy as that to switch.
Yeah but I remember when search first started getting integrated with the browser and the "switch search engine" thing was significantly more prominent. Then Google became the default and nobody ever switched it and the rest is history.
So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
I have my own answers for these, and I'm sure all the smart people figuring out strategy at Open AI have thought about similar things.
It's not clear if Open AI will be able to overcome this commodification issue (personally, I think they won't), but I don't think it's impossible, and there is prior art for at least some of the pages in this playbook.
Yes, I think people severely underrate the data flywheel effects that distribution gives an ML-based product, which is what Google was and ChatGPT is. It is also an extremely capital-intensive industry to be in, so even if LLMs are commoditized, it will be to the benefit of a few players, and barring a sustained lead by any one company over the others, I suspect the first mover will be very difficult to unseat.
Google is doing well for the moment, but OpenAI just closed a $40 billion round. Neither will be able to rest for a while.
Yeah, a very interesting metric to know would be how many tokens of prompt data (that is allowed to be used for training) the different products are seeing per day.
> So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
Maybe the big amounts of money they've given to Apple, which is their direct competitor in the mobile space. Also the good amount of money given to Firefox, which is their direct competitor in the browser space, alongside Safari from Apple.
Most people don't care about the search engine. The default is what they will use unless said default is bad.
I don't think my comment implied that the answers to these questions aren't knowable! And indeed, I agree that the deals to pay for default status in different channels is a big part of that answer.
So then apply that to Open AI. What are the distribution channels? Should they be paying Cursor to make them the default model? Or who else? Would that work? If not, why not? What's different?
My intuition is that this wouldn't work for them. I think if this "pay to be default" strategy works for someone, it will be one of their deeper pocketed rivals.
But I also don't think this was the only reason Google won search. In my memory, those deals to pay to be the default came fairly long after they had successfully built the brand image as the best search engine. That's how they had the cash to afford to pay for this.
A couple years ago, I thought it seemed likely that Open AI would win the market in that way, by being known as the clear best model. But that seems pretty unclear now! There are a few different models that are pretty similarly capable at this point.
Essentially, I think the reason Google was able to win search whereas the prospects look less obvious for Open AI is that they just have stronger competition!
To me, it just highlights the extent to which the big players at the time of Google's rise - Microsoft, Yahoo, ... Oracle maybe? - really dropped the ball on putting up strong competition. (Or conversely, Google was just further ahead of its time.)
From talking to people, the average user relies on memories and chat history, which is not easy to migrate. I imagine that's the part of the strategy to keep people from hopping model providers.
No one has a deep emotional connection with OpenAI that would impede switching.
At best they have a bit of cheap tribalism that might prevent some incurious people who don't care much about using the best tools noticing that they aren't.
IMHO "ChatGPT the default chatbot" is a meaningful but unstable first-mover advantage. The way things are apparently headed, it seems less like Google+ chasing FB, more like Chrome eating IE + NN's lunch.
OpenAI is a relatively unknown company outside of the tech bubble. I told my own mom to install Gemini on her phone because she's heard of Google and is more likely going to trust Google with whatever info she dumps into a chat. I can’t think of a reason she would be compelled to use ChatGPT instead.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
Not sure if Google+ is a good analogy, it reminds me more of the Netscape vs IE fight. Netscape sprinted like it was going to dominate the early internet era and it worked until Microsoft bundled IE with Windows for free.
LLMs themselves aren't the moat, product integration is. Google, Apple and Microsoft already have the huge user bases and platforms with a big surface area covering a good chunk of our daily life, that's why I think they're better positioned if models become a commodity. OpenAI has the lead now, but distribution is way more powerful in the long run.
I know a single person who uses ChatGPT daily, and only because their company has an enterprise subscription.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
> OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
The comparison of Chrome and IE is much more apt, IMO, because the deciding factor as other mentioned for social media is network effects, or next-gen dopamine algorithms (TikTok). And that's unique to them.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
That's not at all the same thing: social media has network effects that keep people locked in because their friends are there. Meanwhile, most of the people I know using LLMs cancel and resubscribe to Chat-GPT, Claude and Gemini constantly based on whatever has the most buzz that month. There's no lock-in whatsoever in this market, which means they compete on quality, and the general consensus is that Gemini 2.5 is currently winning that war. Of course that won't be true forever, but the point is that OpenAI isn't running away with it anymore.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
Google+ absolutely would have won, and it was clear to me that somebody at Google decided they didn't want to be in the business of social networking. It was killed deliberately, it didn't just peter out.
Even Alibaba is releasing some amazing models these days. Qwen 3 is pretty remarkable, especially considering the variety of hardware the variants of it can run on.
On the other hand...If you asked, 5-6-7 years ago, 100 people which of the following they used:
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with office / windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
About 95% of people know the Coca Cola brand, about 70% of soda drinkers in the US drink one of its sodas, and about 40% of all people in the US drink it.
Agreed on Google dominance. Gemini models from this year are significantly more helpful than anything from OAI.. and they're being handed out for free to anyone with a Google account.
Makes for a good underdog story! But OpenAI is dominating and will continue to do so. They have the je ne sais quoi. It’s therefore laborious to speak to it, but it manifests in self-reinforcing flywheels of talent, capital, aesthetic, popular consciousness, and so forth. But hey, Bing still makes Microsoft billions a year, so there will be other winners. Underestimating focused breakout leaders in new rapidly growing markets is as cliche as those breakouts ultimately succeeding, so even if we go into an AI winter it’s clear who comes out on top the other side. A product has never been adopted this quickly, ever. AGI or not, skepticism that merely points to conventional resource imbalances misses the big picture and such opinions age poorly. Doesn’t have to be obvious only in hindsight if you actually examine the current record of disruptive innovation.
> SamA is in a hurry because he's set to lose the race.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently.
Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
I probably need to clarify what I'm talking about, so that peeps like @JumpCrisscross can get a better grasp of it.
I do not mean the total market share of the category of businesses that could be labeled as "AI companies", like Microsoft or NVIDIA, on your first link.
I will not talk about your second link because it does not seem to make sense within the context of this conversation (zero mentions or references to market share).
What I mean is:
* The main product that OpenAI sells is AI models (GPT-4o, etc...)
* OpenAI does not make hardware. OpenAI is not in the business of cloud infrastructure. OpenAI is not in the business of selling smartphones. A comparison between OpenAI and any of those companies would only make sense for someone with a very casual understanding of this topic. I can think of someone, perhaps, who only used ChatGPT a couple times and inferred it was made by Apple because it was there on its phone. This discussion calls for a deeper understanding of what OpenAI is.
* Other examples of companies that sell their own AI models, and thus compete directly with OpenAI in the same market that OpenAI operates in (judging by their products and services), are Anthropic (w/ Claude), Google (w/ Gemini) and some others like Meta and Mistral with open models.
* All those companies/models, together, make up some market that you can put any name you want to it (The AI Model Market TM)
That is the market I'm talking about, and that is the one that I estimated to be 90%+ which was pretty much on point, as usual :).
> that is the market that I'm talking about, and that is the one that I (correctly, as usual) estimated to be around 90% [1][2]
Your second source doesn’t say what it’s measuring and disclaims itself as from its “‘experimental era’ — a beautiful mess of enthusiasm, caffeine, and user-submitted chaos.” Your first link only measures chatbots.
ChatGPT is a chatbot. OpenAI sells AI models, including via ChatGPT. Among chatbots, sure, 84% per your source. (Not “90%+,” as you stated.) But OpenAI makes more than chatbots, and in the broader AI model market, its lead is far from 80+ percent.
TL; DR It is entirely wrong to say the “market share of OpenAI is like 90%+.”
One, you suggested OP had not “looked at the actual numbers.” That implies you have. If you were just guessing, that’s misleading.
Two, you misquoted (and perhaps misunderstand) a statistic that doesn’t match your claim. Even in your last comment, you defined the market as “companies that sell their own AI models” before doubling down on the chatbot-only figure.
> not even in Puchal wildest dreams
Okay, so what’s your source? Because so far you’ve put forward two sources, a retracted one and one that measures a single product that you went ahead and misquoted.
I have no problem with 'OpenAI', so much as the individual running it and, more generally, rich financiers making the world worse in every capitalizable way and even some they can't capitalize on.
I asked Gemini today to replace the background of a very simple logo and it refused. ChatGPT did it no problem (though it did take a long time because apparently lots of people were doing image generation).
I guess Gemini just refused because of a poor filter for sensitive content. But still, it was annoying.
Literally the founder of Y Combinator all but outright called Sam Altman a conniving dickbag. That’s the consensus view advanced by the very man who made him.
This seems like misinformation, are you talking about how Sam left YC after OpenAI took off? What PG said was "we didn't want him to leave, just to choose one or the other"[1].
That says PG thinks Sam is clever. I don't think there's any moral judgement there. The statement I posted suggests PG likes Sam and would love to keep working with him.
Google is pretty far behind. They have random one off demos and they beat benchmarks yes, but try to use Google’s AI stuff for real work and it falls apart really fast.
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
So the non-profit retains control but we all know that Altman controls the board of the non-profit and I'd be shocked if he won't have significant stock in the new for-profit (from TFA: "we are moving to a normal capital structure where everyone has stock"). Which means that regardless of whether the non-profit has control on paper, OpenAI is now even better structured for Sam Altman's personal enrichment.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
We have seen how much power the board actually has after the firing of Altman - none.
Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining how this ~33 billion USD move benefits the public.
The explanation seemed pretty obvious to me: They set up a nonprofit to deliver an AI that was Open.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
The explanation was pretty clear and coherent: The CEO was no longer adhering to the mission of the non-profit (which the board was upholding).
But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.
So the board had no choice but to capitulate and leave.
Branding, and perhaps a demand from the judges. In practice it doesn't mean anything if/when they stuff the board with people who want to run it as a normal LLC.
If I pay £200,000 for a car, I received more value than I gave up, otherwise I wouldn't have given the owner £200,000 for her car. No reasonable person would say the car was "free"...
> If you use it, that means you received more value than you gave up. It's called consumer surplus
This is true for literally any transaction. Actually, it's true for any rational action. If you're being tortured, and you decide it's not worth it to keep your secrets hidden any longer, you get more than you give up when you stop being tortured.
It’s only true in theory and over a single transaction, not necessarily over time. The hack that VCs have exploited for decades now is subsidizing products and acquiring competition to eventually enshittify. In this case, when OpenAI dials up the inevitable enshittification, they’ll have gotten a ton of data from their users to use for their proprietary closed AI.
That's effectively every business that isn't a complete rent-seeking monopoly. It's not a very good measure.
edit: to be clear, it's not a bad thing - we should want companies that create consumer surplus. But that's the default state of companies in a healthy market.
It’s like a free beer, but it’s Bud Light, lukewarm, and your reaction to tasting the beer goes toward researching ways to make you appreciate the lukewarm Bud Light for its marginal value, rather than making that beer taste better or less unhealthy. They’ll try very hard to convince you that they have though. It parallels their approach to AI Alignment.
Or, alternatively, it’s much harder to fight with one hand behind your back. They need to be able to compete for resources and talent given the market structure, or they fail on the mission.
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
What would they have to do to have a chance supporting the mission they were incorporated and given preferential tax treatment for a decade to make happen? Certainly not this.
Isn’t Sam already very rich? I mean it wouldn’t be the first time a guy wanted to be even richer, but I feel like we need to be more creative when divining his intentions
Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
> Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate.
> Being rich results in a kind of limitation of scope for ambition.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
These are all activities he is engaging in to generate money through the company he has a stake in. None of those activities have a purpose other than selling the work of his company and presenting it as a good investment which is how he gets money.
Maybe he wants to use the money in some nebulous future way, subjugating all people in a way that deals with his childhood trauma or whatever. That’s also something rich people do when they need a hobby aside from gathering more money. It’s not their main goal, except when they run into setbacks.
People are not complicated when they are money hoarders. They might have had hidden depths once, but they are thin furrows in the ground next to the giant piles of money that define them now.
> These are all activities he is engaging in to generate money through the company he has a stake in. None of those activities have a purpose other than selling the work of his company and presenting it as a good investment which is how he gets money.
So he doesn't enjoy the attention? Prestige or power? Respect?
Are you Sam Altman? Because you're making a lot of assumptions on his psyche right now.
It seems a defining feature of nearly every single extremely rich person is their belief that they somehow are smarter than the filthy peasants, and so they decide to "educate" them in the sacred knowledge. This may take vastly different forms - genocide, war, trying to create a better government via bribes, creating a city from scratch, creating a new corporate "culture", publicly proselytizing their "do better" faith, writing books, teaching classes, etc.
St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.
Never understood his appeal. Lacks charisma. Not technically savvy relative to many engineers at OpenAI (I doubt he would pass their own intern interviews, even less so their FT interviews). Very unlikeable in person (comes off as fake for some reason, like a political plant). Who is vouching for this guy? When I met him, for some reason, he reminded me of Thiel. He is no Jobs.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
"Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler."
Imagine having a mission of “ensure[ing] that artificial general intelligence (AGI) benefits all of humanity” while also believing that it can only be trusted in the hands of the few
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
He's very clearly stating that trusting AI to a few hands was an old, naive idea that they have evolved from. Which establishes their need to keep evolving as the technology matures.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
Another possibility is that OpenAI thinks _none_ of the labs will achieve AGI in a meaningful timeframe, so they are trying to cash out with whatever you want to call the current models. There will only be one or two of those before investors start looking at the incredible losses.
The least speculative: PPUs will be converted from capped profit to unlimited profit equity shares at the benefit of PPU holders and at the expense of OpenAI the nonprofit. This is why they are doing it.
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity
They already fight transparency in this space to prevent harmful bias. Why should I believe anything else they have to say if they refuse to take even small steps toward transparency and open auditing?
Matt Levine on OpenAI's weird capped return structure in November 2023:
> And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
The explosion of PBC structured corps recently has me thinking it must just be a tax loophole at this point. I can't possibly imagine there is any meaningful enforcement around any of its restrictions or guidelines.
Not a loophole as they pay taxes (unlike non-profits) but a fig leaf to cover commercial activity with some feel-good label. The real purpose of PBC is the legal protection it may afford to the company from shareholders unhappy with less than maximal profit generation. It gives the board some legal space to do some good if they choose to but has no mandate like real non-profits which get a tax break for creating a public good or service, a tax break that can be withdrawn if they do not annually prove that public benefit to the IRS.
It’s not a tax thing, it’s a power thing. PBCs transfer power from shareholders to management as long as management can say they were acting for a public benefit.
The recent flap over ChatGPT's fluffery/flattery/glazing of users doesn't bode well for the direction that OpenAI is headed in. Someone at the outfit appeared to think that giving users a dopamine hit would increase time-spent-on-app or some other metric - and that smells like contempt for the intelligence of the user base and a manipulative approach designed not to improve the quality of the output, but to addict the user population to the ChatGPT experience. Your own personal yes-person to praise everything you do, how wonderful. Perfect for writing the scripts for government cabinet ministers to recite when the grand poobah-in-chief comes calling, I suppose.
What it really says is that if a user wants to control the interaction and get the useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
I agree that this is simply Altman extending his ability to control, shape and benefit from OpenAI. Yes, this is clearly (further) subverting the original intent under which the org was created - and that's unfortunate. But in terms of impact on the world, or even just AI safety, I'm not sure the governance of OpenAI matters all that much anymore. The "governance" wasn't that great after the first couple years and OpenAI hasn't been "open" since long before the board spat.
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
I'm not gonna get caught in the details, I'm just going to assume this is legalese cognitive dissonance to avoid saying "we want this to stop being an NFP because we want the profits."
Here’s a breakdown of the *key structural changes*, and an analysis of *potential risks or concerns*:
---
## *What Has Changed*
### 1. *OpenAI’s For-Profit Arm is Becoming a Public Benefit Corporation (PBC)*
* *Before:* OpenAI LP (limited partnership with a “capped-profit” model).
* *After:* OpenAI LP becomes a *Public Benefit Corporation* (PBC).
*Implications:*
* A PBC is still a *for-profit* entity, but legally required to balance shareholder value with a declared public mission.
* OpenAI’s mission (“AGI that benefits all humanity”) becomes part of the legal charter of the new PBC.
---
### 2. *The Nonprofit Remains in Control and Gains Equity*
* The *original OpenAI nonprofit* will *continue to control* the new PBC and will now also *hold equity* in it.
* The nonprofit will use this equity stake to fund “mission-aligned” initiatives in areas like health, education, etc.
*Implications:*
* This strengthens the nonprofit’s influence and potentially its resources.
* But the balance between nonprofit oversight and for-profit ambition becomes more delicate as stakes rise.
---
### 3. *Elimination of the “Capped-Profit” Structure*
* The old “capped-return” model (investors could only make ~100x on investments) is being dropped.
* Instead, OpenAI will now have a *“normal capital structure”* where everyone holds unrestricted equity.
*Implications:*
* This likely makes OpenAI more attractive to investors.
* However, it also increases the *incentive to prioritize commercial growth*, which could conflict with mission-first priorities.
---
## *Potential Negative Implications*
### 1. *Increased Commercial Pressure*
* Moving from a capped-profit model to unrestricted equity introduces *stronger financial incentives*.
* This could push the company toward *more aggressive monetization*, potentially compromising safety, openness, or alignment goals.
### 2. *Accountability Trade-offs*
* While the nonprofit “controls” the PBC, actual accountability and oversight may be limited if the nonprofit and PBC leadership overlap (as has been a concern before).
* Past board turmoil in late 2023 (Altman's temporary ousting) highlighted how difficult it is to hold leadership accountable under complex structures.
### 3. *Risk of “Mission Drift”*
* Over time, with more funding and commercial scale, *stakeholder interests* (e.g., major investors or partners like Microsoft) might influence product and policy decisions.
* Even with the mission enshrined in a PBC charter, *profit-driven pressures could subtly shape choices*—especially around safety disclosures, model releases, or regulatory lobbying.
---
## *What Remains the Same (According to the Letter)*
* OpenAI’s *mission* stays unchanged.
* The *nonprofit retains formal control*.
* There’s a stated commitment to safety, open access, and democratic use of AI.
You missed the part where OpenAI the nonprofit gives away the value that’s between capped profit PPUs and unlimited profit equity shares, enriching current PPUs at the expense of the nonprofit. Surely, this is illegal.
This sounds like a good middle ground between going full capitalism and non-profit. This way they can still raise money and also have the same mission, but a weakened one. You can't have everything.
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting people opt-in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
> Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs?
Because they're concerned about AI use the same way Google is concerned about your private data.
No, it's good that you feel this. Don't give up on tech, protest.
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once, but now we need one.
Not all non-profits are doomed. It's natural that the biggest companies will be the ones who have growth and profit as their primary goal.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
"We made the decision for the nonprofit to retain control of OpenAI after hearing from..." [CHIEF LAW ENFORCEMENT OFFICERS IN CALIFORNIA AND DELAWARE]
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
So where do I vote? How do I become a candidate to be a representative or a delegate of voters? I assume every single human is eligible for both, since OpenAI serves humanity?
I wonder if democracy is some kind of corporate speech homonym of some totally different concept I'm familiar with. Perhaps it's even an interesting linguistic case where a word is a homonym of its antonym?
Lenin and the Bolsheviks were also committed to the path of fully democratic government. As soon as the people are ready. In the interim we'll make all the decisions.
With 2, the real problem is that approximately 0% of the OpenAI employees actually believed in the mission. Pretty much every single one of them signed the letter to the board demanding that if the company's existence ever comes into conflict with humanity's survival, the company's existence comes first.
That's the reality of every organization if it survives long enough.
Checks-and-balances need to be robust enough to survive bad people. Otherwise, they're not checks-and-balances.
One of the tricks is a broad range of diverse stakeholders with enforcement power. For example, if OpenAI does anything non-open, you'd like organizations FSF, CC, and similar to be represented on their board and to be able to enforce those rules in court.
Does anyone truly believe Musk had benevolent intentions? But before we even evaluate the substance of that claim, we must ask whether he has standing to make it. In his court filing, Musk uses the word "nonprofit" 111 times, yet fails to explain how reverting OpenAI to a nonprofit structure would save humanity, elevate the public interest, or mitigate AI’s risks. The legal brief offers no humanitarian roadmap, no governance proposal, and no evidence that Musk has the authority to dictate the trajectory of an organization he holds no equity in. It reads like a bait and switch — full of virtue-signaling, devoid of actionable virtue. And he never had a contract or an agreement with OpenAI to keep it a non-profit.
Musk claimed fraud, but never asked for his money back in the brief. Could it be that his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space in order to fund xAI's Grok?
Musk claimed he donated $100 million; later, in a CNBC interview, he said $50 million. TechCrunch suggests it was far less.
Speaking of humanitarian, how about this 600 lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his resume in the coming year.
Non-profits have less regulation than publicly traded companies. Each quarterly filing is like a colonoscopy, with Sarbanes-Oxley rules etc. Non-profits just file a tax statement. Did you know the Church of Scientology is a non-profit?
If you are a materialist, the laws of physics are the problem.
But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a toll on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
> If you are a materialist, the laws of physics are the problem.
I'm a citizen, the laws of politics are the problem.
> Musk is a complex figure
Hogwash. He's greedy. There's nothing complex about that.
> and he often exacts a toll on the people around him
Yea it's a one way transfer of wealth from them to him. The _literal_ definition of a "toll."
> When he goes into "demon mode"
When he decides to lie, cheat and steal? Why do you strain so hard to lionize this behavior?
> you don't want to be in his way.
Name a billionaire whose way you would _like_ to be in. Elon Musk literally stops existing tomorrow; a person whose name you don't currently know will become known and take his place.
His place needs to be removed. It's not a function of his "personality" or "particulars." That's just goofy "temporarily embarrassed billionaire" thinking.
You attribute to personality what should be attributed to malice. You do this three times.
> Please calm down
I am perfectly calm.
> Please try to be charitable and curious rather than accusatory towards me.
In attempting to explain why my point of view has been misunderstood by you I also attempted to find a reason for it. I do not think my explanation makes you a bad person nor do I think you should be particularly confronted by it.
> In attempting to explain why my point of view has been misunderstood by you I also attempted to find a reason for it.
What have I misunderstood? Help me understand. What is the key point you want to make that you think I misunderstand?
>> (me) When he goes into "demon mode"
> When he decides to lie, cheat and steal? Why do you strain so hard to lionize this behavior?
I hope this is clear: I'm not defending Musk's actions. Above, I'm just using the phrase that Walter Isaacson uses: "demon mode". Have you read the book or watched an interview with Isaacson about it? The phrase is hardly flattering, and I certainly don't use it to lionize Musk. Is there some misunderstanding on this part?
>>>> (me) But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a toll on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
>> (me) Where in my comment do I lionize Musk?
> You attribute to personality what should be attributed to malice. You do this three times.
Please spell this out for me. Where are the three times I do this?
Also, let's step back. Is the core of this disagreement about trying to detect malice in Elon's head? Detecting malice is not easy. Malice may not even be present; many people rationalize actions in such a way so they feel like they are acting justly.
Even if we could detect "malice", wouldn't we want to assess what causes that malice? That's going to be tough to disentangle with him being on the Autism spectrum and also having various mental health struggles.
Along with most philosophers, I think free will (as traditionally understood) is an illusion. From my POV, attempting to blame Musk requires careful explanation. What do we mean? A short lapse of judgment? His willful actions? His intentions? His character? The overall condition of his brain? His upbringing? Which of these is Elon "in control of"? From the materialist POV, none.
From a social and legal POV, we usually draw lines somewhere. We don't want to defenestrate ethics or morality; we still have to find ways to live together. This requires careful thinking about justice: prevention, punishment, reintegration, etc. Overall, the focus shifts to policies that improve societal well-being. It doesn't help to pretend like people could have done otherwise given their situation. We _want_ people to behave better, so we should design systems to encourage that.
I dislike a huge part of what Musk has done, and I think more is likely to surface. Like we said earlier -- and I think we probably agree -- Musk is part of a system. Is he a cause or symptom? It depends on how you frame the problem.
Yup. Haven't used an OpenAI model for anything in 6+ months now, except to check the latest one and confirm that it is still hilariously behind Google/Anthropic.
I've gotten those messages, but the products recommended in both versions were the same, down to the model number, so I don't think it's strictly product placement. The products I was looking at were old oscilloscopes.
Quite possibly! Consistency in moderation is impossible [1]. We don't come close to seeing everything that gets posted here, and the explanation for most of these things is randomness (or the absence of time travel - https://news.ycombinator.com/item?id=43823271)
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
At the same time, though, we need you (<-- I don't mean you personally, but all commenters) to follow HN's rules regardless of what other commenters are doing.
Think of it like speeding tickets [2]. There are always lots of other drivers speeding just as bad (nay, worse) than you were, and yet it's always you who gets pulled over, right? Or at least it always feels that way.
- Abandoning the "capped profit" model (which limited investor returns) in favor of traditional equity structure
- Converting for-profit LLC to Public Benefit Corporation (PBC)
- Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
That is a very obvious thing for them to say though regardless of what they truly believe, because (a) it legitimizes removing the cap , making fundraising easier and (b) averts antitrust suspicions.
> "Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission."
One remarkable advantage of being a "Public Benefit Corporation" is this it:
> prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]
In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.
[1] https://en.wikipedia.org/wiki/Benefit_corporation
(b) is true but no so much (a). If investors thought it would be winner take all and they thought ClosedAI would win they'd invest in ClosedAI only and starve competitors of funding.
Actually I'm thinking in a winner-takes-all universe, the right strategy would be to spread your bets on as many likely winners as possible.
That's literally the premise of venture capital. This is a scenario where we're assuming ALL our bets will go to zero, except one which will be worth trillions. In that case you should bet on everything.
It's only in the opposite scenario (where every bet pays off with varying ROI) that it makes sense to go all-in on whichever bet seems most promising.
Y that sounds just like a certain startup incubator’s perspective on things.
I'm not surprised that they found a reason to uncap their profits, but I wouldn't try to infer too much from the justification they cooked up.
As a deeper issue on "justification", here is something I wrote related to this in 2001 on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:
https://pdfernhout.net/on-funding-digital-public-works.html#...
"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"
"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."
That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.
Enlightening read, especially your last paragraph which touches on the nuance of the situation. It’s quite easy to end up on one side or the other when it comes to charity/nonprofits because the mission itself can be very motivating and galvanizing.
>"Self-dealing [...] convert some government supported PhD thesis work [...] the public (including me) never gets full access to the results of the publicly-funded work [...]
Your 2001 essay isn't a good parallel to OpenAI's situation.
OpenAI wasn't "publicly funded" i.e. with public donations or government grants.
The non-profit was started and privately funded by a small group of billionaires and other wealthy people (Elon Musk donated $44 million; Reid Hoffman and others collectively pledged $1 billion of their own money).
They miscalculated in thinking their charity donations would be enough to recruit PhD machine-learning researchers and pay the high GPU costs to create the AI alternative to Google DeepMind, etc. Their 2015 assumptions massively underestimated future AI development costs, and now they look bad for trying to convert it to a for-profit enterprise. Instead of a full conversion to for-profit, they will now settle for keeping a subsidiary that's for-profit, somewhat like other entities structured as a non-profit that owns for-profit subsidiaries, such as Mozilla, the Girl Scouts, and Novo Nordisk.
Obviously with hindsight... if they had to do it all over, they would just create the reverse structure of creating the OpenAI for-profit company as the "parent entity" that pledges to donate money to charities. E.g. Amazon Inc is the for-profit that donates to Housing Equity Fund for affordable housing.
All 501(c)(3)s are funded in part by the public, by way of uncollected tax revenue on economically valuable activity.
>uncollected tax revenues for economically valuable activity.
Taxes are on profits not revenue. The for-profit OpenAI LLC subsidiary created in 2019 would have been the entity that owes taxes but it has been losing money and never made any profits to tax.
Yesterday's news about switching from for-profit LLC to for-profit PBC still leaves a business entity that's liable for future taxes on profits.
The contributors to the charity get a write-off too.
The value investor Mohnish Pabrai once talked about his observation that most companies with a moat pretend they don’t have one and companies without pretend they do.
A version of this is emphasized in the thielverse as well. Companies in heavy competition try to intersect all their qualities to appear unique. Dominant companies talk about their portfolio of side projects to appear in heavy competition (space flight, ed tech, etc).
I don't know how I feel about a tech bro being credited for an idea like this.
This is originally from The Art of War.
It's a specific observation that matches some very general advice from The Art of War, it's not like it's a direct quote from it.
Mohnish isn't a tech bro though, in my books. After selling his company, guy retreated away from the tech scene to get into Buffett-style value investing. And if you read his book, it's about glorifying the small businessmen running motels and garages, who invest bit by bit into the stock market.
It's quite true. The closest thing to a moat OpenAI has is the memory feature.
There need to be regulations about deceptive, indirect, purposefully ambiguous or vague public communication by corporations (or any entity). I'm not an expert in corporate law or finance, but the statement should be:
"Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"
followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."
AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity.
Not only is there infinite incentive to compete, but there are decreasing costs to do so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.
> AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity
The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
> The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.
This has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling.
> this has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling
To be precise, it assumes a low variability in cycle time and improvement per cycle. If everyone is subjected to the same limits, the first-mover advantage remains insurmountable. I’d also argue that whether there is a ceiling matters less than how high it is. If the first AGI won’t hit a ceiling for decades, it will have decades of fratricidal supremacy.
> I’d also argue that whether there is a ceiling matters less than how high it is.
And how steeply the diminishing returns curve off.
I find these assumptions curious. How so? What is the AGI going to do that captures markets? Even if it can take over all desk work, then what? Who is going to consume that? And furthermore (and perhaps more importantly), with it putting everyone out of work, who is going to pay for it?
I'm pretty sure today's models probably are capable of self-improving. It's just that they are not yet as good at self-improving as the combination of programmers improving them with the help of the models.
I think the foundation model companies are actually poorly situated to reach the leading edge of AGI first, simply because their efforts are fragmented across multiple companies with different specializations—Claude is best at coding, OpenAI at reasoning, Gemini at large context, and so on.
The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.
I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.
Nothing OpenAI is doing, or ever has done, has been close to AGI.
Agreed and, if anything, you are too generous. They aren’t just not “close”, they aren’t even working in the same category as anything that might be construed as independently intelligent.
I agree with you, but that's kind of beside the point. OpenAI's thesis is that they will work towards AGI, and eventually succeed. In the context of that premise, OpenAI still doesn't believe AGI would be winner-takes-all. I think that's an interesting discussion whether you believe the premise or not.
I agree with you
I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
Differentiating between AGI and non-AGI, if we ever get remotely close, would be challenging, but for now it's trivial. The defining feature of AGI is recursive self improvement across any field. Without self improvement, you're just regurgitating. Humanity started with no advanced knowledge or even a language. In what should practically be a heartbeat at the speed of distributed computing with perfect memory and computation power, we were landing a man on the Moon.
So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent. In fact you would prefer to feed it as minimal a series of the most primitive first principles as possible because it's certain that much of what we think is true is going to end up being not quite so -- the same as for humanity at any other given moment in time.
We could derive more basic principles, but this one is fundamental and already completely incompatible with our current direction. Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry, mistakes and all. It'd create a facsimile of impressive intelligence because no human would have a remotely comparable knowledge base, but it'd basically just be a glorified natural language search engine - frozen in time.
So then it’s something exponentially more capable than the most capable human?
> So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent.
The first 22 years of life for a “western professional adult” is literally dedicated to a giant bootstrapping info dump
Your quote is a non sequitur to your question. The reason you want to avoid massive data dumps is that there are guaranteed to be errors and flaws in them. See things like AlphaGo vs AlphaGo Zero. The former was trained on the entirety of human knowledge, the latter was trained entirely on itself.
The zero training version not only ended up dramatically outperforming the 'expert' version, but reached higher levels of competence exponentially faster. And that should be entirely expected. There were obviously tremendous flaws in our understanding of the game, and training on those flaws resulted in software seemingly permanently handicapping itself.
Minimal expert training also has other benefits. The obvious one is that you don't require anywhere near the material and it also enables one to ensure you're on the right track. Seeing software 'invent' fundamental arithmetic is somewhat easier to verify and follow than it producing a hundred page proof advancing, in a novel way, some esoteric edge theory of mathematics. Presumably it would also require orders of magnitude less operational time to achieve such breakthroughs, especially given the reduction in preexisting state.
I mostly agree with you. But if you think about it, mimicry is an aspect of intelligence. If I can copy you and do what you do reliably, regardless of the method used, that does capture an aspect of intelligence. The true game changer is a reflective AI that can automatically improve upon itself.
I'm not sure humans meet the definition here.
If you took the average human from birth and gave them only 'the most primitive first principles', the chance that they would have novel insights into medicine is doubtful.
I also disagree with your following statement:
> Right now we're trying to essentially train on the entire corpus of human writing. That is a defacto acknowledgement that the absolute endgame for current tech is simple mimicry
At worst it's complex mimicry! But I would also say that mimicry is part of intelligence in general and part of how humans discover. It's also easy to see that AI can learn things - you can teach an AI a novel language by feeding in a fairly small amount of words and grammar of example text into context.
I also disagree with this statement:
> One fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent
I don't think how something became intelligent should affect whether it is intelligent or not. These are two different questions.
> you can teach an AI a novel language by feeding in a fairly small amount of words and grammar of example text into context.
You didn't teach it; the model is still the same after you ran that. That is the same as a human following instructions without internalizing the knowledge: they forget it afterward and never learned what they performed. If that was all humans did then there would be no point in school etc., but humans do so much more than that.
As long as LLMs are like an Alzheimer's patient they will never become a general intelligence. And following instructions is not learning at all; learning is building an internal model of those instructions that is more efficient and general than the instructions themselves. Humans do that, and that is how we manage to advance science and knowledge.
It's not much help but when I read "AGI" I picture a fish tank with brains floating in it.
Interesting, but I'm not sure it's very instructive.
When it can start wars over resources.
Seems as good a difference as any
So now? Trump generated his tariff list with ChatGPT
On its own.
Please, keep telling people that. For my sake. Keep the world asleep as I take advantage of this technology which is literally General Artificial Intelligence that I can apply towards increasing my power.
Every tool is a technology that can increase one's power.
That is just what it wants you to think.
https://www.noemamag.com/artificial-general-intelligence-is-...
Here is a mainstream opinion about why AGI is already here, written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
Why does the Author choose to ignore the "General" in AGI?
Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video etc etc. Maybe ChatGPT could pass a high school chemistry test but it certainly couldn't complete the lab exercises. What we've built is a really cool "Algorithm for indexing generalized data", so you can train that Driving model very similarly to how you train the Text model without needing to understand the underlying data that well.
The author asserts that because ChatGPT can generate text about so many topics that it's general, but it's really only doing 1 thing and that's not very general.
There are people who can’t drive cars. Are they not general intelligence?
I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.
Generally speaking, anyone can learn to use any tool. This isn't true of generative AI systems which can only learn through specialized training with meticulously curated data sets.
People physically unable to use the tool can't learn to use it. This isn't necessarily my view, but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
> but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
Of course they can. We already have computer controlled car systems, the reason LLMs aren't used to drive them is because AI systems that specialize in text are a poor choice for driving - specialized driving models will always outperform them for a variety of technical reasons.
We have compute controlled automobiles, not LLM controlled automobiles.
That was my whole point. Maybe in theory an LLM could learn to drive a car, but they can't today because they don't physically have access to cars they could try to drive just like a person who can't learn to use a tool because they're physically limited from using it.
It doesn't make sense to connect a LLM to a car, that could never work because they are trained offline using curated data sets.
>can only learn through specialized training with meticulously curated data sets.
but so do I!
This isn't true. A curated data set can greatly increase learning efficiency in some cases, but it's not strictly necessary and represents only a fraction of how people learn. Additionally, all curated data sets were created by humans in the first place, a feat that language models could never achieve if we did not program them to do so.
Generality is a continuous value, not a boolean; turned out that "AGI" was poorly defined, and because of that most people were putting the cut-off threshold in different places.
Likewise for "intelligent", and even "artificial".
So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (geoguesser), road signs in every language, and how to design safe roads, than I'm ever likely to.
* It can also run Python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safely, anyway.
Text can be a carrier for any type of signal. The problem gets reduced to that of an interface definition. It's probably not going to be ideal for driving cars, but if the latency, signal quality, and accuracy are within acceptable constraints, what else is stopping it?
This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of driving general intelligence is incorrect in my view.
You can literally today prompt ChatGPT with API instructions to drive a car, then feed it images of a car's window outlooks and have it generate commands for the car (JSON schema restricted structured commands if you like). Text can represent any data thus yes, it is general.
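To make the schema-restricted idea concrete, here's a rough sketch using the Chat Completions structured-output option in the OpenAI Python SDK. The driving-command schema, field names, prompts, and model choice are hypothetical and purely illustrative; this is not a sane way to control a real vehicle, only a demonstration that text plus a schema constraint is a general interface:

```python
# Rough sketch of schema-constrained command generation via the OpenAI Python SDK.
# The "driving" schema and prompts are hypothetical and purely illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical command schema. Strict structured outputs require
# additionalProperties: false and every property listed in "required".
drive_command_schema = {
    "type": "object",
    "properties": {
        "steering_angle_deg": {"type": "number", "description": "negative = left, positive = right"},
        "throttle": {"type": "number", "description": "0.0 to 1.0"},
        "brake": {"type": "number", "description": "0.0 to 1.0"},
    },
    "required": ["steering_angle_deg", "throttle", "brake"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; must be a model that supports structured outputs
    messages=[
        {"role": "system", "content": "You emit one driving command as JSON and nothing else."},
        {"role": "user", "content": "Camera summary: clear, straight road, speed limit 40 km/h."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "drive_command", "strict": True, "schema": drive_command_schema},
    },
)

command = json.loads(response.choices[0].message.content)
print(command)  # e.g. {"steering_angle_deg": 0.0, "throttle": 0.3, "brake": 0.0}
```

Latency and reliability are obviously the real objections for a control loop like this, which is the point the replies below make.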
> JSON schema restricted structured commands if you like
How about we have ChatGPT start with a simple task like reliably generating JSON schema when asked to.
Hint: it will fail.
ChatGPT can write a working Python script to generate the JSON. It can call a library to do that.
But it cannot think on its own! Billions of years of evolution couldn't bring human-level 'AGI' to many, many species, and we think a mere LLM company could do so. AGI isn't just a language model; there are tons of things baked into DNA (the way the brain functions, its structure as it grows, etc.). It's not simply neuron interactions either. The complexity is mind-boggling.
The latest models are natively multimodal. Gemini, GPT-4o, Llama 4.
Same model trained on audio, video, images, text - not separate specialized components stitched together.
"AGI is already here, just wait 30 more years". Not very convincing.
> AGI is already here
Last time I checked, in an Anthropic paper, they asked the model to count something. They examined the logits and a graph showing how it arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?
That's exactly what I would expect from a lot of people. Post factum rationalization is a thing.
Exactly. A lot of these arguments end up dehumanizing people because our own intelligence doesn’t hit the definition
There is no post factum rationalization here. If you ask a human to think about how they do something before they do it, there's no post factum rationalization. If you ask an LLM to do the same, it will give you a different answer. So, there is a difference. It's all about having knowledge of your internal state and being conscious of your actions and how you perform them, so you can learn from that knowledge. Without that, there is no real intelligence, just statistics.
If you ask a human to think about how to do a thing, before they do it, then you will also get a different answer.
There’s a good reason why schools spend so much time training that skill!
Yes, humans can post-rationalize. But an LLM does nothing but post-rationalize: as you yourself admitted, humans can think it through beforehand and then actually do what they planned, while an LLM won't follow that plan mentally.
It is easy to see why, since the LLM doesn't communicate what it thinks; it communicates what it thinks a human would communicate. A human would explain their inner process, and then go through that inner process. An LLM would explain a human's inner process, and then generate a response using a totally different process.
So while its true that humans doesn't have perfect introspection, the fact that we have introspection about our own thoughts at all is extremely impressive. An LLM has no part that analyzes its own thoughts the way humans do, meaning it has no clue how it thinks.
I have no idea how you would even build introspection into an AI, like how are we able to analyze our own thoughts? What is even a thought? What would this introspection part of an LLM do, what would it look like, would it identify thoughts and talk about them the way we do? That would be so cool, but that is not even on the horizon, I doubt we will ever see that in our lifetime, it would need some massive insight changing the AI landscape at its core to get there.
But, once you have that introspection I think AGI will happen almost instantly. Currently we use dumb math to train the model, that introspection will let the model train itself in an intelligent way, just like humans do. I also think it will never fully replace humans without introspection, intelligent introspection seems like a fundamental part to general intelligence and learning from chaos.
... that was written in mid-2023. So that opinion piece is trying to redefine 2 year old LLMs like GPT-4 (pre-4o) as AGI. Which can only be described as an absolutely herculean movement of goalposts.
I would argue that this is a fringe opinion that has been adopted by a mainstream scholar, not a mainstream opinion. That or, based on my reading of the article, this person is using a definition of AGI that is very different than the one that most people use when they say AGI.
Their multimodal models are a rudimentary form of AGI.
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
https://arxiv.org/abs/2311.02462
Ah! Like Full Self Driving!
Goalpost moving.
Nothing to do with moving the goalposts.
This is current research. The classification of AGI systems is currently being debated by AI researchers.
It's a classification system for AGI, not a redefinition. It's a refinement.
Also there is no universally accepted definition of AGI in the first place.
Thank you.
"AGI" was already a goalpost move from "AI" which has been gobbled up by the marketing machine.
AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, we don't wait for somebody to give us a task and go do it as if that is our sole existence. We live with our thoughts, blank out, watch TV, read books etc. What we currently have and possibly in the next century as well will be nothing close to an actual AGI.
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table is dependent on their labor exchanged for money. I can guarantee for a fact, that work will still be there but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it because even with UBI people still want to work as experiments have shown. But with that, there won't be a majority of work especially paper pushing mid level bs like managers on top of managers etc.
If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.
That's just how I feel.
I think there's a useful distinction that's often missed between AGI and artificial consciousness. We could conceivably have some version of AI that reliably performs any task you throw at it consistently with peak human capabilities, given sufficient tools or hardware to complete whatever that task may be, but lacks subjective experience or independent agency; I would call that AGI.
The two concepts have historically been inextricably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.
Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.
It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.
Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.
One sci-fi example could be based on the replicators from Star Trek, which are able to synthesize any meal on demand.
It is not hard to imagine a "cooking robot" as a black box that — given the appropriate ingredients — would cook any dish for you. Press a button, say what you want, and out it comes.
Internally, the machine would need to perform lots of tasks that we usually associate with intelligence, from managing ingredients and planning cooking steps, to fine-grained perception and manipulation of the food as it is cooking. But it would not be conscious in any real way. Order comes in, dish comes out.
Would we use "intelligent" to describe such a machine? Or "magic"?
I immediately thought of Star Trek too, I think the ship's computer was another example of unconscious intelligence. It was incredibly capable and could answer just about any request that anyone made of it. But it had no initiative or motivation of its own.
Regarding "We could conceivably have some version of AI that reliably performs any task you throw at it consistently" - it is very clear to anyone who just looks at the recent work by Anthropic analyzing how their LLM "reasons" that such a thing will never come from LLMs without massive unknown changes - and definitely not from scale - so I guess the grandparent is absolute right that openai is nor really working on this.
It isn't close at all.
That's an important distinction.
A machine could be super intelligent at solving real world practical tasks, better than any human, without being conscious.
We don't have a proper definition of consciousness. Consciousness is infinitely more mysterious than measurable intelligence.
> AGI would mean something which doesn't need direction or guidance to do anything
There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.
https://arxiv.org/pdf/2311.02462
The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...
This constant redefinition of what AGI means is really tiring. Until an AI has agency, it is nothing but a fancy search engine/auto completer.
I agree. AGI is meaningless as a term if it doesn't mean completely autonomous agentic intelligence capable of operating on long-term planning horizons.
Edit: because if "AGI" doesn't mean that... then what means that and only that!?
> Edit: because if "AGI" doesn't mean that... then what means that and only that!?
"Agentic AI" means that.
Well, to some people, anyway. And even then, people are already arguing about what counts as agency.
That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.
I wonder, did people argue if "horseless carriages" were really carriages? And "aeroplane" how many argued that "plane" didn't suit either the Latin or Greek etymology for various reasons?
We never did rename "atoms" after we split them…
And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.
Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.
It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.
> Until an AI has agency, it is nothing but a fancy search engine/auto completer.
Stepping back for a moment - do we actually want something that has agency?
Who is "we"?
Vulture Capitalists, obviously
Unless you can define "agency", you're opening yourself to being called nothing more than a fancy chemical reaction.
It's not a redefinition, it's a refinement.
Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.
That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?
What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.
> fancy search engine/auto completer
That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not autocomplete.
> It's not a redefinition, it's a refinement
It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.
> By the same reasoning, so is a person. They are just auto completing words when they speak.
We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.
[1] http://metaphors.iath.virginia.edu/metaphors/24583
[2] https://www.frontiersin.org/journals/ecology-and-evolution/a...
Technically it is a refinement, as it distinguishes levels of performance.
The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero shot and few shot capabilities.
> We have no evidence of this.
That's my point. Humans are not "autocompleting words" when they speak.
> Technically it is a refinement, as it distinguishes levels of performance
No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.
> That's my point. Humans are not "autocompleting words" when they speak
Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.
LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.
> Gluten-free means free of gluten.
Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.
> Humans are not. LLMs are.
My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.
> That's a binary classification. AGI systems can have degrees of performance and capability
The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.
> if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans
No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.
[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
Why are you linking a Wikipedia page like it's ground zero for the term? Especially when neither of the articles the page links to in justifying that definition treats the term as a binary accomplishment.
The g in AGI is General. I don't know in what world you think Generality isn't a spectrum, but it sure as hell isn't this one.
That's right, and the Wikipedia page refers to the classification system:
"A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman"
In the second paragraph:
"Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved."
The entire article makes it clear that the definitions and classifications are still being debated and refined by researchers.
Then you are simply rejecting any attempts to refine the definition of AGI. I already linked to the Google DeepMind paper. The definition is being debated in the AI research community. I already explained that definition is too limited because it doesn't capture all of the intermediate stages. That definition may be the end goal, but obviously there will be stages in between.
> No, you can't, unless you're pre-supposing that LLMs work like human minds.
You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and the conceptual internal representations. These systems are deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we can just say that people are autocompleting words when they speak.
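To make that concrete, here is a minimal sketch (plain NumPy, with random stand-in matrices rather than any real model's learned weights) of single-head scaled dot-product attention, the core operation the "autocomplete" framing glosses over: each token's representation gets mixed with information from the whole context.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: each position mixes information
    from every other position, weighted by similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise similarity between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the sequence
    return weights @ V                                    # context-dependent representations

# Toy example: 4 tokens with 8-dimensional embeddings (random stand-ins for learned projections).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8): every token's output now depends on the whole context
```

That is obviously nothing like a full transformer, but it already shows why "looks up the next word" undersells what is going on.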
> I already linked to the Google DeepMind paper. The definition is being debated in the AI research community
Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.
> obviously there will be stages in between
We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
> If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and the conceptual internal representations. These systems are deep learning models with hundreds of layers and trillions of weights
Fair enough, granted.
> Sure, Google wants to redefine AGI
It is not a redefinition. It's a classification for AGI systems. It's a refinement.
Other researchers are also trying to classify AGI systems. It's not just Google. Also, there is no universally agreed definition of AGI.
> We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
Generalization is a formal concept in machine learning. There can be degrees of generalized learning performance. This is actually measurable. We can compare the performance of different systems.
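As a rough illustration of "measurable" (a toy sketch assuming scikit-learn and a synthetic classification task, not any particular AGI benchmark), the gap between performance on seen and unseen data is one simple, quantifiable notion of generalization, and the same held-out score lets you compare systems:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for "a system": any model trained on some data and scored on unseen data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))

# The seen/unseen gap is one crude, measurable notion of generalization;
# comparing test_acc across different systems on the same held-out tasks is another.
print(f"train={train_acc:.3f} test={test_acc:.3f} gap={train_acc - test_acc:.3f}")
```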
It seems like you believe AGI won't come for a long time, because you don't want that to happen.
The Turing test was successful. Pre-ChatGPT, I would not have believed that it would happen so soon.
LLMs ain't AGI, sure. But they might be an essential part, and the missing pieces may already have been found, just not put together yet.
And there will always be plenty of work. Distributing resources might require new approaches, though.
While I also hold a peer comment's view that the Turing Test is meaningless, I would further add that even that has not been meaningfully beaten.
In particular, we redefined the test to make it passable. In Turing's original concept, the competent investigator and the participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to pass. Instead, modern takes have paired incompetent investigators with participants colluding with the machine, probably in an effort to be part of 'something historic'.
In "both" (probably more, referencing the two most high profile - Eugene and the large LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. And the tests are typically time constrained by woefully poor typing skills (this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of a few words each.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that.
I mean, I am pretty sure that I won't be fooled by a bot if I get the time to ask the right questions.
And I have not looked into it (I also don't think the test has much relevance), but fooling the average person sounds plausible by now.
Now, sounding plausible (rather than being plausible) is what LLMs are optimized for. Still, I would not have thought ten years ago that we would get this far this quickly. So I am very hesitant about the future.
> The Turing test was successful.
The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since early 1990s.
It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.
"but actually turns out to have no practical impact on anything whatsoever"
Bots that are now almost indistinguishable from humans won't have a practical impact? I am sceptical. And not just because of scammers.
> I can guarantee for a fact, that work will still be there but will it be equitable? Available to everyone?
I don't think there has ever been a time in history when work has been equitable and available to everyone.
Of course, that isn't to say that AI can't make it worse than it is now.
> AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, ...
Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before
> Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before
Literally everything that's been invented.
I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.
> legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles
To be fair, there is a section of the population whose useful intelligence can roughly be summed up as that or worse.
I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:
- Processing visual data and classifying objects within their field of vision.
- Processing auditory data, identifying audio sources and filtering out noise.
- Maintaining an on-going and continuous stream of thoughts and emotions.
- Forming and maintaining complex memories on long-term and short-term scales.
- Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.
I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans could ever, but there is no integrated AGI product that has all these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."
> It conflates "intelligence" with fact-retention and communicative ability
No, I’m using useful problem solving as my benchmark. There are useless forms of intelligence. And that’s fine. But some people have no useful intelligence and show no evidence of the useless kind. They don’t hit any of the bullets you list, there just isn’t that curiosity and drive and—I suspect—capacity to comprehend.
I don’t think it’s intrinsic. I’ve seen pets show more curiosity than some folk. But due to nature and nurture, they just aren’t intelligent to any material stretch.
Remember however that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"
It does have some weasel words around value-aligned and safety-conscious which they can always argue but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do in retrospect.
They will just define away all of those terms to make that not apply.
Who defines "value-aligned, safety-conscious project"?
"Instead of our current complex non-competing structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal competing structure where ..." is all it takes
Most likely the same people who define "all natural chicken" - the company that creates the term.
I actually lol-ed at that. It's like asking the inventor of a religion who goes to heaven.
AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...
How would an AGI prevent others from competing? Sincere question. That seems like something that ASI would be capable of. If another company released an AGI, how would the original stifle it? I get that the original can self-improve to try to stay ahead, but that doesn't necessarily mean it self-improves the best or most efficiently, right?
AGI used to be synonymous with ASI; it's still unclear to me it's even possible to build a sufficiently general AI - that is, as general as humans - without it being an ASI just by virtue of being in silico, thus not being constrained in scale or efficiency like our brains are.
Well, it could pretend to be playing 4d chess and meanwhile destroy the economy and from there take over the world.
If it was first, it could have self-improved more, to the point that it has the capacity to prevent competition, while the competition does not have the capacity to defend itself against superior AGI. This all is so hypothetical and frankly far from what we're seeing in the market now. Funny how we're all discussing dystopian scifi scenarios now.
Homo Sapiens wiped out every other intelligent hominid and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.
Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities directly to outsiders, but uses it itself and conquers every other field of endeavor.
That's always been pretty overtly the winner-take-all AGI scenario.
You can say the same thing about big companies hiring all the smart people and somehow we think that's ok.
AGI might not be fungible. From the trends today it's more likely there will be multiple AGIs with different relative strengths and weakness, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.
AGI can be winner take all. But winner take all AGI is not aligned with the larger interests of humanity.
Modern corporations don't seem to care about humanity...
Or they consider themselves to have low(er) chance of winning. They could think either, but they obviously can't say the latter.
OpenAI is winning in a similar way that Apple is winning in smartphones.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Big difference - Apple makes billions from smartphones, getting most of the industry's profits, which makes it hard to compete with.
OpenAI loses billions and is at the mercy of getting new investors to fund the losses. It has many plausible competitors.
> I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Well Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek V3's distilled models are available for download, so anyone with a (~4 year old or newer) Mac or gaming PC can run them.
If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
> ban all non-US LLM providers
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
I don't have to imagine. There are various US bills trying to achieve this ban. Here is one of them:
https://www.theregister.com/2025/02/03/us_senator_download_c...
One of them will eventually pass given that OpenAI is also pushing for protection:
https://futurism.com/openai-ban-chinese-ai-deepseek
The bulk of the money comes from enterprise users. Just call the 500 CEOs on the S&P 500 list and enforce it via some "cyber data safety" rule from the SEC or something like that.
everyone will roll over if all large public companies roll over (and they will)
rather than coming up with a thorough definition, legislation will likely target individual companies (DeepSeek, Alibaba Cloud, etc)
IE once captured all of the value in browserland, with even much higher mindshare and market dominance than OpenAI has ever had. Comparing with Apple (= physical products) is Apples to oranges (heh).
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work. Barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.
MS, Google and Apple and Meta have gigantic levers to pull and get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is (it's better) for anything besides image generation), people are going to start pressing them. And good luck to OpenAI fighting that.
Apple is not the right analogy. OpenAI has first mover advantage and they have a widely recognized brand name — ChatGPT — and that’s kind of it. Anyone (with very deep pockets) can buy Nvidia chips and go to town if they have a better or equivalent idea. There was a brief time (long before I was born) when “Univac” was synonymous with “computer.”
Companies that are contractors with the US government already aren't allowed to use DeepSeek, even if it's an airgapped R1 model running on our own hardware. Legal told us we can't run any distills of it or anything. I think this is very dumb.
Switching between Apple and Google/Android ecosystems is expensive and painful.
Switching from ChatGPT to the many competitors is neither expensive nor painful.
Not saying this is OpenAI's case, but every monopolist claims they are not a monopolist...
Even if they think it will be a winner-take-all market, they won't say it out loud. It would be begging for antitrust lawsuits.
I read this line as : we were completely off the chart from a corp structure standpoint.
We need to get closer to the norm and give shares of a for-profit to employees in order to create retention.
The level of arrogance needed to think they'd be the only company to come up with AI/AGI is staggering.
> I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:
Yeah; and:
Seems like there being nary a sliver of daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, and Grok 3 really put things in perspective for them! Not to mention @Gork, aka Grok 3.5...
Lmaoing at their casual use of AGI as if them or any of their competitors are anywhere near it.
If you change the definition of AGI, we're already there!
Damn, didn't know my Casio FX-300 was AGI, good to know!
“Appear weak when you are strong, and strong when you are weak.”
― Sun Tzu
“Fine, we’ll keep the non-profit, but we’re going to extract the fuck out of the for-profit”
Quite the arc from the original organization.
"It's not you, it's me."
to me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete, which means Artificial Specialized Intelligence, NOT general intelligence!
and that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."
This is another nail in the coffin
That, or they don't care if they get to AGI first, and just want their payday now.
Which sounds pretty in-line with the SV culture of putting profit above all else.
If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI (and the only kind anyone really cares about is the kind with exponential self-improvement) isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common around AI discussions.
Define "imminent".
If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.
Five years to ten years? Harder to predict.
Imminent means "in a timeframe meaningful to the individual equity holders this change is about."
The window there would at _least_ include the next 5 years, though obviously not ten.
I don't read it that way. It reads more like AGIs will be like very smart people and rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein although they were both generally intelligent.
AGI is matter of when, not if.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks is not that far off.
Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made, a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
But the Moore's law like growth in compute/$ chugs along, boom or bust.
AGI is matter of when, not if
Probably true, but the statement would also be true if "when" is 2308, which would defeat its purpose. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'd have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near. I think saying "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, and you agree to do the same if wrong.
If you look at Our World in Data's "Test scores of AI systems on various capabilities relative to human performance" https://ourworldindata.org/grapher/test-scores-ai-capabiliti...
you can see a pattern of fairly steady progress in different aspects, like they matched humans for image recognition around 2015 but 'complex reasoning' is still much worse than humans but rising.
Looking at the graph, I'd guess maybe five years before it can do all human skills which is roughly AGI?
I've got a personal AGI test of being able to fix my plumbing, given a robot body. Which they are way off just now.
It is already here, kinda. I mean, look at how it passes the bar exam, solves math-olympiad-level questions, generates video, art, music. What else are you looking for? It has already penetrated the job market, causing significant disruption in programming. We are not seeing flying cars, but we are witnessing things that weren't even talked about around the campfire. Seriously, even four years ago, would you have thought all of this would happen?
> What else are you looking for?
To begin with, systems that don't tell people to use Elmer's glue to keep the cheese from sliding off the pizza, displaying a fundamental lack of understanding of... everything. At minimum it needs to be able to reliably solve hard, unique, but well-defined problems the way a group of the most cohesive, intelligent people could. It's certainly not AGI until it can do a better job than the most experienced, talented, and intelligent knowledge workers out there.
Every major advancement (which LLMs certainly are) has caused some disruption in the fields it affected, but that isn't useful criteria that can differentiate between "crude but useful tool" from "AGI".
The majority of people on earth don't solve hard, unique, but well-defined problems, do they? I don't expect AGI to solve one of Hilbert's list of problems (yet). Your definition of AGI is a bit too imposing. That said, I believe you would get better answers from an LLM than most of the answers you would get from an average human. IMHO the trend is obvious, and we will see whether it stalls or keeps pace.
AGI is here?????! Damn, me, and every other human, must have missed that news… /s
Such things happen.
I think this is right but also missing a useful perspective.
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening" - surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime" - but that has been the outcome of every example of this that I can think of.
I won't go too far out on this limb, because I kind of agree with you... but to be fair -- 1980s-1990s nanotech did not attract this level of investment, nor was it visible to ordinary people, nor was it useful to anyone except researchers and grant writers.
It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.
Investment was completely different at the time and interest rates played a huge part of that. VC also wasn't that old in 86.
> Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more,
I thought this was a "we know we can't" thing rather than a "not with current technology" thing?
Specific cases are probably impossible, though there's always hope. After all, to use the example the nanotech people loved: there are literal assemblers all around you. Whether we can have a singular device that can build anything (probably not - energy limits and many many other issues) or factories that can work on the atomic scale (maybe) is open, I think. The idea of little robots was kind of visibly silly even at the peak.
The idea of scaling up LLMs and hoping is .. pretty silly.
Every consumer has very useful AI at their fingertips right now. It's eating the software engineering world rapidly. This is nothing like nanotech in the 80s.
Sure. But fancy autocomplete for a very limited industry (IT) plus graphics generation and a few more similar items, are indeed useful. Just like "nanotech" coating of say optics or in the precise machinery or all other fancy nano films in many industries. Modern transistors are close to nano scale now, etc.
The problem is that the distance between a nano thin film or an interesting but ultimately rigid nano scale transistor and a programmable nano level sized robot is enormous, despite similar sizes. Same like the distance between an autocomplete heavily relying on the preexisting external validators (compilers, linters, static code analyzers etc.) and a real AI capable of thinking is equally enormous.
Progress is not just a function of technical possibility (even if it exists); it is also a function of economics.
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get this far. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk are already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resources.
There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space related: we could not replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.
From a purely economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economics; that does happen (as with SpaceX), but it is also not guaranteed, as with fusion.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations ( and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
> AGI is matter of when, not if.
LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day
I agree that LLMs are hurting the general population’s capacity to think (assuming they use it often. I’ve certainly noticed a slight trend among students I’ve taught to use less effort, and myself to some extent).
I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.
On the other hand, dumbing down the population also lowers the bar for AGI. /s
Could you elaborate on the progress that has been made? To me, it seems only small/incremental changes are made between models with all of them still hallucinating. I can see no clear steps towards AGI.
https://reddit.com/r/ThatsInsane/comments/1jyja0s/2_years_di...
> AGI is matter of when, not if
We have zero evidence for this. (Folks said the same shit in the 80s.)
"X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite uncapped growth in capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.
Yeah, it is a pretty good bet that any real process that produces something that looks like an exponential curve over time is the early phase of a sigmoid curve, because all real processes have constraints.
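A tiny illustration of that point (toy numbers only, not a model of any real capability metric): an exponential and a capped logistic curve are nearly indistinguishable early in their growth.

```python
import numpy as np

t = np.linspace(0, 5, 11)
exponential = np.exp(t)

# Logistic curve with carrying capacity K: well before the midpoint it is approximately exp(t).
K, midpoint = 1000.0, np.log(1000.0)
logistic = K / (1 + np.exp(-(t - midpoint)))

for ti, e, s in zip(t, exponential, logistic):
    print(f"t={ti:4.1f}  exp={e:8.1f}  logistic={s:8.1f}")
# The two columns track each other closely until growth approaches the cap,
# which is exactly why early data can't distinguish the two curves.
```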
And if we apply the 80/20 rule, feels like we're at about 50-75% right now. So we're almost getting close to done with the easy parts. Then come the hard parts.
> AGI is matter of when, not if.
I want to believe, man.
I don’t think that’s a safe foregone conclusion. What we’ve seen so far is very very powerful pattern matchers with emergent properties that frankly we don’t fully understand. It very well may be the road to AGI, or it may stop at the kind of things we can do in our subconscious—but not what it takes to produce truly novel solutions to never before seen problems. I don’t think we know.
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.
The intersection of the two seems to be quite hard to find.
At the state that we're in the AIs we're building are just really useful input/output devices that respond to a stimuli (e.g., a "prompt"). No stimuli, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
Which of these statements do you disagree with?
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
You could use the exact same argument to argue the opposite. Simply change the first premise to "Super intelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.
So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.
That’s not how logic works. The GP is applying the precautionary principle: when there’s even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.
Changing the premise to "superintelligence is the only thing that can save us" doesn’t invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way, the real question is which scenario is more likely, not whether the risk-based logic is flawed.
Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US admin are an example of us doing the contrary currently.
Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.
Also, @tsimionescu's reasoning is spot on, and exactly how logic works.
Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger for example than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).
I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.
It literally isn't: changing/reversing a premise while not addressing the point that was made is not a valid way to counter the initial argument logically.
Just as your proposition that any "small" chance justifies investing "everything" disregards the same argument regarding the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering the argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.
The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.
ASI to humans, is like humans to rats or ants.
Isn't the question you're posing basically Pascals wager?
I think the chance they're going to create a "superintelligence" is extremely small. That said I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades or even ever.
> Predicting the future is famously difficult
That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"
We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.
> I think the chance they're going to create a "superintelligence" is extremely small.
I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that for some reason the human brain achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.
Most forecasters on prediction markets are predicting AGI within a decade.
Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?
> that progress on AI will just stop for some reason
Yeah it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.
Is it certain that every mystery will be solved eventually?
Aren't we already passed 1/1000th of the performance we would classify as superintelligence?
There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.
We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.
I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".
I agree, the 1/1000 ratio was a bit too extreme. Like you said, almost any way that's measured it's probably fair to say chatgpt is already there.
Yes, this is literally Pascal's wager / Pascal's mugging.
> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.
Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.
>And we have no way of knowing the true chance of superintelligence arising from the current path of AI research
What makes people think that future advances in AI will continue linearly instead of falling off and plateauing? Don't all breakthrough technologies develop quickly at the start and then fall off in improvements once all the 'easy' gains have been made? In my opinion AI and AGI are like the car and the flying car. People saw continuous improvements in cars and thought this rate of progress would continue indefinitely, leading to cars that could not only drive but fly as well.
There are lots of data points of previous AI efforts not creating super intelligence.
You bring up the example of an extinction-level asteroid hurling toward earth. Gee, I wonder if this superintelligence you’re deathly afraid of could help with that?
This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.
How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?
If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.
I think of the invention of ASI as introducing a new artificial life form.
The new life form will be to humans, as humans are to chimps, or rats, or ants.
At this point we have lost control of the situation (the planet). We are no longer at the top of the food chain. Fingers crossed it all goes well.
It's an existential gamble. Is the gamble worth taking? No one knows.
Since the internet's inception there have been a few wrong turns taken by the wrong people (and lizards, ofc) behind the wheel, leading to the sub-optimal, enshittified™ experience we have today. I think the GP just doesn't want to live through that again.
You mean right turns. The situation that we have today is the one that gets most rewarded. A right move is defined as one that gets rewarded.
> Superintelligence poses an existential threat to humanity
I disagree at least on this one. I don't see any scenario where superintelligence comes into existence, but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine-intelligence would settle there. It's a vanishingly low chance event. It considerably changes the later 1-in-n part of your comment.
So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?
> There are almost no statements about the future which I'd assign this level of confidence to.
You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.
> I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.
Any of the signatories here match your criteria? https://safe.ai/work/statement-on-ai-risk#signatories
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
Sounds a little too much like, "It's not AGI today ergo it will never become AGI"
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
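As a rough sketch of that compounding argument (the boost percentages are the hypothetical ones from the comment above, not measured numbers):

```python
# Illustrative only: assumed per-generation productivity boosts for a group with exclusive
# access to its own latest models, versus everyone else stuck one generation behind.
boosts_insider = [0.20, 0.25, 0.30, 0.35]
boosts_everyone_else = [0.20, 0.20, 0.20, 0.20]

insider, outsider = 1.0, 1.0
for gen, (b_in, b_out) in enumerate(zip(boosts_insider, boosts_everyone_else), start=1):
    insider *= 1 + b_in
    outsider *= 1 + b_out
    print(f"after generation {gen}: relative advantage {insider / outsider:.2f}x")
# Even modest per-generation differences in boost compound into a widening gap.
```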
> Does the current AI give productivity benefits to writing code? Probably.
> If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
That’s a bit of a stretch, generative AI is least capable of helping with novel code such as needed to make AGI.
If anything I’d expect companies working on generative AI to be at a significant disadvantage when trying to make AGI because they’re trying to leverage what they are already working on. That’s fine for incremental improvement, but companies rarely ride one wave of technology to the forefront of the next. Analog > digital photography, ICE > EV, coal mining > oil, etc.
> At the state that we're in the AIs we're building are just really useful input/output devices that respond to a stimuli (e.g., a "prompt"). No stimuli, no output.
That was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix of the tragedy of the commons - we gain better understanding with every passing day, yet still don't understand exactly why LLMs perform as well as they do emergently rather than by being engineered that way - and future progress.
Do you think you could find a way around access boundaries and masquerade your Create/Update requests as Reads in the logging system that monitors them, if you had superintelligence?
> Progress of technology should not be artitrarily held back to protect automateable jobs though. We need to adapt.
So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
Dependent on UBI, existing in a basic pod, eating rations of slop.
Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).
There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.
> There's so much to do, explore and learn.
Hobbies, hanging out with friends, reading, etc. That's basically it.
Probably no international travel.
It will be like a simple retirement on a low income, because in a socialist system the resources must be rationed.
This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve. No purpose. Drug use, debauchery, depression, violence, degeneracy, gangs.
It will be a true idiocracy. No Darwinian selection pressures, unless the system enforces eugenics and population control.
Wait, wait, wait. Our society's gonna fall apart due to a lack of Darwinian selection pressure? What do you think we're selecting for right now?
Seems to me like our culture treats both survival and reproduction as an inalienable right. Most people would go so far as to say everyone deserves love, "there's a lid for every pot".
> This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve.
Maybe, if the only flavor of ambition you're aware of is that of SV types. Plenty of people have found achievement and meaning before and alongside the digital revolution world.
I mean common people will be affected just as badly as SV types. It will impact everyone.
Jobs, careers, real work, all replaced by machines which can do it all better, faster, cheaper than humans.
Young people with modest ambitions to learn and master a skill and contribute to society, and have a meaningful life. That can be blue collar stuff too.
How will children respond to the question - "What do you want to be when you grow up?"
They can join the Amish communities where humans still do the work.
> So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
This was the fear when the cotton gin was invented. It was the fear when cars were created. The same complaint happened with the introduction of electronic, automated telephone switchboards.
Jobs change. Societies change. Unemployment worldwide, is near the lowest it has ever been. Work will change. Society will eventually move to a currency based on energy production, or something equally futuristic.
This doesn't mean that getting there will be without pain.
Where did all the work-horses go? Why is there barely a fraction of the population there once was? Why did they not adapt and find niches where they had a competitive advantage over cars and machines?
The horses weren't the market the economy is selling to, the people are. Ford figured out that people having both time and money is best for the economy. We'll figure out that having all the production capabilities but none of the market benefits nobody.
The goal for AGI/ASI is to create machines that can do any job much faster, better, and cheaper than humans. That's the ultimate end point of this progress.
The economic value of human labour will drop to zero. That would be an existential threat to our civilization.
> are just really useful input/output devices that respond to a stimuli
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
alignmentforum.com
Lots of people in academia and industry are calling for more oversight. It's the US government that's behind. Europe's AI Act bans applications with unacceptable risk: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.
It might not be a direct US-govt project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US-govt has the muscle and legal authority to take control of it.
A good deal for everyone involved really. These companies get to make bank and technology that furthers their market dominance, the US-govt gets potentially "Manhattan project"-level pivotal technology— it's elites helping elites.
Unless China handicaps their progress as well (which they won't; see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
> If China marches towards a dystopia, why should Europe dutifully follow?
I think the more relevant question is: Do you want to live in a Chinese dystopia, or a European one?
A European dystopia won't be AI borne, so this is a false dilemma.
What I meant is: Europe can choose to regulate as they do, and end up living in a Chinese dystopia because the Chinese will drastically benefit from non-regulated AI, or they can create their own AI dystopia.
A non-AI dystopia is the least likely scenario.
If you are suggesting that China may use AI to attack Europe, they can invest in defense without unleashing AI domestically. And I don't think China will become a utopia with unregulated AI. My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have. But if things go sideways they may regret it too.
Not attack, just influence. Destabilize, if you want. Advocate regime change, sabotage trust in institutions. Being on the defensive in a propaganda war doesn't really work.
With the US having already lost the ideological war with Russia and China, Europe is very much next.
> China may use AI to attack Europe
No, just control. America exerts influence and control over Europe without having had to attack it in generations.
> If you are suggesting that China may use AI to attack Europe
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: It is a dystopia, and that dystopia is responsible for much of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).
Do Zambians currently live in an American dystopia? I think they just do their own thing and don't care much what America thinks as long as they don't get invaded.
We don't know whether pushing towards AGI is marching towards a dystopia.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
As with nuclear weapons, there is non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
> We don't know whether pushing towards AGI is marching towards a dystopia.
We do know that. By literally looking at China.
> The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
AGI aligned with whom?
This thought process it not different than it was with nuclear weapons.
The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
Yes, it was the same with nukes, each side had to build them because the other side was building them.
Only countries with nuclear weapons had an actual seat at the table when the world banned new nuclear weapon programs.
That is why we see the current AI competition and some attempts from companies to regulate it so that "it is safe only in their hands".
https://time.com/6288245/openai-eu-lobbying-ai-act/
Compare the other American "innovations" that Europe mostly rejects.
> Lots of people in academia and industry
Mostly OpenAI and DeepMind and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.
For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
If we think of “making money” as having more revenue than expenses a lemonade stand makes significantly more money than OpenAI.
The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.
The EU does, and has passed the AI Act to rein in the worst consequences of this nuclear weapon. It has not been received well around here.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.
To be fair, many of us arrived at the idea that AI was humanity's inevitable endpoint ahead of, and independently of, whether we would ever see it in our lifetimes. It's easy enough to see how people could independently converge on such an idea. I don't see that view as related to atheism in any way other than it creating space for the belief, in the same way it creates space for many others.
I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningful lives. However, I currently don't see how our current levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.
I guess they think that the “digital god” has a chance to become real (and soon, even), unlike the non-digital one?
We'll be debating whether or not "AGI is here" in philosophical terms, in the same way people debate if God is real, for years to come. To say nothing of the untaxed "nonprofit" status these institutions share.
Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").
Roko's Basilisk is basically Pascal's wager with GPUs.
I don't know what sources you're reading. There's so much eye-batting I'm surprised people can see at all.
Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.
How is an LLM more powerful than any nuclear weapon? Seriously curious.
Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
the problem is, none of that needs to happen. If the AI can start coming up with novel math or physics, it's game over. Whether the AI is "sentient" or not, being able to break that barrier would send us into an advancement spiral.
None of my argument depends on the AI being sentient.
You are surely correct that there are weaker imaginable AIs than the strongly superhuman AI that OpenAI and I are talking about which would still be more powerful than nuclear weapons, but they are more debatable. For example, whether discovering new physics would permit the construction of new, more powerful weapons is debatable; it didn't help Archimedes or Tipu Sultan. So discussing such weak claims is likely to end up off in the weeds of logistics and speculation about exactly what kind of undiscovered physics and math would come to light. Instead, I focused on the most obviously correct ways that strongly superhuman AI would be more powerful than nuclear weapons.
These may not be the most practically important ways. Maybe any strongly superhuman AI would immediately discover a way to explode the sun, or to control people's minds, or to build diamondoid molecular nanotechnology, or to genetically engineer super-plagues, or to collapse the false vacuum. Any of those would make nuclear weapons seem insignificant. But claims like those are much more uncertain than the very simple question before us: whether what OpenAI is trying to develop would be more powerful than nuclear weapons. Obviously it would be, by my reasoning in the grandparent comment, even if this isn't a false vacuum, if the sticky fingers problem makes diamondoid nanotechnology impossible, if people's minds are inherently uncontrollable, etc. So we don't need to resolve those other, more difficult questions in order to do the much easier task of ranking OpenAI's objective relative to nuclear weapons.
Most of us are batting our eyelashes as rapidly as possible but have no idea how to stop it.
have they started hiring people to make maglev trains and permaculture gardens all around urban areas yet?
It'd be odd if people batted an eye before the first nuclear weapon came to be but aren't batting one now.
Well, because it's obviously bullshit and everyone knows it. Just play the game and get rich like everyone else.
Are you sure about that? AI-powered robotic soldiers are around the corner. What could go wrong...
> AI agent robot soldiers that are as inept as ChatGPT
Sounds like payola for the enterprising and experienced mercenary.
Robot soldiers != AGI
Ooo I know, Cybermen! Yay.
We're all too busy rolling our eyes.
This is the moment where we fumble the opportunity to avoid a repeat of Web 1.0's ad-driven race to the bottom
Look forward to re-living that shift from life-changing community resource to scammy and user-hostile
I feel this. I had a very productive convo with an LLM today and realized that a huge part of the value of it was that it addressed my questions in a focused way, without trying to sell me anything or generate SEO rankings or register ad impressions. It just helped me. And that was incredibly refreshing in a digital world that generally feels adversarial.
Then the thought came: when will they start showing ads here?
I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!
> I like to think that if we learn to pay for it directly
The $20 monthly payment is not enough though and companies like Google can keep giving away their AI for free till OpenAI is bankrupt.
The "good" thing is this is all way too expensive to be ad-supported. Maybe there will be some ad-supported products using very small/cheap models, but the leading edge stuff is always going to be at the leading-edge of compute usage too, and someone has to pay the bill. Even with investors subsidizing a lot of the costs, it's still very expensive to use the best models heavily for real work.
Subscription services can sell ads too. See Hulu, or Netflix. Spotify might not play "radio ads" if you pay, but it will still advertise artists on your home screen.
These models being expensive leads me to think they will look at all methods of monetization possible when seeking profitability. Rather than ads being off the table, it could feasibly make ads be on the table sooner.
Maybe it could happen, but the revenue that can be made per user from ads is basically insignificant compared to the compute costs. They’d be pissing off their users for a very marginal benefit.
There's no such thing as too expensive to be ad-supported. There might be too expensive to be ONLY ad-supported, but as a revenue stream, ads can be layered on top of other sources. For example, see the ads shown on a $100/mo cable package!
It is guaranteed that the models will become salespeople in disguise with time. This is just how the world works. Hopefully competition can stave it off but I doubt it.
It's also why totalitarian regimes love it, they can simply train it to regurgitate a modified version of reality.
I'm hoping there will always be a good LLM option, for the following reasons:
1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research and development, combined with techniques like distillation, will keep the best open LLMs pretty good, if not the best.
2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself.
These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
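To make that concrete, here is a minimal sketch of what "a good open LLM alternative" can look like in practice today: a quantized open-weight model running entirely on local hardware via the llama-cpp-python bindings. The model path, quantization level, and generation settings below are placeholders, not recommendations.

    from llama_cpp import Llama

    # Load a 4-bit-quantized GGUF model from disk; it runs in ordinary CPU RAM.
    llm = Llama(
        model_path="models/some-open-model.Q4_K_M.gguf",  # placeholder path to a model you downloaded yourself
        n_ctx=4096,     # context window
        n_threads=8,    # CPU threads to use
    )

    # OpenAI-style chat call, except nothing ever leaves your machine.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Give me three reasons open-weight models matter."}],
        max_tokens=256,
        temperature=0.7,
    )
    print(out["choices"][0]["message"]["content"])

No ads, no SEO, no engagement metrics: the only agenda is whatever was baked into the weights you chose to download.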
For all of the skepticism I've seen of Sam Altman, listening to interviews with him (eg by Ben Thompson) he says he really does not want to create an ad tier for OpenAI.
Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)
Now the hard part. Design a policy to stop this from happening while balancing the need to maintain competition, innovation, etc.
That step, along with getting politicians to pass it, is the only thing that will stop that outcome.
Ads intermixed into llm responses is so clearly evil that openai will never do it so long as the nonprofit has a controlling stake (which it currently still has), because the nonprofit would never allow it.
The insidious part is it doesn't have to be so blatant as adverts, you can achieve a lot by just slight biases in text output.
Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
>LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle
replace LLMs with TV, or smartphones, or maybe even mcdonald's, and you've got the same idea. through TV, corporations got to control a lot of the social world and people's behavior.
Ads / SEO but with AI responses was so obviously the endgame given how much human attention it controls and the fact that people aren't really willing to pay what it costs (when decent free, open-weights alternatives exist)
At least we can self-host this time around
In the future AI will be commoditized. You'll be able to buy an inference server for your home in the form factor like a wi-fi router now. They will be cheap and there will be a huge selection of different models, both open-source and proprietary. You'll be able to download a model with a click of a button. (Or just torrent them.)
That can be done with today's desktops already, if you beef up the specs slightly.
Cheap Chinese single-board computers made specifically for inference are the missing puzzle piece. (No, GPUs, and especially Nvidia, are not that.)
Also, the current crop of AI agents is just utter crap. But that's a skill issue on the part of the people coding them; expect actual advances here soon.
Aren't DGX Spark or Framework Desktop cheap enough?
Not really. Eventually we'll get something with the price and availability of a home appliance. (Wi-fi router tier.)
The smaller models are becoming ever more capable now. Add to that a suite of tools and integrations and you can do most of what you do online within the infra at home.
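For what it's worth, the "infra at home" part is already workable at the hobbyist tier: both llama.cpp's llama-server and Ollama expose an OpenAI-compatible HTTP endpoint, so anything on your LAN can talk to your own box instead of a cloud API. A rough sketch, where the host, port, and model name are placeholders for whatever your local server is actually configured with:

    import requests

    # Placeholder endpoint: llama.cpp's llama-server defaults to port 8080,
    # Ollama to 11434; both speak the OpenAI chat-completions format.
    LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

    resp = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "local-model",  # placeholder; the server maps this to whatever model is loaded
            "messages": [{"role": "user", "content": "Summarize my notes from today."}],
            "max_tokens": 200,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Swap the hostname for a cheap box in a closet and you have something close to the "wi-fi router tier" appliance described above, minus the one-click packaging.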
> We did not really know how AGI was going to get built, or used (...)
Altman keeps on talking about AGI as if we're already there.
I don't agree with Tyler on this point (although o3 really is a thing to behold)
But reasonable people could argue that we've achieved AGI (not artificial super intelligence)
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Fwiw, Sam Altman will have already seen the next models they're planning to release
The goalposts seem to have shifted to a point where the "AGI" label will only be retroactively applied to an AI that was able to develop ASI
How many times must we repeat that AGI is whatever will sell the project? It means nothing. Even philosophers don't have a good definition of "intelligence".
AGI just refers roughly to the intelligence it would take to replace most if not all white collar workers. There is no precise definition, but it's not meaningless.
An AGI would be able to learn things while you are talking to it, for example.
Isn't this already the case? Perhaps you mean in a non-transient fashion, i.e. internalizing the in-context learning into the model itself, as a kind of ongoing training, rather than a "hack" like writing notes or adding to a RAG database or whatever.
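To spell out what that "hack" looks like versus genuine in-weights learning, here's a deliberately toy sketch (not any vendor's actual memory feature): the "memory" lives entirely outside the model, and learning is simulated by splicing retrieved notes back into each prompt.

    # Toy note-taking/RAG workaround: the model's weights never change;
    # an external store plus retrieval fakes the appearance of learning.
    notes: list[str] = []

    def remember(fact: str) -> None:
        notes.append(fact)

    def build_prompt(question: str, top_k: int = 3) -> str:
        # Naive keyword-overlap scoring stands in for embeddings + a vector database.
        q_words = set(question.lower().split())
        ranked = sorted(notes, key=lambda n: -len(set(n.lower().split()) & q_words))
        context = "\n".join(ranked[:top_k])
        return f"Previously saved notes:\n{context}\n\nQuestion: {question}"

    remember("The user's project deadline is Friday.")
    remember("The user prefers answers as bullet points.")
    print(build_prompt("When is my project deadline?"))

Real "learning while you talk to it" would mean the next conversation benefits even after the notes are deleted, because the weights themselves moved.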
They still can't reliably do what humans can do across our attributes. That's what AGI was originally about. They have become quite capable, though.
We overestimate the impact of AGI. We need 1 Einstein, not millions of Joe IQ100
I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of Open AI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
That world never existed. Yes, pockets did - IT professionals with broadband lines and spare kit hosting IRC servers and phpBB forums from their homes free of charge, a few VC-funded companies offering idealistic visions of the net until funding ran dry (RIP CoHost) - but once the web became privatized, it was all in service of the bottom line by companies. Web 2.0 onwards was all about centralization, surveillance, advertising, and manipulation of the populace at scale - and that intent was never really a secret to those who bothered to pay attention. While the world was reeling from Cambridge Analytica, us pre-1.0 farts who cut our teeth on Telnet and Mosaic were just kind of flabbergasted that ya'll were surprised by overtly obvious intentions.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
I don’t think the parent was saying that everyone’s intentions were pure until recently, but rather that naked greed wasn’t cool before, but now it is.
The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.
As recently as the Silicon Valley tv show, the joke was that every startup pitch claimed they were “making the world a better place”.
"I don't want to live in a world where someone else makes the world a better place better than we do." -Gavin Belson
What we are observing is the effects of profit maximization when the core value to the user is already fulfilled. It's a type of optimization that is useful at the beginning but eventually turns pathological.
When we already have efficient food production that drove down costs and increased profits (a good thing), what else is there for companies to optimize for, if not loading it with sugar, putting it in cheap plastic, bamboozling us with ads?
This same dynamic plays out in every industry. Markets are a great thing when the low hanging fruit hasn't been picked, because the low hanging fruit is usually "cut the waste, develop basic tech, be efficient". But eventually the low hanging fruit becomes "game human's primitive reward circuits".
I think it did and still does today - every single time an engineer sees a problem an starts an open-source project to solve it - not out of any profit motive and without any monetization strategy in mind, but just because they can, and they think the world would be better off.
> That world never existed
It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.
I have to agree. That's one of the dangers of today's world; the risk of believing that we never had a better one. Yes, the altruism of yesteryear was partially born of convenience, but it still existed. And I remember people actually believing it was important and acting as such. Today's cynicism and selfishness seem a lot more arbitrary to me. There's absolutely no reason things have to be this way. Collectively, we have access to more wealth and power now than we ever did previously. By all accounts, things ought to be great. It seems we just need the current generation of leaders to re-learn a few lessons from history.
You and I are on the same path, just at different points in the journey. Your response is very similar to my own tone and position a decade ago, trying to celebrate what we had before in an attempt to shepherd others towards a better future together. Time wore down that naivety into the cynicism of today, because I’ve come to realize that those celebrations simply coddle those who do not wish to put in the effort for change and yearn for a return to past glories.
We should acknowledge the past flatly and objectively for what it was and spend more time building that future, than listening to the victors of the past brag and boast, content to wallow in their accomplishments instead of rejoining contributors to tomorrow. The good leaders of yesteryear have stepped aside in lieu of championing newer, younger visionaries; those still demanding respect for what they did fifty years ago in circumstances we can only dream about, are part of the problem.
Sure it has. For every Woz, there was a Jobs; for every Linus, a Bill (Gates). For every starry-eyed engineer or developer who just wants to help people, there are business people who will pervert it into an empire and jettison them as soon as practical. For every TED, there’s a Davos; for every DEFCON, there’s a glut of vendor-specific conferences.
We should champion the good people who did the good things and managed to resist the temptations of the poisoned apple, but we shouldn’t hold an entire city on a pedestal because of nostalgia alone. Nobody, and no entity, is that deserving.
> For every Woz, there was a Jobs; for every Linus, a Bill (Gates)
Nobody said there were no bastards. Just that they didn’t have dominion. We let this happen, in part by being lazy and cynical.
I would argue that cynicism is born of attempting to assert accountability and finding repeated harm from said attempts, rather than some intrinsic pre-existing apathy or laziness.
I think most people will snitch on bad behavior as children. However, our systems often allow other children to discipline the snitch, rather than correct the negative behavior the snitch raised. We see it in adult systems as well: whistleblowers often end up with substantially shorter and poorer lives for attempting to assert accountability or consequences on those who committed them, while the perpetrators often enjoy lives of immense wealth and reward regardless of the whistleblower's actions.
If you want people to stop being "lazy" and "cynical", then you have to support them when systems turn against them. In my experience, none of ya'll actually want to also walk out of work when layoffs happen following a profitable quarter for no other reason than to juice the share price, none of ya'll also want to walk off the job because your employer is taking contracts from authoritarian regimes, none of ya'll also want to put yourselves in the line of fire and risk harm over your purported values.
Don't blame us cynics when we have the battle scars showing our commitment to a better tomorrow. What have you done to prevent cynicism?
Coincidentally, and as another pre-1.0 fart myself :-) -- one who remembers when Ted Nelson's "Computer Lib / Dream Machines" was still just a wild hope -- I was thinking of something similar the other day (not USPS-specific for hosting, but I like that).
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public: https://newpublic.org/ "Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war. https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War "According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads: https://www.urban.org/policy-centers/cross-center-initiative... "In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's about US$700 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, like perhaps US$70 per person per year (or US$20 billion per year)?
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big government suggestion be me from 2010 -- also mostly ignored (until now 15 years later the USA is in political crisis over supply chain dependency and still isn't doing anything very related to it yet): "Build 21000 flexible fabrication facilities across the USA" https://web.archive.org/web/20100708160738/http://pcast.idea... "Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
They deeply believe in the Ayn Rand mindset that the system that brings them the most individual wealth is also the best system for humanity as a whole.
The problem with that mindset is that money is a proxy for the Marxist idea of inherent value. The distinction does not matter when you are just an average dude, doubling your money doubles the amount of material wealth you have access to.
But once you control a significant enough chunk of money, it becomes clear the pie doesn't get any bigger the more shiny coins you have, you only have more relative purchasing power, automatically making everyone else poorer.
When people that wealthy are that delusional... With few checks or balances from politics, media, or even social media... I don't think humanity as a whole is in for a great time.
They are roughly as delusional as everyone else. There is an innate human bias to convince yourself that what benefits you is also best for everyone else.
It’s just that their biases have much more capacity to cause damage as their wealth gives them so much power.
Yes, people are generally delusional; importantly though, some people are much less so (and some more so). Being connected to reality, being grounded, are learnable traits (but not very valuable to CEOs and narcissists).
> They are roughly as delusional as everyone else.
I would bet serious money that people who believe in Ayn Rand are generally more delusional than others, and the same goes for the ultra-wealthy living in a bubble of sycophants.
And their wealth gives them much more capacity - and motive - to cause damage.
It got you the 20th century
Which Ayn Rand book says that?
Hopelessly over-idealistic premise. Sama and pg have never been anything other than opportunistic muck. This will be my last ever comment on HN.
I feel this so hard, I think this may be my last time using the site as well. They don't care about advancement, they only care about money.
Like everything, it's projection. Those who loudly scream against something are almost always the ones engaging in it.
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
Oh, I'm not saying they ever believed more than their self-centered views, but that in a world that leaned more liberal there was value in trying to frame their work in those terms. Now there's no need to pretend.
And to those who say "at least now they're honest," I say "WHY?!" Unconditionally being "good" would be better than disguising selfishness as good. But that's not really a thing. Having to maintain the pretense of doing good puts significant boundaries on what you can get away with, and increases the consequences when people uncover some shit.
Condoning "honest liars" enables a whole other level of open and unrestricted criminality.
inb4 deleted
> They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
Why are you changing the subject? The “War on Terror” was never intended to spread democracy as far as I know; democracy was a means by which to achieve the objective of safety from terrorism.
> The “War on Terror” was never intended to spread democracy as far as I know;
Regardless of intent, it was most definitely sold to the American public on that premise.
Is it reasonable to assign the descriptor “authoritarian” to anyone who simply does not subscribe to the common orthodoxy of one faction in the american culture war? That is what it seems to me is happening here, though I would love to be wrong.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
Donating millions to a fascist president (in Altman’s case) seems pretty authoritarian to me. And he seems happy enough hanging out with Thiel and other Yarvin groupies.
Yup, if Elon hadn't gotten so jealous and spiteful to him I'm sure he'd be one of Elon's leading sycophants.
I think this is more a symptom of the level of commonplace corruption in the American regulatory environment than any indication of the political views of the person directing such donations.
Tim Apple did it too, and we don’t assume he’s an authoritarian now too, do we? I imagine they would probably have done similarly regardless of who won the election.
It sure seems like an endorsement, but I think it’s simply modern corporate strategy in the American regulatory environment, same as when foreign dignitaries stay in overpriced suites in the Trump hotel in DC.
Those who don’t kiss the ring are clearly and obviously punished. It’s not in the interest of your shareholders (or your launch partners) to be the tall poppy.
I do feel that way about every CEO in those cheery inauguration day photos (https://apnews.com/article/trump-inauguration-tech-billionai...). Zuckerberg, Bezos, Pichai, Cook, Altman, Musk, Thiel: enablers of fascism, every one. However, it should be noted that Cook donated from his own name and not Apple. Guess he didn't want his shittiness to rub off on his company.
As for the shareholders, Cook was more than happy to "do the right thing" in the past, even when under pressure (https://en.wikipedia.org/wiki/Apple–FBI_encryption_dispute).
As far as “enablers” of fascism - would we have the same amount of fascism if they didn’t participate? I posit that the answer is yes.
Furthermore, you are dead wrong on the last point. The “dispute” between the FBI and Apple is a fiction designed to restore public trust in Apple’s privacy stance following the Snowden revelations about FAA702 (aka PRISM) that shows that companies allow the USG warrantless access to their data in realtime via special APIs or portals.
https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...
The tech executives came to DC to meet with Obama in the wake of the whole Snowden thing to discuss it; though it was widely reported as being a consult on fixing healthcare.gov (lol), a few outlets reported it correctly. There are photos of the meeting kicking around.
I imagine the Apple-vs-the-FBI narrative (which is widely regarded as true and has resulted in mainstream false belief, such as yours demonstrated here) was borne directly out of these meetings.
Apple intentionally maintains access to the majority of their users’ data by the USG and the CCP (in their respective zones). It is required for them to continue operating in their current fashion. Every iMessage and (basically) every file in iCloud (photos included) is readable by Apple and the government. Apple has the technical capability to prevent this by migrating their userbase to e2ee systems, and they do not.
I firmly believe that this is by design, and that they would be very severely punished, legally or extralegally, if they changed the status quo.
I’m not sure exactly what they meant by “liberal” in this case, but since they put it in contrast with authoritarianism, I assume they meant it in the conventional definition of the word (where it is the polar opposite of authoritarianism). Instead of the American politics-as-sports definition that makes it a synonym for “team blue.”
correct. "liberal" as in the general ideas that ie expanding the franchise is important, press freedoms are good, that government can do good things for people and for capital etc. Wikipedia's intro paragraph does a good job of describing what I was getting at (below). In prior decades Republicans in the US would have been categorized as "liberal" under this definition; in recent years, not so much.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
No, "authoritarian" is a word with a specific meaning. I'm not sure about applying it to Sam Altman, but Marc Andreessen has expressed views that I consider authoritarian in his victory lap tour since last year's presidential election.
No I don't think it is. I DO think those two people want to be in charge (along with other billionaires) and they want the rest of us to follow along, which is in my book an authoritarian POV. pmarca's recent "VC is the only job that can't be done by AI" is a good example of that; the rest of us are to be managed and controlled by VCs and robots.
are you aware of worldcoin?
altman building a centralised authority of who will be classed as "human" is about as authoritarian as you could get
Worldcoin is opt-in, which is the opposite of authoritarian. Nobody who doesn’t like it is required to participate.
it is opt in until they manage to convince some government to allow them to be the contracted provider of "humanness verification" that is then made a prerequisite to access services.
Comcast is also opt-in. Except, in many areas there are no real alternatives.
I doubt Worldcoin will actually manage to corner the market. But the point is, if it did, bad things would happen. Though, that’s probably true of most products.
it's always opt-in until it isn't
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
Huh, so Elon's lawsuit worked? The nonprofit will retain control? Or is this just spin on a plan that will eventually still sideline the nonprofit?
The whole article feels like justifying a bunch of legal nonsense to get to the end result of removing the capped structure.
To be specific: The nonprofit currently retains control. It will stop once more dilution sets in.
Yes and no. It sounds like the capped profit PPU holders will get to have their units convert 1:1 with unlimited profit equity shares, which are obviously way more valuable. So the nonprofit loses insanely in this move and all current investors and employees make a huge amount.
It sounds more like the state attorneys general won.
> transition to a Public Benefit Corporation
Can some business person give us a summary on PBCs vs. alternative registrations?
A PBC is just a for-profit company that has _some_ sort of specific mandate to benefit the "public good" - however it chooses to define that. It's generally meant to provide some balance toward societal good over the more common, strictly shareholder profit-maximizing alternative.
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...
[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...
The charter of a public-benefit corporation gives the company's board and management a bit of legal cover for making decisions that don't serve to maximize, or may even limit, financial returns to shareholders, when those decisions are made for the benefit of the public.
But the reverse isn't true, right? It doesn't prevent the board from maximizing financial returns even when doing so would harm the "public".
Reality: It is the same as any other for-profit with a better-sounding name. It confuses a lot of people into thinking it's a non-profit without being one.
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
you could've just asked this to chatgpt....
There are a lot of good points here, from multiple vantage points, on how imminent AGI is and on whether it is even viable at all, metaphysically or logistically.
I personally think the conversation, including obviously the post itself, has swung too far toward how AGI can or will potentially affect the ethical landscape around AI, however. I think we really ought to concern ourselves with addressing and mitigating the effects AI has already brought - both good and bad - rather than engaging in excessive speculation.
That's just me, though.
That’s an intentional misdirection, and an all too common one :(
SamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something now.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, which is why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office first-class, which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple), and just like MS they'll do it in-house or have the providers bid against each other.
The only way OpenAI's David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
It's doubtful if there even is a race anymore. The last significant AI advancement in the consumer LLM space was fluent human language synthesis around 2020, with its following assistant/chat interface. Since then, everything has been incremental — larger models, new ways to prompt them, cheaper ways to run them, more human feedback, and gaming evaluations.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Saying LLMs have only incrementally improved is like saying my 13-year-old has only incrementally improved over the last 5 years. Sure, it's been a set of continuous improvements, but that has taken them from a toy to genuinely, insanely useful.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users, and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from Open AI at a later date.
> any company sitting this out risks being unable to capture users back from Open AI at a later date.
Why? I paid for Claude for a while, but with Deepseek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that Deepseek and Gemini can basically do all you need for free.
>I'm very rarely unique in my behaviour
I cannot stress this enough: if you know what Deepseek, Claude, Mistral, and Perplexity are, you are not a typical consumer.
Arguably, if you have used a single one of those brands you are not a typical consumer.
The vast majority of people have used ChatGPT and nothing else, except maybe clicking on Gemini or Meta AI by accident.
I doubt it. Google is shoving Gemini on everyone’s face through search, and Meta AI is embedded in every Meta product. Heck, instagram created a bot marketplace.
They might not “know” the brand as well as ChatGPT, but the average consumer has definitely been exposed to those at the very least.
DeepSeek also made a lot of noise, to the point that, anecdotally, I’ve seen a lot of people outside of tech using it.
I can't square how OpenAI can capture users and presumably retain them when the incumbents have been capturing users for multiple decades. Why can't the incumbents retain theirs?
If every major player has an AI option, I'm just not understanding how, just because OpenAI moved first or got big first, the hugely, massively successful companies that did the same thing for multiple decades don't have the same advantage.
Who knows how this will play out, but user behavior is always somewhat sticky and OpenAI now has 400M+ weekly active users. Currently, I'm not sure there is much of a moat, as many would jump if, say, Google released a model that is 10x better. However, there are myriad ways that OpenAI could slowly try to make their userbase even stickier:
1. OpenAI is apparently in the process of building a social network.
2. OpenAI is apparently working with Jonny Ive on some sort of hardware.
3. OpenAI is increasingly working on "memory" as a LLM feature. Users may be less likely to switch as an LLM increasingly feels like a person that knows you, understands you, has a history with you, etc.
4. Google and MSFT are leveraging their existing strengths. Perhaps you will stick with Gemini given deep integration with Android, Google Drive, Sheets, Docs, etc.
5. LLMs, as depressing as this sounds, will increasingly be used for romantic/friend purposes. These users may not want to switch, as it would be like breaking up and finding a new partner.
6. Your chat history, if it can't be easily exported/imported, may be a sticky feature, especially if it can be improved (e.g. easily search, cross-reference, chats, like a supercharged interconnecting note app with brains).
I could list 100 more of these. Perhaps none of the above will happen, but again, they have 400M weekly users and they will find ways to keep them. It's a lot easier to keep users that have a habit of showing up than to get them in the first place. There's a reason that Google is treating this like an emergency; they are at serious risk of having their search cash cow permanently disrupted if they don't act fast to win.
6 (can't export/import chat history) is already a wrap, since every user is prohibited from using ChatGPT chat logs to "develop models that compete with OpenAI." If you export your chats and give them to Gemini or Claude, or post them on X and Grok reads them, then you just violated the OpenAI terms; that's grounds for a permaban or a lawsuit for breach of contract (lol) … maybe your companies accept this risk, but I'm in malicious compliance mode.
Google is alright, but they have a similar stupid noncompete vendor lock-in rule, and no way to opt out of training, so there's no real reason to trust Google. Yeah, they could ship tool use in reasoning to catch up to o3, but it'll just be catching up and not passing unless they fix the stupid legal terms.
Claude IDK how to trust, they train on feedback and everything is feedback, and they have the noncompete rule written even more broadly, dumb to use that.
Grok has a noncompete rule but also has a way to opt out of training, so it’s on the same tier of ClosedAI. I use it sometimes for jokey toy image generation crap but there’s no way to use it for anything serious since it has a copypasted closed ai prohibition
Mistral needs better models and simpler legalese, it’s so complicated and impossible to know which of the million legal contracts applies
IMHO meta is the only player, but they shot themselves in the foot by making Llama 4 too big for the local llama community to even use, super dumb, killed their most valuable thing which was the community.
That means the best models we can use for work without needing to worry about a lawsuit are Qwen and DeepSeek distills; no American AI is even in the same ballpark, and Gemma 3 is refusal king if you even hint at something controversial. Basically, America is getting actively stomped by China in AI right now, because their stuff is open and interoperable, and ours is closed and has legal noncompete bullshit. What can we actually build that doesn't compete with these companies? Nothing.
Very thought-provoking reply. #3 sounds the most sticky to me, in the product sense that you'd build "your own LLM/agent" and plug it into other services. I heard this on a product podcast [1]; think of it like Okta SSO integration: access controls for your personal/sensitive LLM stuff vs. all other services trying to get you to use their LLM.
#5 stands out as well as a substantial barrier.
The rest, to me, are sticky, but no more uniquely sticky than any other service that retains data. Like the switching cost of email or a browser. It does stick, but it's not insurmountable, and once the switch is made, it's like: why did I wait so long? (I'm a Safari user!)
Anyway, thanks for the thoughtful reply.
[1] https://www.reforge.com/podcast/unsolicited-feedback/the-gre...
No, it's still just a toy. Until they can make the models actually consistently good at things, they aren't going to be useful. Right now they still BS you far too much to trust them, and because you have to double check their work every time they are worse than no tool at all.
To extend your illustration, 5 years ago no one could train an LLM with the capabilities of a 13 year old human; now many companies can both train LLMs and integrate them into products.
> taken it from a toy to genuinely insanely useful.
Really?
It's been five years. There is no AI killer app. Agentic coding is still hot garbage. Normal people don't want to use AI tools despite them being shoved into every SaaS under the sun. LLMs are most famous among non-tech users for telling you to put glue into pizza. No one has been able to scale their chatbots into something profitable, and no one can put a date on when they'll be profitable.
Why are you still pretending anything is going to come out of this?
Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
>In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
We do appear to be hitting a cap on the current generation of auto-regressive LLMs, but this isn't a surprise to anyone on the frontier. The leaked conversations between Ilya, Sam and Elon from the early OpenAI days acknowledge they didn't have a clue as to architecture, only that scale was the key to making experiments even possible. No one expected this generation of LLMs to make it nearly this far. There's a general feeling of "quiet before the storm" in the industry, in anticipation of an architecture/training breakthrough, with a focus on more agentic, RL-centric training methods. But it's going to take a while for anyone to prove out an architecture sufficiently, train it at scale to be competitive with SOTA LLMs, and perform enough post-training, validation, and red-teaming to be comfortable releasing it to the public.
Current LLMs are years and hundreds of millions of dollars of training in. That's a very high bar for a new architecture, even if it significantly improves on LLMs.
ChatGPT was not released to the general public until November 2022, and the mobile apps were not released until May 2023. For most of the world LLM's did not exist before those dates.
LLM AI hype started well before ChatGPT.
This site and many others were littered with OpenAI stories calling it the next Bell Labs or Xerox PARC and other such nonsense going back to 2016.
And GPT stories kicked into high gear all over the web and TV in 2019 in the lead-up to GPT-2 when OpenAI was telling the world it was too dangerous to release.
Certainly by 2021 and early 2022, LLM AI was being reported on all over the place.
>For most of the world LLM's did not exist before those dates.
Just because people don't use something doesn't mean they don't know about it. Plenty of people were hearing about the existential threat of (LLM) AI long before ChatGPT. Fox News and CNN had stories on GPT-2 years before ChatGPT was even a thing. Exposure doesn't get much more mainstream than that.
> LLM AI was being reported on all over the place.
No, it wasn't.
As a proxy, here's HN results prior to November, 2022 - 13 results.
https://hn.algolia.com/?dateEnd=1667260800&dateRange=custom&...
Here's Google Trends, showing a clear uptick May 2023, and basically no search volume before (the small increase Feb. 2023 probably Meta's Llama).
https://trends.google.com/trends/explore?date=today%205-y&ge...
https://trends.google.com/trends/explore?date=today%205-y&ge...
As another proxy, compare Nvidia revenues - $26.91bln in 2022, $26.97bln in 2023, $60bln 2024, $130bln 2025. I think it's clear the hype didn't start until 2023.
You're welcome to point out articles and stories before this time period "hyping" LLMs, but what I remember is that before ChatGPT there was very little conversation around LLMs.
If you're in this space and follow it closely, it can be difficult to notice the scale. It just feels like the hype was always big. 15 years ago it was all big data and sentiment analysis and NLP, machine translation buzz. In 2016 Google Translate switched to neural nets (LSTM) which was relatively big news. The king+woman-man=queen stuff with word2vec. Transformer in 2017. BERT and ELMo. GPT2 was a meme in techie culture, there was even a joke subreddit where GPT2 models were posting comments. GPT3 was also big news in the techie circles. But it was only after ChatGPT that the average person on the street would know about it.
Image generation was also a continuous slope of hype all the way from the original GAN, then thispersondoesnotexist, the sketch-to-photo toys by Nvidia and others, the avocado sofa of DallE. Then DallE2, etc.
The hype can continue to grow beyond our limit of perception. For people who follow such news their hype sensor can be maxed out earlier, and they don't see how ridiculously broadly it has spread in society now, because they didn't notice how niche it was before, even though it seemed to be "everywhere".
There's a canyon of a difference between excitement and buzz vs. hype. There was buzz in 2022, there was hype in 2023. No one was spending billions in this space until a public demarcation point that, not coincidentally, happened right after ChatGPT.
Seems like an arbitrary distinction.
I'd say Chain-of-Thought has massively improved LLM output. Is that "incremental"? Why is that more incremental than the move from GPT-2 to GPT-3? Sure, you can say that this is when LLMs first passed some sort of Turing test, but fundamentally there was no technological difference from GPT-3 to GPT-4. In fact, I would say the quality of GPT-4 unlocked thousands (millions?) more use-cases that were not very viable with the quality delivered by GPT-3. I don't see any reason why more use-cases won't keep being unlocked by further LLM improvements.
You're saying, with a straight face, that post-2020 LLM AIs have made only incremental progress?
Yep, compared to beating the Turing test, the progress has been linear with exponentially growing investment. That's diminishing marginal returns.
Yes. But they have also improved a lot. Incremental just means that the function keeps going up without any sudden jumps. We haven't seen anything revolutionary, just evolutionary, in the last 3 years. But the models do provide 2 or 3 times more value. So their pace of advancement is not slow.
The better you know a field, the more it looks incremental. In other words, incrementalness is more a function of how much attention you pay or how deeply you research it. Relativity and quantum mechanics were also incremental. Copernicus and Kepler were incremental. Deep learning itself was incremental. Based on almost identical networks from the 90s (CNN), which were using methods from the 80s (backprop) on architectures from the 70s (neocognitron) using activation functions from the 60s and the basic neuron model from the 40s (McCulloch and Pitts), which was just a mathematization of observations made in biology via microscopy, integrated with the mathematical logic and electrical logic gates developed around the same time (Shannon), so it's just logic as formalized by Gödel and others, and it goes back to Hilbert's program, which can be extrapolated from Leibniz, etc. etc. It's not hard to say "it's really just previous thing X plus previous thing Y, nothing new under the sun" about literally anything.
"It just suddenly appeared out of nowhere" is just a perception based on missing info. Many average people think ChatGPT was a sudden innovation specifically by OpenAI seemingly out of nowhere. Because they didn't follow it.
This is the "sufficiently advanced science is indistinguishable from magic" phenomenon.
The more you know about it, the less groundbreaking it is.
Well I think you’re correct that they know the jig is up, but I would say they know the AI bubble is about to burst so they want to cash out before that happens.
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients. I know it's fun to imagine AI is some big scam like crypto, but you'd have to be ignoring a lot of genuine non-hype economic movement at this point to assume GAI isn't making any money.
Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
> AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
Doctors were using Google to diagnose patients before. The thing is, it's still the doctor delivering the diagnosis, the doctor writing the prescription, and the doctor billing insurance. Unless and until patients or hospitals are willing and legally able to use ChatGPT as a replacement for a doctor (unwise), ChatGPT is not about to eat any doctor's lunch.
Not OP, but I think this makes the point, not argues against it. Something has come along that can supplant Google for a wide range of things. And it comes without ads (for now). It’s an opportunity to try a different business model, and if they succeed at that then it’s off to the races indeed.
It makes one point: that LLMs are useful.
The other point is still suspect: that LLMs will ever scale to AGI.
Which specifically means reliability and explainability for higher-order thinking.
The writing is on the wall that LLMs are going to automate failure-tolerant work.
But the rub there is that failure-tolerant work is also tolerant of less than state of the art, cost-optimized LLMs.
Which leaves OpenAI where? AGI or bust.
And I wouldn't take that bet, when MS, Google, and Apple are alternative options.
Doctors weren't paying for Google either. If ChatGPT or other LLM AIs play that same role, the OP remains correct.
When the Wright brothers made their plane, they didn't expect that today there would be thousands of planes flying at a time.
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular, people still thought there would be those who would stick with horses.
I think you're right on the AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was roughly equivalent to the labor of 500 workers, now automated. One AI computer with some video cards is now worth some number of knowledge workers, and it never stops working as long as the electricity keeps flowing.
They did actually imagine the World Wide Web at the time of developing the first computer networks. This is one of the most obvious outcomes of a system of networked devices.
Even five years into this "AI revolution," the boosters haven't been able to paint a coherent picture of what AI could reasonably deliver – and they've delivered even less.
> Doctors are straight up using ChatGPT to diagnose patients
This makes me want to invest in malpractice lawyers, not OpenAI
The lawyers will be obsolete far faster than the doctors
Lol, they are not using ChatGPT for the full diagnosis. It's used for steps like double-checking knowledge, such as drug interactions. If you're gonna speak on something like this in a vague manner, I'd suggest you google this stuff first. I can tell you for certain that that part in particular is a highly inaccurate statement.
> Doctors are straight up using ChatGPT to diagnose patients.
Oh we know: https://pmc.ncbi.nlm.nih.gov/articles/PMC11006786/
The article you posted describes a patient using ChatGPT to get a second opinion from what their doctor told them, not the doctor themself using ChatGPT.
The article could just as easily be about “Delayed diagnosis of a transient ischemic attack caused by talking to some rando on Reddit” and it would be just as (non) newsworthy.
People aren't saying that AI as a tool is going to go bust. Instead, people are saying that this practice of spending 100s of millions, or even billions of dollars on training massive models is going bust.
AI isn't going to be the world changing, AGI, that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
But "take over the world" good? Unlikely.
Yes. The answer is yes.
The world is changing and that is scary.
They made $4 billion last year, not really "little to no money". I agree it's not clear they can justify their valuation but it's certainly not a bubble.
But didn't they spend $9 billion? If I have a machine that magically turns $9 billion of investor money into $4 billion in revenue, I need to have a pretty awesome story for how in the future I am going to be making enormous piles of money to pay back that investment. If it looks like frontier models are going to be a commodity and it is not going to be winner-take-all... that's a lot harder story to tell.
Most of that 9 billion was spent on training new models and on staff. If they stopped spending money on R&D, they would already be profitable.
> if they stopped spending money on R&D, they would already be profitable
OpenAI has claimed this. But Altman is a pathological liar. There are lots of ways of disguising operating costs as capital costs or R&D.
In a space that moves this fast and is defined by research breakthroughs, they’d be profitable for about 5 minutes.
Says literally every startup ever i.r.t. R&D/marketing/ad spend yet that's rarely reality.
> If they stopped spending money on R&D, they would already be profitable.
The news that they did that would make them lose most of their revenue pretty fast.
But only if everyone else stopped improving models as well.
In this niche you can be irrelevant in months when your models drop behind.
I guarantee you that I could surpass that revenue if I started a business that would give people back $9 if they gave me $4.
OpenAI's models are already among the most expensive; they don't have a lot of levers to pull.
There is a pretty significant difference between “buy $9 for $4” and selling a service that costs $9 per year to build and run for $4 per year. Especially when some people think that service could be an absolute game changer for the species.
It’s ok to not buy into the vision or think it’s impossible. But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
When the iPhone first came out, it was too expensive, didn't do enough, and many people thought it was a waste of Apple's time when they should have been making music players.
It's a commodity technology and VCs are investing as if this were still a winner-takes-all play. It's obviously not, if there were any doubt about that, Deepseek's R1 release should have made it obvious.
> But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
You're acting as-if OpenAI is still the only player in this space. OpenAI has plenty of competitors who can deliver similar models for cheaper. Gemini 2.5 is an excellent and affordable model and Google has a substantially better capacity to scale because of a multi-year investment in its TPUs.
Whatever first mover advantage OpenAI had has been quickly eliminated, they've lost a lot of their talent, and the chief hypothesis they used to attract the capital they've raised so far is utterly wrong. VCs would be mad to be continuing to pump money into OpenAI just to extend their runway -- at 5 Bln losses per year they need to actually consider cost, especially when their frontier releases are only marginal improvements over competitors.
... this is a bubble despite the promise of the technology and anyone paying attention can see it. For all of the dumb money employed in this space to make it out alive, we'll have to at least see a fairly strong form of AGI developed, and by that point the tech will be threatening the general economic stability of the US consumer.
> When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of apples time when they should be making music players.
This comparison is always used when people are trying to hype something. For every "iPhone" there are thousands of failures
> I started a business that would give people back $9 if they gave me $4
I feel like people overuse this criticism. That's not the only way that companies with a lot of revenue lose money. And this isn't at all what OpenAI is doing, at least from their customers' perspective. It's not like customers are subscribing to ChatGPT simply because it gives them something they were going to buy anyway for cheaper.
Cognitive dissonance is a psychological phenomenon that occurs when a person holds two contradictory beliefs at the same time.
But he said he was doing it just for love!! [1]
1: https://www.techpolicy.press/transcript-senate-judiciary-sub...
Sounds a lot like "Google+ will catch Facebook in no time".
OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.
Everybody else like you describe is trying to add some AI crap behind a button on a congested UI.
B2B market will stay open but OpenAI has certainly not peaked yet.
Facebook had immense network effects working for it back then.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
So again, I ask, what makes it sticky?
OpenAI (or, more specifically, ChatGPT) is Coca-Cola, not Facebook.
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
> Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
This will happen, and it won't be another model which Open AI can't copy, it'll be products.
I don't doubt OpenAI can create the better models, but they're no moat if they're not in better products. Right now the main product is chat, which is easy enough to build, but as integrations get deeper, how can OpenAI actually ensure it keeps traffic?
Case in point, Siri. Apple allows you to use ChatGPT with Siri right now. If Apple chooses so, they could easily remove that setting. On most devices ChatGPT lives within the confines of an app or the browser. A phone with deep AI integration is arguably a fantastic product— much better than having to open an app and chat with a model. How quickly could Open AI build a phone that's as good as those of the big phone companies today?
To draw a parallel— Google Assistant has long been better than Siri, but to use Siri you don't have to install an app. I've used both Android and iOS, and every time I'm on iPhone I switch back to Siri because in spite of being a worse assistant, it's overall a better product. It integrates well with the rest of the phone, because Apple has chosen to not allow any other voice assistant integrate deeply with the rest of the phone.
Does Google not have brand recognition and consumer goodwill? We might read all sorts of deep opinions of Google on HN, but I think Search and Chrome market share speak for themselves. For the average consumer, I'm skeptical that OpenAI carries much weight.
> For the average consumer, I'm skeptical that OpenAI carries much weight.
My friend teaches at a Catholic girls’ high school and based on what he tells me, everyone knows about ChatGPT, both staff and students. He just had to fail an entire class on an assignment because they all used it to write a book summary (which many of them royally screwed up because there’s another book with a nearly identical title).
It’s all anecdotal and whatnot but I don’t think many of them even know about Claude or Gemini, while ChatGPT has broad adoption within education. (I’m far less clear on how much mindshare it has within the general population though)
Coca Cola does insane amounts of advertising to maintain their position in the mind of the consumer. I don't think it is as sticky as you say it is for OpenAI.
> who will soon go into the professional world and bring that goodwill with them.
...Until their employer forces them to use Microsoft Copilot, or Google Gemini, or whatever, because that's what they pay for and what integrates into their enterprise stack. And the new employee shrugs and accepts it.
Just like people are forced to use web Office and Microsoft Teams, and start preferring them over Google Docs and Slack? I don't think so.
> Just like people are forced to use web Office and Microsoft Teams, and start preferring them over Google Docs and Slack? I don't think so
...yes. Office is the market leader. Slack has between a fifth and a fourth of the market. Coca-Cola's products have like 70% market share in the American carbonated soft-drink market [1].
[1] https://www.investopedia.com/ask/answers/060415/how-much-glo...
Yep, I mostly interact with these AIs through Cursor. When I want to ask it a question, there's a little dropdown box and I can select openai/anthropic/deepseek whatever model. It's as easy as that to switch.
Most of my exposure to LLMs has been through GitHub's Copilot, which has that same interface.
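(To make the "easy to switch" point concrete: the same thing holds at the API level for vendors that expose OpenAI-compatible endpoints. A minimal Python sketch, where the base URLs and model names are illustrative assumptions rather than verified values:)

    from openai import OpenAI

    # Hypothetical provider table; base URLs and model names are placeholders.
    # The point is only that "switching vendors" is a one-line config change.
    PROVIDERS = {
        "openai":   {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
        "deepseek": {"base_url": "https://api.deepseek.com",  "model": "deepseek-chat"},
    }

    def ask(provider: str, prompt: str, api_key: str) -> str:
        cfg = PROVIDERS[provider]
        client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Picking a different entry in the dict is the whole "migration":
    # ask("deepseek", "Explain this stack trace", api_key="sk-...")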
Yeah but I remember when search first started getting integrated with the browser and the "switch search engine" thing was significantly more prominent. Then Google became the default and nobody ever switched it and the rest is history.
So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
I have my own answers for these, and I'm sure all the smart people figuring out strategy at Open AI have thought about similar things.
It's not clear if Open AI will be able to overcome this commodification issue (personally, I think they won't), but I don't think it's impossible, and there is prior art for at least some of the pages in this playbook.
Yes, I think people severely underrate the data flywheel effects that distribution gives an ML-based product, which is what Google was and ChatGPT is. It is also an extremely capital-intensive industry to be in, so even if LLMs are commoditized, it will be to the benefit of a few players, and barring a sustained lead by any one company over the others, I suspect the first mover will be very difficult to unseat.
Google is doing well for the moment, but OpenAI just closed a $40 billion round. Neither will be able to rest for a while.
Yeah, a very interesting metric to know would be how many tokens of prompt data (that is allowed to be used for training) the different products are seeing per day.
> So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
Maybe the big amount of money they've given to Apple, which is their direct competitor in the mobile space. Also a good amount of money given to Firefox, which is their direct competitor in the browser space, alongside Safari from Apple.
Most people don't care about the search engine. The default is what they will use unless said default is bad.
I don't think my comment implied that the answers to these questions aren't knowable! And indeed, I agree that the deals to pay for default status in different channels is a big part of that answer.
So then apply that to Open AI. What are the distribution channels? Should they be paying Cursor to make them the default model? Or who else? Would that work? If not, why not? What's different?
My intuition is that this wouldn't work for them. I think if this "pay to be default" strategy works for someone, it will be one of their deeper pocketed rivals.
But I also don't think this was the only reason Google won search. In my memory, those deals to pay to be the default came fairly long after they had successfully built the brand image as the best search engine. That's how they had the cash to afford to pay for this.
A couple years ago, I thought it seemed likely that Open AI would win the market in that way, by being known as the clear best model. But that seems pretty unclear now! There are a few different models that are pretty similarly capable at this point.
Essentially, I think the reason Google was able to win search whereas the prospects look less obvious for Open AI is that they just have stronger competition!
To me, it just highlights the extent to which the big players at the time of Google's rise - Microsoft, Yahoo, ... Oracle maybe? - really dropped the ball on putting up strong competition. (Or conversely, Google was just further ahead of its time.)
From talking to people, the average user relies on memories and chat history, which is not easy to migrate. I imagine that's the part of the strategy to keep people from hopping model providers.
Google, MS, Apple and Meta are all quite capable of generating such a history for new users, if they'd like to.
That sounds eminently solvable.
Brand counts for a lot
Google is one of the most valuable brands ever. Everyone knows it. It is even used for "searching the web" openai is not that strong of a brand
I think for the general public ChatGPT is a much stronger brand than OpenAI itself.
Google is a far bigger brand than ChatGPT and OpenAI combined.
No one has a deep emotional connection with OpenAI that would impede switching.
At best they have a bit of cheap tribalism that might prevent some incurious people who don't care much about using the best tools noticing that they aren't.
Defacto victory.
Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.
OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.
Facebook couldn't be overtaken because of network effects. What network effects are there to a chatbot?
If you look at Gemini, I know people using it daily.
IMHO "ChatGPT the default chatbot" is a meaningful but unstable first-mover advantage. The way things are apparently headed, it seems less like Google+ chasing FB, more like Chrome eating IE + NN's lunch.
OpenAI is a relatively unknown company outside of the tech bubble. I told my own mom to install Gemini on her phone because she's heard of Google and is more likely going to trust Google with whatever info she dumps into a chat. I can’t think of a reason she would be compelled to use ChatGPT instead.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
Not sure if Google+ is a good analogy, it reminds me more of the Netscape vs IE fight. Netscape sprinted like it was going to dominate the early internet era and it worked until Microsoft bundled IE with Windows for free.
LLMs themselves aren't the moat, product integration is. Google, Apple and Microsoft already have the huge user bases and platforms with a big surface area covering a good chunk of our daily life, that's why I think they're better positioned if models become a commodity. OpenAI has the lead now, but distribution is way more powerful in the long run.
I know a single person who uses ChatGPT daily, and only because their company has an enterprise subscription.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
> OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
Social media has the benefit of network effects which is a pretty formidable moat.
This moat is non-existent when it comes to Open AI.
That reminds me of the Dictator movie.
All dissidents went into Little Wadyia.
When the Dictator himself visited it, he started to fake his name by copying the signs and names he saw on the walls. Everyone knew what he was.
Internet social networks are like that.
Now, this moat thing. That's hilarious.
The comparison of Chrome and IE is much more apt, IMO, because the deciding factor as other mentioned for social media is network effects, or next-gen dopamine algorithms (TikTok). And that's unique to them.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
Most of the planet doesn’t use chat bots at all.
Facebook fundamentally had network effects.
That's not at all the same thing: social media has network effects that keep people locked in because their friends are there. Meanwhile, most of the people I know using LLMs cancel and resubscribe to Chat-GPT, Claude and Gemini constantly based on whatever has the most buzz that month. There's no lock-in whatsoever in this market, which means they compete on quality, and the general consensus is that Gemini 2.5 is currently winning that war. Of course that won't be true forever, but the point is that OpenAI isn't running away with it anymore.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
Google+ absolutely would have won, and it was clear to me that somebody at Google decided they didn't want to be in the business of social networking. It was killed deliberately, it didn't just peter out.
Even Alibaba is releasing some amazing models these days. Qwen 3 is pretty remarkable, especially considering the variety of hardware the variants of it can run on.
ask 10 people on the street about chatgpt or gemini and see which one they know
Now switch chatgpt and gemini on them and see if they notice.
Ask 10 people on the street in 2009 about IE and Chrome and ask which one they knew.
The names don't even matter when everything is baked in.
On the other hand...If you asked, 5-6-7 years ago, 100 people which of the following they used:
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with office / windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
That's the wrong question. See how many people know Google vs. ChatGPT. As popular as ChatGPT is, Google's the stronger brand.
That's just brand recognition.
The fact that people know Coca Cola doesn't mean they drink it.
It doesn’t?
That name recognition made Coca Cola into a very successful global corporation.
About 95% of people know the Coca Cola brand, about 70% of soda drinkers in the US drink one of its sodas, and about 40% of all people in the US drink it.
Knowing and using are not the same thing.
Your numbers indicate to me their name recognition drives a big part of their value.
40% of the US is a huge customer base.
But whether the competition will emerge as Pepsi or as RC-Cola is still tbd.
Or that they would drink it if a well-designed, delicious alternative with no HFCS or sugar were marketed with funding.
The real money is for enterprise use (via APIs), so public perception is not as crucial as for a consumer product.
Ask them about Google or OpenAI and...
Agreed on Google dominance. Gemini models from this year are significantly more helpful than anything from OAI.. and they're being handed out for free to anyone with a Google account.
Makes for a good underdog story! But OpenAI is dominating and will continue to do so. They have the je ne sais quoi. It’s therefore laborious to speak to it, but it manifests in self-reinforcing flywheels of talent, capital, aesthetic, popular consciousness, and so forth. But hey, Bing still makes Microsoft billions a year, so there will be other winners. Underestimating focused breakout leaders in new rapidly growing markets is as cliche as those breakouts ultimately succeeding, so even if we go into an AI winter it’s clear who comes out on top the other side. A product has never been adopted this quickly, ever. AGI or not, skepticism that merely points to conventional resource imbalances misses the big picture and such opinions age poorly. Doesn’t have to be obvious only in hindsight if you actually examine the current record of disruptive innovation.
at least 6-9 months too late
> SamA is in a hurry because he's set to lose the race.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently. Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
Sorry but perhaps you haven't looked at the actual numbers.
Market share of OpenAI is like 90%+.
> Market share of OpenAI is like 90%+
Source? I've seen 10 to 20% [1][2].
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
Hmm ...
I probably need to clarify what I'm talking about, so that peeps like @JumpCrisscross can get a better grasp of it.
I do not mean the total market share of the category of businesses that could be labeled as "AI companies", like Microsoft or NVIDIA, on your first link.
I will not talk about your second link because it does not seem to make sense within the context of this conversation (zero mentions or references to market share).
What I mean is:
* The main product that OpenAI sells is AI models (GPT-4o, etc...)
* OpenAI does not make hardware. OpenAI is not in the business of cloud infrastructure. OpenAI is not in the business of selling smartphones. A comparison between OpenAI and any of those companies would only make sense for someone with a very casual understanding of this topic. I can think of someone, perhaps, who only used ChatGPT a couple times and inferred it was made by Apple because it was there on its phone. This discussion calls for a deeper understanding of what OpenAI is.
* Other examples of companies that sell their own AI models, and thus compete directly with OpenAI in the market it operates in (judging by their products and services), are Anthropic (w/ Claude), Google (w/ Gemini), and some others like Meta and Mistral with open models.
* All those companies/models, together, make up some market that you can put any name you want to it (The AI Model Market TM)
That is the market I'm talking about, and that is the one that I estimated to be 90%+ which was pretty much on point, as usual :).
1: https://gs.statcounter.com/ai-chatbot-market-share
2: https://www.ctol.digital/news/latest-llm-market-share-mar-20...
> that is the market that I'm talking about, and that is the one that I (correctly, as usual) estimated to be around 90% [1][2]
Your second source doesn’t say what it’s measuring and disclaims itself as from its “‘experimental era’ — a beautiful mess of enthusiasm, caffeine, and user-submitted chaos.” Your first link only measures chatbots.
ChatGPT is a chatbot. OpenAI sells AI models, including via ChatGPT. Among chatbots, sure, 84% per your source. (Not “90%+,” as you stated.) But OpenAI makes more than chatbots, and in the broader AI model market, its lead is far from 80+ percent.
TL; DR It is entirely wrong to say the “market share of OpenAI is like 90%+.”
[1] https://firstpagesage.com/reports/top-generative-ai-chatbots...
Sorry, I was off by 6% and you're right, I'm usually way more precise in my estimates.
>10%-20%
Lmao, not even in Puchal's wildest dreams.
> I'm usually way more precise in my estimates
One, you suggested OP had not “looked at the actual numbers.” That implies you have. If you were just guessing, that’s misleading.
Two, you misquoted (and perhaps misunderstand) a statistic that doesn’t match your claim. Even in your last comment, you defined the market as “companies that sell their own AI models” before doubling down on the chatbot-only figure.
> not even in Puchal wildest dreams
Okay, so what’s your source? Because so far you’ve put forward two sources, a retracted one and one that measures a single product that you went ahead and misquoted.
In 2006 IE's market share was higher than current OpenAI's market share.
[flagged]
I have no problem with 'OpenAI', so much as the individual running it and, more generally, rich financiers making the world worse in every capitalizable way and even some they can't capitalize on.
Come on, at least provide your argument, we're curious! I've brought the bear case, so what's your bull case? :)
And yet the analysis is spot on. Gemini and Claude are both clearly better, today.
there are plenty of things that I simply cannot use Gemini for
I asked Gemini today to replace the background of a very simple logo and it refused. ChatGPT did it no problem (though it did take a long time because apparently lots of people were doing image generation).
I guess Gemini just refused because of a poor filter for sensitive content. But still, it was annoying.
Very curious - what is it besides image generation?
Literally the founder of Y Combinator all but outright called Sam Altman a conniving dickbag. That’s the consensus view advanced by the very man who made him.
This seems like misinformation, are you talking about how Sam left YC after OpenAI took off? What PG said was "we didn't want him to leave, just to choose one or the other"[1].
[1]: https://x.com/paulg/status/1796107666265108940
"You could parachute [Sam] into an island full of cannibals and come back in 5 years and he'd be the king."
http://paulgraham.com/fundraising.html
That says PG thinks Sam is clever. I don't think there's any moral judgement there. The statement I posted suggests PG likes Sam and would love to keep working with him.
Google is pretty far behind. They have random one-off demos and they beat benchmarks, yes, but try to use Google's AI stuff for real work and it falls apart really fast.
People are using Gemini for real work. I prefer Claude myself, but Gemini is as good (or alternatively: as bad) as OpenAI’s models.
The only thing OpenAI has right now is the ChatGPT name, which has become THE word for modern LLMs among lay people.
That's not what early adopter numbers are showing. Even the poll from r/openai a few days ago show Gemini 2.5 with nearly 3x more votes than o3 (and far beyond Claude): https://www.reddit.com/r/OpenAI/comments/1k67bya/what_is_cur...
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
Define “real work”
So the non-profit retains control but we all know that Altman controls the board of the non-profit and I'd be shocked if he won't have significant stock in the new for-profit (from TFA: "we are moving to a normal capital structure where everyone has stock"). Which means that regardless of whether the non-profit has control on paper, OpenAI is now even better structured for Sam Altman's personal enrichment.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
We have seen how much power the board has after the firing of Altman: none.
Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining why this ~33 billion USD move benefits the public.
The board had plenty of power.
There was never a coherent explanation of its firing the CEO.
But they could have stuck with that decision if they believed in it.
The explanation seemed pretty obvious to me: They set up a nonprofit to deliver an AI that was Open.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
The explanation was pretty clear and coherent: The CEO was no longer adhering to the mission of the non-profit (which the board was upholding).
But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.
So the board had no choice but to capitulate and leave.
The question is not if they could, it is if they would.
> We have seen how much power does the board have after the firing of Altman - none.
Right; so, "Worker Unions" work.
ChatGPT is free. That's the public benefit.
Google offers a great many things for free. Should they get beneficial tax treatment for it?
PBCs have no beneficial tax treatment and neither does OpenAI.
Huh. Then yah, what the heck? Why not just be a regular corp?
Branding, and perhaps a demand from the judges. In practice it doesn't mean anything if/when they stuff the board with people who want to run it as a normal LLC.
So, what's the point of a PBC?
Not being snarky here, like what is the purported thesis behind them?
Marketing to certain types of philanthropic investors, I think?
Mostly branding, like Google's "don't be evil"
Some founders truly believe in structuring the company for the benefit of the public, but Altman has already shown he's not one of them.
They don't collect data?
That's where the real money is.
If you use it, that means you received more value than you gave up. It's called consumer surplus.
If I pay £200,000 for a car, I received more value than I gave up, otherwise I wouldn't have given the owner £200,000 for her car. No reasonable person would say the car was "free"...
> If you use it, that means you received more value than you gave up. It's called consumer surplus
This is true for literally any transaction. Actually, it's true for any rational action. If you're being tortured, and you decide it's not worth it to keep your secrets hidden any longer, you get more than you give up when you stop being tortured.
It’s only true in theory and over a single transaction, not necessarily over time. The hack that VCs have exploited for decades now is subsidizing products and acquiring competition to eventually enshittify. In this case, when OpenAI dials up the inevitable enshittification, they’ll have gotten a ton of data from their users to use for their proprietary closed AI.
That's effectively every business that isn't a complete rent-seeking monopoly. It's not a very good measure.
edit: to be clear, it's not a bad thing - we should want companies that create consumer surplus. But that's the default state of companies in a healthy market.
“Use of prison facilities is explicit admission of guilt.”
It’s called prisoners dilemma when even the government is propping this up.
Define “free”.
It’s like a free beer, but it’s Bud Light, lukewarm, and your reaction to tasting the beer goes toward researching ways to make you appreciate the lukewarm Bud Light for its marginal value, rather than making that beer taste better or less unhealthy. They’ll try very hard to convince you that they have though. It parallels their approach to AI Alignment.
This description has no business being as spot on as it is.
Makes me glad I haven't tried the Kool-aid. Uh, crap - 'scuse me, craft - IPA. Uh, beer.
I don't pay money for it?
I will give you a free beer if I can listen all your personal conversations.
free as in free beer
That's like saying AWS is free. ChatGPT has a limited use free tier just like most other SaaS products out there.
Or, alternatively, it’s much harder to fight with one hand behind your back. They need to be able to compete for resources and talent given the market structure, or they fail on the mission.
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
It seems like they lost most of their top talent - probably because of Altman.
Ok cool so what should he do today? Close up shop? Resign? Or try?
Given how they've been losing their lead recently, replacing Sam Altman with someone more innovative might be a good idea.
The moment we stop treating "bad evil man wants to get rich" as a straw man, we can heal.
Extra! Extra! Read all about it! "Bad evil man wants to get rich! We should enrich Google and Microsoft instead!"
What would they have to do to have a chance of supporting the mission they were incorporated for, and given preferential tax treatment for a decade, to make happen? Certainly not this.
Isn’t Sam already very rich? I mean it wouldn’t be the first time a guy wanted to be even richer, but I feel like we need to be more creative when divining his intentions
Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
> Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate. Being rich results in a kind of limitation of scope for ambition.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
These are all activities he is engaging in to generate money through the company he has a stake in. None of those activities have a purpose other than selling the work of his company and presenting it as a good investment which is how he gets money.
Maybe he wants to use the money in some nebulous future way, subjugating all people in a way that deals with his childhood trauma or whatever. That’s also something rich people do when they need a hobby aside from gathering more money. It’s not their main goal, except when they run into setbacks.
People are not complicated when they are money hoarders. They might have had hidden depths once, but they are thin furrows in the ground next to the giant piles of money that define them now.
> These are all activities he is engaging in to generate money through the company he has a stake in. None of those activities have a purpose other than selling the work of his company and presenting it as a good investment which is how he gets money.
So he doesn't enjoy the attention? Prestige or power? Respect?
Are you Sam Altman? Because you're making a lot of assumptions on his psyche right now.
Nah, worldcoin is now going to the US. He just wants to get richer. https://archive.is/JTuGE
"It's not about the money, it's about winning"
--Gordon Gekko
OpenAI doesn’t have the lead anymore.
Google/Anthropic are catching up, or already surpassed.
How? The internet says 400M weekly ChatGPT users, 19M weekly for Anthropic, 47.3M monthly for Gemini, 6.7M daily for Grok, and 430M for Baidu.
It seems a defining feature of nearly every single extremely rich person is the belief that they are somehow smarter than the filthy peasants, and so they decide to "educate" them in the sacred knowledge. This may take vastly different forms - genocide, war, trying to create a better government via bribes, creating a city from scratch, creating a new corporate "culture", publicly proselytizing their "do better" faith, writing books, teaching classes, etc.
St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.
Never understood his appeal. Lacks charisma. Not technically savvy relative to many engineers at OpenAI (I doubt he would pass their own intern interviews, even less so their FT ones). Very unlikeable in person (comes off as fake for some reason, like a political plant). Who is vouching for this guy? When I met him, for some reason, he reminded me of Thiel. He is no Jobs.
Altman is a clear sociopath. He's a sales guy and good executive. But he's only out for himself.
The intro sounds awfully familiar...
> Sam’s Letter to Employees.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
When I got to that part (line 1) I stopped reading.
Everything about AI really is fraudulent.
"Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler."
OpenAI admitting that they're not going to win?
Imagine having a mission of “ensure[ing] that artificial general intelligence (AGI) benefits all of humanity” while also believing that it can only be trusted in the hands of the few
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
He's very clearly stating that trusting AI to a few hands was an old, naive idea that they have evolved from. Which establishes their need to keep evolving as the technology matures.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
To the benefit of OpenAI. I think LLMs would still exist, but we wouldn't have access to them.
Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.
I wonder if this meets the requirements set by the recent round of outside investors?
Not according to Microsoft: https://www.wsj.com/tech/ai/sam-altman-satya-nadella-rift-30...
I don't see any comments about the PBC in that article (archive link: https://archive.is/cPLWd)
Is there a sport where the actual sport is moving goalposts?
There is the game of Nomic where a turn involves proposing a rule change.
From least to most speculative:
* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital
* The for-profit is changing from a capped-profit LLC to a PBC like Anthropic and xAI
* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware
* The non-profit won’t be the largest shareholder in the PBC (that will likely be Microsoft) but will retain control (super-voting shares?)
* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
Another possibility is that OpenAI thinks _none_ of the labs will achieve AGI in a meaningful timeframe, so they are trying to cash out with whatever you want to call the current models. There will only be one or two of those before investors start looking at the incredible losses.
I'm fairly sure that OpenAI has never really believed in AGI - it's like with Uber and "self driving cabs" - it's a lure for the investors.
It's just that this bait has a shelf life and it looks like it's going to expire soon.
The least speculative: PPUs will be converted from capped-profit to unlimited-profit equity shares, to the benefit of PPU holders and at the expense of OpenAI the nonprofit. This is why they are doing it.
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity
They already fight transparency in this space to prevent harmful bias. Why should I believe anything else they have to say if they refuse to take even small steps toward transparency and open auditing?
Matt Levine on OpenAI's weird capped return structure in November 2023:
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
AGI was achieved the first time a model replied "it worked when I ran it"
I think the main issue is they accidentally created an incredible consumer brand with ChatGPT. They should sell that asset to World.
ClosedAI
Does anybody outside OAI still think of them as anything other that a "normal" for-profit company?
AI actually wrote this article for them which is the craziest thing
The explosion of PBC structured corps recently has me thinking it must just be a tax loophole at this point. I can't possibly imagine there is any meaningful enforcement around any of its restrictions or guidelines.
Not a loophole, as they pay taxes (unlike non-profits), but a fig leaf to cover commercial activity with a feel-good label. The real purpose of a PBC is the legal protection it may afford the company from shareholders unhappy with less-than-maximal profit generation. It gives the board some legal space to do some good if they choose to, but carries no mandate like a real non-profit, which gets a tax break for creating a public good or service, a tax break that can be withdrawn if it does not annually prove that public benefit to the IRS.
It’s not a tax thing, it’s a power thing. PBCs transfer power from shareholders to management as long as management can say they were acting for a public benefit.
PBCs don’t get special tax treatment. As far as I know they’re taxed exactly the same as typical C or S corps.
abc.xyz: "Google is not a conventional company. We do not intend to become one"
sam altman: "OpenAI is not a normal company and never will be."
Hmmm
Can't wait to hear Ed Zitron's take on this
The recent flap over ChatGPT's fluffery/flattery/glazing of users doesn't bode well for the direction that OpenAI is headed in. Someone at the outfit appeared to think that giving users a dopamine hit would increase time-spent-on-app or some other metric - and that smells like contempt for the intelligence of the user base and a manipulative approach designed not to improve the quality of the output, but to addict the user population to the ChatGPT experience. Your own personal yes-person to praise everything you do, how wonderful. Perfect for writing the scripts for government cabinet ministers to recite when the grand poobah-in-chief comes calling, I suppose.
What it really says is that if a user wants to control the interaction and get the useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
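(For now, at least, setting your own system prompt through the API is still straightforward. A minimal sketch with the OpenAI Python SDK; the model name and prompt text are just examples:)

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system message here stands in for whatever persona the ChatGPT web UI injects.
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": "Be terse and critical. Do not flatter the user."},
            {"role": "user", "content": "Review this plan and point out its weaknesses: ..."},
        ],
        temperature=0.2,
    )
    print(resp.choices[0].message.content)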
Sounds like they need a few more Dinorwigs.
I agree that this is simply Altman extending his ability to control, shape and benefit from OpenAI. Yes, this is clearly (further) subverting the original intent under which the org was created - and that's unfortunate. But in terms of impact on the world, or even just AI safety, I'm not sure the governance of OpenAI matters all that much anymore. The "governance" wasn't that great after the first couple years and OpenAI hasn't been "open" since long before the board spat.
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
I'm not gonna get caught in the details, I'm just going to assume this is legalese cognitive dissonance to avoid saying "we want this to stop being an NFP because we want the profits."
I wonder which non-profit will be looted next.
Ed Zitron's going to have a field day with this ...
Here’s a breakdown of the *key structural changes*, and an analysis of *potential risks or concerns*:
---
## *What Has Changed*
### 1. *OpenAI’s For-Profit Arm is Becoming a Public Benefit Corporation (PBC)*
* *Before:* OpenAI LP (limited partnership with a “capped-profit” model).
* *After:* OpenAI LP becomes a *Public Benefit Corporation* (PBC).
*Implications:*
* A PBC is still a *for-profit* entity, but legally required to balance shareholder value with a declared public mission.
* OpenAI’s mission (“AGI that benefits all humanity”) becomes part of the legal charter of the new PBC.
---
### 2. *The Nonprofit Remains in Control and Gains Equity*
* The *original OpenAI nonprofit* will *continue to control* the new PBC and will now also *hold equity* in it.
* The nonprofit will use this equity stake to fund “mission-aligned” initiatives in areas like health, education, etc.
*Implications:*
* This strengthens the nonprofit’s influence and potentially its resources.
* But the balance between nonprofit oversight and for-profit ambition becomes more delicate as stakes rise.
---
### 3. *Elimination of the “Capped-Profit” Structure*
* The old “capped-return” model (investors could only make ~100x on investments) is being dropped.
* Instead, OpenAI will now have a *“normal capital structure”* where everyone holds unrestricted equity.
*Implications:*
* This likely makes OpenAI more attractive to investors.
* However, it also increases the *incentive to prioritize commercial growth*, which could conflict with mission-first priorities.
---
## *Potential Negative Implications*
### 1. *Increased Commercial Pressure*
* Moving from a capped-profit model to unrestricted equity introduces *stronger financial incentives*.
* This could push the company toward *more aggressive monetization*, potentially compromising safety, openness, or alignment goals.
### 2. *Accountability Trade-offs*
* While the nonprofit “controls” the PBC, actual accountability and oversight may be limited if the nonprofit and PBC leadership overlap (as has been a concern before).
* Past board turmoil in late 2023 (Altman's temporary ousting) highlighted how difficult it is to hold leadership accountable under complex structures.
### 3. *Risk of “Mission Drift”*
* Over time, with more funding and commercial scale, *stakeholder interests* (e.g., major investors or partners like Microsoft) might influence product and policy decisions.
* Even with the mission enshrined in a PBC charter, *profit-driven pressures could subtly shape choices*—especially around safety disclosures, model releases, or regulatory lobbying.
---
## *What Remains the Same (According to the Letter)*
* OpenAI’s *mission* stays unchanged.
* The *nonprofit retains formal control*.
* There’s a stated commitment to safety, open access, and democratic use of AI.
You missed the part where OpenAI the nonprofit gives away the value that’s between capped profit PPUs and unlimited profit equity shares, enriching current PPUs at the expense of the nonprofit. Surely, this is illegal.
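To make that concrete, here is a back-of-the-envelope sketch with entirely made-up numbers; the actual cap multiples, valuations, and ownership shares are not public at this level of detail:

```python
# Back-of-the-envelope illustration of the value-transfer concern.
# All numbers are hypothetical; nothing here reflects OpenAI's actual caps,
# valuations, or ownership shares.
investment = 100e6      # hypothetical investor stake: $100M
cap_multiple = 100      # the oft-cited ~100x cap on returns under the old PPU structure
uncapped_value = 50e9   # hypothetical eventual value of that stake as ordinary equity

capped_ceiling = investment * cap_multiple            # max payout under the cap: $10B
windfall = max(uncapped_value - capped_ceiling, 0)    # value the cap would have directed
                                                      # past investors (toward the
                                                      # nonprofit): $40B here

print(f"capped ceiling: ${capped_ceiling / 1e9:.0f}B")
print(f"uncapped value: ${uncapped_value / 1e9:.0f}B")
print(f"windfall to former PPU holders: ${windfall / 1e9:.0f}B")
```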
If you can move from capped profit to unlimited profit, it was never actually capped profit, just a fig leaf
Again?
What are the implications of this for Softbank's $40B?
hi i think it's awesome
>current complex capped-profit structure
Is OpenAI making a profit?
Can you commit to a "swords into ploughshares" goal?
We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.
What other AI players we need to convince?
Still waiting for o3-Pro.
This sounds like a good middle ground between going full capitalism and non-profit. This way they can still raise money and also have the same mission, but a weakened one. You can't have everything.
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already-struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting people opt-in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
^1 who subscribe to our services
> Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs?
Because they're concerned about AI use the same way Google is concerned about your private data.
[removed]
No, it's good that you feel this. Don't give up on tech, protest.
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once, but now we need one.
Not all non-profits are doomed. It's natural that the biggest companies will be the ones who have growth and profit as their primary goal.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
amen brother
"We made the decision for the nonprofit to retain control of OpenAI after hearing from..." [CHIEF LAW ENFORCEMENT OFFICERS IN CALIFORNIA AND DELAWARE]
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
Commercial entities aren’t social beings. They’re asocial goal-maximizers.
Threats of legal action are among the only behavioral signals it can act on while staying in its mandate. Others include regulation and the market.
This is all operating as it was designed, by humans, multiple economic cycles ago.
When I read that, I was actually fairly surprised at how brazen they were about who they called on for this action. They just said it outright.
Lots of words to say OpenAI will remain an SABC (Sam Altman Benefit Corporation)
> We are committed to this path of democratic AI.
So where do I vote? How do I become a candidate to be a representative or a delegate of voters? I assume every single human is eligible for both, since OpenAI serves humanity?
Democratic AI but we don’t want it regulated by any democratic process
Democratic People's Republic of AI
I wonder if democracy is some kind of corporate speech homonym of some totally different concept I'm familiar with. Perhaps it's even an interesting linguistic case where a word is a homonym of its antonym?
Edit: also apparently known as contronym.
> wonder if democracy is some kind of corporate speech
It generally means broadening access to something. Finance loves democratising access to stupid things, for example.
> word is a homonym of its antonym?
Inflammable in common use.
"democratize" has been often abused: https://intage.us/articles/words/democratize/
Path of, so it's getting there
Via a temporary vanguard board composed of the most conscious and disciplined profit maximizers.
Lenin and the Bolsheviks were also committed to the path of fully democratic government. As soon as the people are ready. In the interim we'll make all the decisions.
They are committed, they didn't say they pushed yet. Or will ever.
Carcinisation in action:
No, this only happens if:
1) You're successful.
2) You mess up checks-and-balances at the beginning.
OpenAI did both.
Personally, I think at some point, the AGs ought to take over and push it back into a non-profit format. OAI undermines the concept of a non-profit.
With 2, the real problem is that approximately 0% of the OpenAI employees actually believed in the mission. Pretty much every single one of them signed the letter to the board demanding that if the company's existence ever comes into conflict with humanity's survival, the company's existence comes first.
That's the reality of every organization if it survives long enough.
Checks-and-balances need to be robust enough to survive bad people. Otherwise, they're not checks-and-balances.
One of the tricks is a broad range of diverse stakeholders with enforcement power. For example, if OpenAI does anything non-open, you'd want organizations like the FSF, CC, and similar to be represented on their board and to be able to enforce those rules in court.
Turns out the non profit structure wasn't very profitable
There's really nothing "open" about this company. If they want to be, then:
(1) be transparent about exactly which data was collected for the model
(2) release all the source code
If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
They would do this if their mission was what you wish it was. But it isn't, so they won't.
Arguments by semantics are always tiresome.
what exactly do you think the name OpenAI is supposed to evoke?
This restructuring is essentially a sophisticated maneuver toward wealth and power maximization shrouded in altruistic language.
Does anyone truly believe Musk had benevolent intentions? But before we even evaluate the substance of that claim, we must ask whether he has standing to make it. In his court filing, Musk uses the word "nonprofit" 111 times, yet fails to explain how reverting OpenAI to a nonprofit structure would save humanity, elevate the public interest, or mitigate AI’s risks. The legal brief offers no humanitarian roadmap, no governance proposal, and no evidence that Musk has the authority to dictate the trajectory of an organization he holds no equity in. It reads like a bait and switch — full of virtue-signaling, devoid of actionable virtue. And he never had a contract or agreement with OpenAI to keep it a non-profit.
Musk claimed fraud, but never asked for his money back in the brief. Could it be his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space to fund xAI's Grok?
Musk claimed he donated $100 million; later, in a CNBC interview, he said $50 million. TechCrunch suggests it was far less.
Speaking of humanitarian, how about the 600 lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his résumé in the coming year.
Nonprofits have far less regulation than publicly traded companies. For a public company, each quarterly filing is like a colonoscopy, with Sarbanes-Oxley rules and so on; nonprofits just file a tax statement. Did you know the Church of Scientology is a non-profit?
Replace Musk with "any billionaire."
He's a symptom of a problem. He's not actually the problem.
If you are a materialist, the laws of physics are the problem.
But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a toll on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
> If you are a materialist, the laws of physics are the problem.
I'm a citizen, the laws of politics are the problem.
> Musk is a complex figure
Hogwash. He's greedy. There's nothing complex about that.
> and he often exacts a toll on the people around him
Yea it's a one way transfer of wealth from them to him. The _literal_ definition of a "toll."
> When he goes into "demon mode"
When he decides to lie, cheat and steal? Why do you strain so hard to lionize this behavior?
> you don't want to be in his way.
Name a billionaire whose way you would _like_ to be in. Say Elon Musk literally stops existing tomorrow. A person whose name you don't currently know will become known and take his place.
His place needs to be removed. It's not a function of his "personality" or "particulars." That's just goofy "temporarily embarrassed billionaire" thinking.
> Why do you strain so hard to lionize this behavior?
> lionize: give a lot of public attention and approval to (someone); treat as a celebrity: modern athletes are lionized.
Where in my comment do I lionize Musk?
Please calm down. Please try to be charitable and curious rather than accusatory towards me.
> Where in my comment do I lionize Musk?
You attribute to personality what should be attributed to malice. You do this three times.
> Please calm down
I am perfectly calm.
> Please try to be charitable and curious rather than accusatory towards me.
In attempting to explain why my point of view has been misunderstood by you I also attempted to find a reason for it. I do not think my explanation makes you a bad person nor do I think you should be particularly confronted by it.
> In attempting to explain why my point of view has been misunderstood by you I also attempted to find a reason for it.
What have I misunderstood? Help me understand. What is the key point you want to make that you think I misunderstand?
>> (me) When he goes into "demon mode"
> When he decides to lie, cheat and steal? Why do you strain so hard to lionize this behavior?
I hope this is clear: I'm not defending Musk's actions. Above, I'm just using the phrase that Walter Isaacson uses: "demon mode". Have you read the book or watched an interview with Isaacson about it? The phrase is hardly flattering, and I certainly don't use it to lionize Musk. Is there some misunderstanding on this part?
>>>> (me) But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a toll on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
>> (me) Where in my comment do I lionize Musk?
> You attribute to personality what should be attributed to malice. You do this three times.
Please spell this out for me. Where are the three times I do this?
Also, let's step back. Is the core of this disagreement about trying to detect malice in Elon's head? Detecting malice is not easy. Malice may not even be present; many people rationalize actions in such a way so they feel like they are acting justly.
Even if we could detect "malice", wouldn't we want to assess what causes that malice? That's going to be tough to disentangle with him being on the Autism spectrum and also having various mental health struggles.
Along with most philosophers, I think free will (as traditionally understood) is an illusion. From my POV, attempting to blame Musk requires careful explanation. What do we mean? A short lapse of judgment? His willful actions? His intentions? His character? The overall condition of his brain? His upbringing? Which of these is Elon "in control of"? From the materialist POV, none.
From a social and legal POV, we usually draw lines somewhere. We don't want to defenestrate ethics or morality; we still have to find ways to live together. This requires careful thinking about justice: prevention, punishment, reintegration, etc. Overall, the focus shifts to policies that improve societal well-being. It doesn't help to pretend like people could have done otherwise given their situation. We _want_ people to behave better, so we should design systems to encourage that.
I dislike a huge part of what Musk has done, and I think more is likely to surface. Like we said earlier -- and I think we probably agree -- Musk is part of a system. Is he a cause or symptom? It depends on how you frame the problem.
[dead]
OpenAI is busy rearranging the chairs while their competitors surpass them.
Yup. Haven't used an OpenAI model for anything in 6+ months now, except to check the latest one and confirm that it is still hilariously behind Google/Anthropic.
what are your personal evals?
o3 still drives really well for me.
[dead]
Hmm, am I the only one who has been invited to participate in a “comparison between 2 ChatGPT versions”?
The newer version included sponsored products in its response. I thought that was quite effed up.
I've gotten those messages, but the products recommended in both versions were the same, down to the model number, so I don't think it's strictly product placement. The products I was looking at were old oscilloscopes.
I'm getting really tired of hearing about OpenAI "evolving".
Ok, but can you please not post unsubstantive comments to HN? We're looking for curious conversation here, and this is not that.
https://news.ycombinator.com/newsguidelines.html
It seems like there are other comments that have the same amount of substance as mine, yet it looks like mine was the only one that was flagged.
Quite possibly! Consistency in moderation is impossible [1]. We don't come close to seeing everything that gets posted here, and the explanation for most of these things is randomness (or the absence of time travel - https://news.ycombinator.com/item?id=43823271)
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
At the same time, though, we need you (<-- I don't mean you personally, but all commenters) to follow HN's rules regardless of what other commenters are doing.
Think of it like speeding tickets [2]. There are always lots of other drivers speeding just as bad (nay, worse) than you were, and yet it's always you who gets pulled over, right? Or at least it always feels that way.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[flagged]
Here's a critical summary:
Key Structure Changes:
- Abandoning the "capped profit" model (which limited investor returns) in favor of traditional equity structure
- Converting for-profit LLC to Public Benefit Corporation (PBC)
- Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
Was this AI generated?
[flagged]
[flagged]
Random question, is anyone else unable to see a ‘select all’ on their iPhone?
I was trying to put all the text into gpt4 to see what it thought, but the select all function is gone.
Some websites do that to protect their text IP, which would be crazy to me if that’s what they did considering how their ai is built. Ha