I feel like the only two sides I ever hear about the current AI boom are either "ChatGPT is AGI, the singularity is upon us, all embrace our new machine god" or "There is nothing new or exciting about the current AI developments, and the whole thing is a massive waste of resources".
But the arguments for both sides consist of a combination of outright lies and gross hyperbole. LLMs are undeniably new, and cool as hell. We killed the Turing test! Computers are now dramatically better at understanding human language than they were 5 years ago. And we are still seeing improvement, perhaps not at the rate of 5 years ago, but still at an impressive clip. It's far from impossible that, within the foreseeable future, we'll have language models that can at the very least reliably (as in, you don't need to be constantly checking their work) parse human language for interfacing with more traditional computation.
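To make the "parse human language for traditional computation" idea concrete, here's a minimal sketch of what that could look like. It assumes the OpenAI Python client and a made-up meeting-request schema; nothing here is specific to any vendor's roadmap, and the output still needs the checking discussed below.

```python
# Sketch: use an LLM purely as a natural-language parser, then hand the
# structured result to ordinary, deterministic code.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
# the JSON schema below is a made-up example.
import json
from openai import OpenAI

client = OpenAI()

def parse_request(text: str) -> dict:
    """Ask the model to turn free-form text into one fixed JSON shape."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                'Extract {"action": str, "date": str, "attendees": [str]} '
                "from the user's message. Reply with JSON only.")},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# From here on it's traditional computation: validate the fields,
# call a calendar API, etc. (downstream step left out of this sketch).
req = parse_request("Set up a call with Dana and Lee next Tuesday afternoon.")
print(req["action"], req["attendees"])
```

The catch is the "reliably" part: until you can skip checking the output, the deterministic half of this pipeline has to treat the parsed JSON as untrusted input.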
On the other hand, it's definitely not AGI yet, and a machine that can't consistently be trusted to do a job is of inherently limited utility. Companies investing in the space are definitely burning money on a moonshot.
I don't know what you mean by killing the Turing test; ELIZA already passed it back in the 60s.
And if the goal is just sounding natural like a human when you talk to it, I hear there are actually better models than LLMs.
If an LLM could really parse language, things like hallucination wouldn't happen.
As a computer engineer on the fringes of this field, I am not inclined to trust the output of a general-purpose AI whose use case is still unknown over a well-understood algorithm that reliably produces the same output for the same input.
It's hinted at in the article, but so far, it seems that 95%+ of use cases are satisfied by free-tier models, and will imminently be satisfied by local, open-weight models. It's not clear that people or businesses will ever be willing to pony up for the extra marginal accuracy when they can get most of the benefits for cheap or for free.
Mostly, the problem is that the compute makes the products expensive, and they haven't yet unveiled much that a significant number of people will pay enough for.
Eventually their runway will run out -- and, given their costs, that might be soon.
It's pretty obvious from OpenAI's corporate structure that they always knew they were a moonshot. If their products reached the point where they were reliable enough to fully offload human labor onto, the increase in productivity would be extremely valuable. At the level of reliability where a human still needs to check the machine's work, the utility is extremely limited.
And I don't think there's anything wrong with that. Compared to most things that we burn resources on, at least the AI investment has produced something that doesn't unambiguously make the world a worse place.
Much like crypto, it's a thin veneer dusted on regular products. B2B will consume anything; it's mostly social spending, like a fashion show: crypto out, AI in.
I've never used crypto or blockchain for anything. But I've used LLMs every day for years now. They are extremely useful. For example, I was able to get a quick summary of this 10,000-word article (roughly as sketched below this list):
- OpenAI loses money on every interaction
- Hallucinations are unsolved
- Zitron questions the validity of OpenAI's user numbers
- He also criticizes their lack of product differentiation
- He points to the low adoption rates of AI products
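For the curious, here's roughly how you can do that kind of summary yourself. This is just an illustrative sketch, assuming the OpenAI Python client and the article text saved locally as article.txt; any chat model works the same way.

```python
# Bare-bones "summarize this article" call.
# Assumes the OpenAI Python client, OPENAI_API_KEY in the environment,
# and the article text saved as article.txt.
from openai import OpenAI

client = OpenAI()

with open("article.txt", encoding="utf-8") as f:
    article = f.read()  # ~10,000 words fits comfortably in a modern context window

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": (
            "Summarize the key claims of this article as five bullet points:\n\n"
            + article)},
    ],
)
print(resp.choices[0].message.content)
```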
Those are valid criticisms, but that's too many words for what mostly amounts to stating well-known issues and then arguing, based on our current limited knowledge, that they likely won't be overcome. I'm not claiming the AI hype is real, but the doomsaying is overblown, too.