OkayPhysicist 4 months ago

I feel like the only two sides I ever hear about the current AI boom are either "Chat-GPT is AGI, the singularity is upon us, all embrace our new machine god" or "There is nothing new nor exciting about the current AI developments, and the whole thing is a massive waste of resources".

But the arguments for both sides consist of a combination of outright lies and gross hyperbole. LLMs are undeniably new, and cool as hell. We killed the Turing test! Computers are now dramatically better at understanding human language than they were 5 years ago. And we are seeing improvement, perhaps not at the same rate we were 5 years ago, but still at an impressive clip. It's far from impossible that, within the foreseeable future, we'll have language models that, at the very least, can reliably (as in, you don't need to be constantly checking their work) parse human language for interfacing with more traditional computation.

On the other hand, it's definitely not AGI yet, and a machine that can't consistently be trusted to do a job is of inherently limited utility. Companies investing in the space are definitely burning money on a moonshot.

  • enpera 4 months ago

    I don't know what you mean by killing the Turing test; ELIZA already passed the test back in the '60s.

    If the criterion is sounding natural like a human being when you talk to it, I hear there are actually better models than LLMs for that.

    And if an LLM really were able to parse language, things like hallucination would not happen.

    As a computer engineer on the fringe of this field, I am not inclined to trust the output of a general-purpose AI whose use case is still unknown over a powerful algorithm that reliably produces the same output for the same input.

    • OkayPhysicist 4 months ago

      I played with Eliza before the rise of modern text generation. It's neat, but it's abundantly clear you're talking to a robot. There was no chance that somebody would be fooled into holding a conversation with it. Likewise with Markov-chain based text generators: by the end of a paragraph, it's obviously not human. GPT-3 and beyond absolutely can fool someone who's not aware they're being tested, and frankly in limited conversations it can be difficult even in the non-blind case. That's Turing test passing.
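      For context, a word-level Markov chain just samples the next word from whatever followed the current word in its training text, which is why its output reads plausibly for a few words and then drifts into incoherence by the end of a paragraph. A minimal sketch (the corpus and function names here are illustrative, not from any real generator):

      ```python
      import random
      from collections import defaultdict

      def build_chain(text, order=1):
          """Map each word (or word tuple) to the words that follow it in the corpus."""
          words = text.split()
          chain = defaultdict(list)
          for i in range(len(words) - order):
              key = tuple(words[i:i + order])
              chain[key].append(words[i + order])
          return chain

      def generate(chain, length=20, seed=0):
          """Walk the chain: each step only looks at the last `order` words."""
          random.seed(seed)
          key = random.choice(list(chain))
          out = list(key)
          for _ in range(length):
              successors = chain.get(tuple(out[-len(key):]))
              if not successors:
                  break  # dead end: the last word never had a successor
              out.append(random.choice(successors))
          return " ".join(out)
      ```

      Because each step conditions only on the last word or two, local transitions look fine while the overall text has no global plan, which is the tell a reader picks up on.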

      I deliberately avoided the problem-solving/doing-work side of things in discussing modern AI, because that's an area where significant progress is still needed before it's useful. I completely agree that its abilities in that regard are, at present, being grossly overblown. But the ability to parse human language, decipher intent, and synthesize responses in human language very much is a new capability that modern LLMs are extremely good at, and one that will likely reach the reliability levels necessary for autonomous application in the very near future.

      A program that can parse human language perfectly absolutely could still hallucinate. When I ask an LLM a question and it makes up a patently false response, it accurately parsed what I asked of it. It just failed to synthesize a correct response. The parsing of human language, and the synthesis of information into human language, is, in and of itself, a powerful capability that we shouldn't overlook just because it's no longer science fiction.

  • tim333 4 months ago

    Also, Zitron doesn't really seem to distinguish between the AI tech, which is steadily progressing and will probably continue to, and the investment boom, which is probably a bit of a bubble. Quite likely there will be a financial pullback or crash where less money is spent, but development will go on.

    • rsynnott 4 months ago

      Well, yeah; that happened in the previous eight or so AI bubbles, going back to the 1950s. His stance seems to be primarily “this stuff isn’t very useful and is absurdly expensive”, which seems fair enough?

      • tim333 4 months ago

        Kinda, but I think "there is no AI revolution" is overdoing it. If he said the AI revolution is a bit rubbish and overpriced, I could go along with it. This time is different from the past so-called AI bubbles in that current hardware is around human level, whereas in the previous ones it was nowhere near. Also, I don't remember a major investment boom like this in the past. I think the Japanese put aside $850m for their "Fifth Generation project", which flopped, but nothing on the current scale.

        • rsynnott 4 months ago

          Eh, I mean you can pick and choose what you call an AI bubble, but the CV one in the early 2010s (remember when self-driving cars were going to be a thing any day now?) and the speech recognition one in the 90s, where MS was claiming that within years people would primarily interact with their computer by talking to it, were attainable with consumer hardware. They were just a bit shit.

          Not that either was completely useless (and nor were, say, expert systems before them), but they ended up squarely in the “yeah, that’s an occasionally useful feature” space, rather than being transformative.

          • tim333 4 months ago

            Fair enough but I still think the current thing is in a different category.

            As a kind of argument that way, you can check out Wait But Why's "The AI Revolution", which argues roughly the opposite of Zitron's "There Is No AI Revolution", especially if you scroll to the animation of a lake filling up about halfway down. Bear in mind the animation was done in 2013: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

          • OkayPhysicist 4 months ago

            Self driving cars are a thing, and they're awesome. I took one home from the bar 2 days ago. One of the very few "wow, the future is now" type innovations I've experienced.

lsy 4 months ago

It's hinted at in the article, but so far, it seems that 95%+ of use cases are satisfied by free-tier models, and will imminently be satisfied by local, open-weight models. It's not clear that people or businesses will ever be willing to pony up for the extra marginal accuracy when they can get most of the benefits for cheap or for free.

adamc 4 months ago

Mostly, the problem is that the compute makes the products expensive, and they haven't yet unveiled much that a significant number of people will pay enough for.

Eventually their runway will run out -- and, given their costs, that might be soon.

  • OkayPhysicist 4 months ago

    It's pretty obvious from OpenAI's corporate structure that they always knew they were a moonshot. If their products reached the point where they were reliable enough to fully offload human labor onto, the increase in productivity would be extremely valuable. At the level of reliability where a human still needs to check the machine's work, the utility is extremely limited.

    And I don't think there's anything wrong with that. Compared to most things that we burn resources on, at least the AI investment has produced something that doesn't unambiguously make the world a worse place.

gddgb 4 months ago

Much like crypto, it’s a thin veneer dusted on regular products. B2B will consume anything; it’s mostly social spending, like a fashion show: crypto out, AI in.

  • cainxinth 4 months ago

    I've never used crypto or blockchain for anything. But I've used LLMs every day for years now. They are extremely useful. For example, I was able to get a quick summary of this 10,000 word article:

    - OpenAI loses money on every interaction

    - Hallucinations are unsolved

    - Zitron questions the validity of OpenAI's user numbers

    - He also criticizes their lack of product differentiation

    - He points to the low adoption rates of AI products

    Those are valid criticisms, but that's too many words for what mostly amounts to stating well-known issues and then arguing, based on our current limited knowledge, that they likely won't be overcome. I'm not claiming the AI hype is real, but the doomsaying is overblown, too.