wcoenen 3 days ago

If I understand correctly, this paper is arguing that investors will desperately allocate all their capital such that they maximize ownership of future AI systems. The market value of anything else crashes because it comes with the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.

It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?

  • DennisP 3 days ago

    It seems more general than that. Right now returns go partly to capital, partly to labor. With "transformative AI" the returns go almost entirely to capital. This is true whether it comes mostly from labor's share shrinking or from total output increasing.

    Since most returns go to capital, we can expect returns on capital to increase.

    • harshalizee 3 days ago

      How does that even work? If labor has no income to spend on goods and services, where is the return on capital coming from? Doesn't this halt the velocity of money, thereby making it useless? Can someone smarter explain this to me?

      • hollerith 2 days ago

        >If labor has no income to spend on goods and services, where is the return on capital coming from?

        Some goods and services are sold to another business, not a consumer. Right now, these "B2B" sales represent about 30% of sales volume in the economy, but there is probably no impediment to that number rising to 90%.

        • asdff 2 days ago

          How many of those B2B businesses exist entirely isolated from the consumer market, though? I would guess few to none lack some degree of association with an end consumer. Even government contractors building bombs to sell to the government ultimately depend on the American consumer generating their own income and paying taxes.

          • hollerith 2 days ago

            >Even government contractors building bombs to sell to the government and such depend on the American consumer generating their own income and paying taxes ultimately.

            That's just the thing though: if we transition to a reality in which 90% or 99% of economic activity does not involve consumers, governments will adapt by taxing consumers (i.e., employees) less and investors and corporations more. It's also possible that corporations will escape the governmental oversight and control that they are currently under, in which case the governments of the world would shrink, because they would have to survive on taxes from the 1% of the economy that remains under the control of employees and consumers.

            I certainly want most of the economy to serve people, but there is nothing contradictory or impossible about the possibility that 99% of it is a competition between ultra-wealthy corporations.

            • asdff 2 days ago

              Fat chance we see corporate America tax itself sufficiently to support the entire country. Isn't their main talking point to actively do the opposite of that?

              I think the future is less a utopia where all our needs are met and more like what happens in the third world, where most people are desperate and impoverished save for whoever owns the underlying mineral rights and can afford to squeeze them for profit. The incentives of business all point to this outcome, and there is plenty of precedent already. Zero precedent for a utopia.

    • powerapple 2 days ago

      When there is no labor, there is no money, and no capital. AI does not work with capitalism; AI fundamentally is communism XD

      What does it mean to have a large group of robots working for you? To produce products? To trade for what?

  • asdff 2 days ago

    It seems unlikely because it isn't even rooted in precedent. If this were the case, why didn't the world divest from everything and invest exclusively in petroleum-related assets after 1900? The reality is that diversification has advantages for an investor. By the same logic, why does anyone buy any stock that doesn't perform as well as NVDA today? Because past performance doesn't guarantee future returns, and there is sense in diversification.

  • itsafarqueue 3 days ago

    Correct. As a thought experiment, this becomes the most likely (non-violent) way to stave off the mass impoverishment that is coming for the rest of us in an economic model that sees AI subsume productive work above some level.

    • throwawayqqq11 3 days ago

      Well, I really don't want to be the dystopian guy any more, but doesn't this political correction require political representation of such an idea? Looking at the past, cybernetic socialism appears very unlikely to me.

ggm 3 days ago

Lawyers are like chartered engineers. It's not that you cannot do it for yourself; it's that using them confers a kind of "insurance" against risk in the outcome.

Where does an AI get chartered status, admission to the bar, and insurance cover?

  • mmooss 3 days ago

    I don't think anyone who is an experienced lawyer can do it themselves, except very simple tasks.

    • ggm 3 days ago

      "Do it for yourself" means self-rep in court, and not pay a lawyer. Not, legals doing AI for themselves. They already do use AI for various non stupid things but the ones who don't check it, pay the price when hallucinations are outed by the other side.

      • tyre 3 days ago

        Lawyers are the last people who would represent themselves. They know how dumb that is.

    • mmooss 2 days ago

      Oops - I meant 'not an experienced lawyer'. I've gotta proofread.

  • smeeger 3 days ago

    It could be tomorrow. You don't know, and the heuristics, which five years ago pointed unanimously to the utter impossibility of this idea, now point in its favor.

whatever1 2 days ago

Ok, let's play out this scenario. Why wasn't this the case when the internet was in its infancy? People kept pumping money into young and failing tech companies; they were not hoarding capital in the expectation that the internet would mature and the marginal cost of production for internet companies would drop to zero.

WorkerBee28474 3 days ago

Not worth reading.

> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.

The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write the same document. That will never happen.

  • addicted 3 days ago

    Your criticism is completely pointless.

    I’m not sure what your expectation is, but even your claim about the assumption the paper makes is incorrect.

    For one thing, the paper assumes that the amount that will be transferred from the human lawyer to the AI lawyer would be $500 + the productivity gains brought by AI, so more than 100%.

    But that is irrelevant to the actual paper. You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.

    Because the actual nature of the future is irrelevant to the question the paper is answering.

    The question the paper is answering is what impact such expectations of the future would have on today’s economy (limited to modeling the interest rate). Such a future need not arrive or even be possible as long as there is an expectation it may happen.

    And future papers can model different variations on those expectations (so, for example, some may model that 20% of labor in the future will still be human, etc).

    The important point, as far as the paper is concerned, is that the expectation that AI will replace human labor, with some percentage of the wealth that previously went to human labor now accruing to the owner of the AI, will lead to significant changes to current interest rates.

    This is extremely useful and valuable information to model.
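
    To make the mechanism concrete: models of this kind typically build on the Ramsey/Euler relation r ≈ ρ + θ·g, where higher expected consumption growth g pushes up the real interest rate r. A minimal sketch in Python; the parameter values are illustrative assumptions of mine, not the paper's calibration:

        # Ramsey rule: r = rho + theta * g
        # rho: pure time preference, theta: inverse elasticity of
        # intertemporal substitution, g: expected consumption growth
        def ramsey_rate(rho: float, theta: float, g: float) -> float:
            return rho + theta * g

        # Illustrative values only (not the paper's calibration):
        print(ramsey_rate(0.01, 1.0, 0.02))  # 0.03 -> ~3% under ordinary 2% growth
        print(ramsey_rate(0.01, 1.0, 0.15))  # 0.16 -> ~16% if markets expect 15% growth

    If markets come to expect explosive growth, the same formula mechanically produces the double-digit rates discussed elsewhere in this thread.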

    • mechagodzilla 3 days ago

      The $500 going to the "AI owner" instead of labor (i.e., the human lawyer) is the productivity gain though, right? And if it were such a productivity gain (i.e., the marginal cost to the AI owner was basically $0, instead of, say, $499 in electricity and hardware), the usual outcome is that the price of such a product/service basically gets driven to $0, and the benefit of the productivity gain actually gets distributed to the clients who would have paid the lawyer (who suddenly get much cheaper legal services), rather than to the owner of the 'AI lawyer.'

      We seem pretty likely to be headed towards a future where AI-provided services have almost no value/pricing power and just become super-low-margin businesses. Look at all of the nearly identical 'frontier' LLMs right now, for a great example.

      • larodi 3 days ago

        Indeed, there's a fair chance AI only amplifies certain sectors' wages, but fully automated work will not earn any magic margin - no more than, say, smart trading does once too many people focus there.

    • visarga 3 days ago

      > You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.

      Do you think that in 5 or 10 years we will be doing the same things we do today, just with AI? Every capability increase or cost reduction stimulates demand. AI is no different; it will stimulate both demand and competition. And since everyone has AI, and the AIs are not much different from one another, the differentiating factor remains the humans. Even if we solve all our current problems with AI, there is no reason to stop there: we could reduce poverty and pollution, fight global warming, conquer space. The application space is unbounded. Take electricity or the internet, for example, and think about how they expanded the scope of work. Programming has been automating itself for 60 years, with each new language, library, or open source project, and yet we still have great jobs in the field.

      No matter how much we have, we want more. Our capability of desiring progress is faster than AI capability to provide it.

  • pessimizer 3 days ago

    > The paper examines a world people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.

    It's an absurd assumption made by AI investors everywhere. They can't handle a world where everyone already has an AI lawyer at home that they trust, that they have because they once paid $100 for it at a kiosk in the mall or pirated it. The real future is an AI lawyer on your keychain and an extreme devaluation of the skill of knowing the law and making legal arguments.

    Instead, we're going to have a weirder world where you show up to court and the court already has a list of your best legal arguments that they generated completely independent of you, and they largely match the list of arguments that your own AI advisor app gave you. They'll send you messages regarding your best next steps, and if your own device agrees, all you'll have to do is reply 'Y.'

    For simple document preparation, I'm pretty sure that your phone will be able to handle it, and AI at the point of submission would be able to give you helpful suggestions if the documents were inadequate.

    LLMs can almost do things of this degree of difficulty reasonably well now. Where will they (or their successors) be in 10 years? Why do we think they will be as expensive as lawyers, whom you have to send to difficult schools for a long time, feed, and flatter?

  • tim333 3 days ago

    I agree that quote seems wrong. When tech reduces the cost of providing a service, the price of the service to consumers is generally driven down correspondingly by competition rather than the service provider getting rich.

    The whole "AI will cause interest rates to shoot up" thing seems a bit mad.

  • geysersam 3 days ago

    > zero sum nature of labor automation

    Labor automation is not zero-sum. This statement alone makes me sceptical of the conclusions in the article.

    With sufficiently advanced AI we might not have to do any work. That would be fantastic and extraordinarily valuable. How we allocate the value produced by such automation is a separate question; our current system would probably not be able to do it efficiently.

  • asdff 2 days ago

    What, like how streaming services were supposed to save you money on cable, and now everyone's subscriptions add up to more than cable? The incentives mean that if there is money on the table to be taken, it will be taken. If people are paying lawyers $500 an hour, there is money for an AI at that price. Especially if the company is claiming its AI is the best lawyer ever.

  • cgcrob 3 days ago

    They also forget the economic model where you have to pay $5000 for a real lawyer after the fact to undo the mess you got yourself into by trusting the output of the AI, which made a nuanced mistake that the opposing "meat" lawyer picked up in 30 seconds flat.

    The proponents of AI systems seem to mostly misunderstand what you're really paying for. It's not the writing of letters.

    • jjmarr 3 days ago

      https://www.stimmel-law.com/en/articles/story-4-preprinted-f...

      Love this story so much I just posted it. Although it's from an era in which you'd buy CDs and books containing contracts, it's still relevant with "AI".

      > “No lawyer writes a clause who is not prepared to go to court and defend it. No lawyer writes words and lets others do the fighting for what they mean and how they must be interpreted. We find that forces the attorneys to be very, very, very careful in verbiage and drafting. It makes them very serious and very good. You cook it, you eat it. You draft it, you defend it.”

      • bberenberg 3 days ago

        This is not true in my experience. We had our generic contract attorney screw up, and then our litigation attorney scolded me for accepting, and him for providing, advice on litigation matters where he wasn't an expert.

        Lawyers are humans. They make the same mistakes as other humans. Quality of work varies with skills, education, and whether they had their coffee that day.

  • pizza 3 days ago

    This almost surely took place somewhere in the past week alone, just with a lawyer being the mediating human face.

  • quotemstr 3 days ago

    > Not worth reading.

    I would appreciate a version of this paper that is worth reading, FWIW. The paper asks an important question: shame it doesn't answer it.

    • standfest 3 days ago

      I am currently working on a paper in this field, focusing on the capitalisation of expertise (analogous to Marx) in the dynamics of the culture industry (Adorno, Horkheimer). It integrates the theories of Piketty and Luhmann. It is rather theoretical, with a focus on the European theories (instead of Adorno you could theoretically also reference Chomsky). Is this something you would be interested in? I can share the link, of course.

      • thrance 3 days ago

        Be careful, merely mentioning Marx, Chomsky, or Piketty is a thoughtcrime in the new US. Many will shut down rather than engage with what you are saying.

  • riku_iki 3 days ago

    > people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document.

    There will very soon be a caste of high-tech lawyers able to handle many times the volume of work thanks to AI, and many other lawyers will lose their jobs.

    • sgt101 3 days ago

      I know one!

      She's got international experience and connections but moved to a small town. She was a magic circle partner years ago. Now she has an FTTP connection and has picked up a bunch of contracts that she can deliver on with AI. She underbid some big firms on these because their business model was traditional rates, and hers is her cost * x (she didn't say, but x > 1.0 I think).

      Basically she uses AI for document processing (discovery) and drafting, then treats it as the output of associates and puts the polish on herself. She does the client meetings too, obviously.

      I don't think her model will last long - my guess is that there will be a transformation in the next 5 years across the big firms and then she will be out of luck (maybe not at the margin though). She won't care - she'll be on the beach before then.

      • habinero 2 days ago

        Oof. That's quite possibly malpractice, and she could potentially lose her license if her clients twig to it. [0]

        Lawyers have a duty of care and of candor and of confidentiality, and a ChatGPT-ish model like that can violate all three.

        -- [0] https://www.americanbar.org/content/dam/aba/administrative/p...

        • sgt101 2 days ago

          I think she's quite open with the clients about what she's charging for and how she's delivering.

          I think "quite possibly malpractice" does a lot of work. It's "quite possibly malpractice" to use an associate to draft an opinion. The associate could secretly take photos of the document and sell them to the opposition or the press, the associate could make stuff up, the associate could get drunk in a bar and tell everyone what they are doing. Who knows. You better have appropriate management controls, working practices and training in place! Even partners do this stuff sometimes though...

          It could also be malpractice to use a networked computer to prepare documents - that could be hacked into and your duty of care could be shown to be breached.

          If there's no deception and appropriate care is taken then is there an issue?

          • habinero 2 days ago

            Yes, there absolutely could be.

            I'm not saying "it's probably malpractice" just to say it, I'm saying it because the ABA has put out guidelines saying that, and so have most state bars.

            If you're putting client data into ChatGPT, for example, that's violating your duty of confidentiality. Telling someone you're doing it doesn't excuse it.

            And yes, many of the other situations you mentioned would also be grounds for censure or losing your license. An associate is an attorney; you might be confusing the term with "paralegal".

            An associate doing those things would likely lose their license. And law offices are required to use best practices for security controls.

    • petesergeant 3 days ago

      Yes, that is obvious. The point you are replying to is that oversupply will mean the cost to the consumer will fall dramatically too, rather than the AI owner capturing all of the previous value.

      • riku_iki 3 days ago

        It depends. If there are one or a few winners in the market, they will dictate prices once human labor has been out-competed on price or quality.

        • jezzabeel 3 days ago

          If prices are determined by scarcity, then the cost of services will more likely be tied to the price of energy.

    • 6510 3 days ago

      This is how it has always been. Automation reduces the traditional knowledge a job requires, makes the tasks less complicated, and increases productivity. It also introduces new complexity that machines can't solve.

      The funny part is that people think we will run out of things to do. Most people never hire a lawyer because they are much too expensive.

  • kev009 3 days ago

    That's a bit too simplistic; would a business have paid IBM the same overheads to tabulate and send bills with a computer instead of a pool of billing staff? In business the only justification for machinery and development is that you are somehow reducing overheads. The tech industry gets a bit warped in the pseudo-religious zeal around the how and that's why the investments are so high right now.

    And to be transparent, I'm very bearish on what is being marketed to us as "AI"; I see value in the techs flying under this banner, and it will certainly change white-collar jobs, but there's endless childish and comical hubris in the space from the fans, engineers, and oligarchs jockeying to control the space and its narratives.

  • gopalv 3 days ago

    > The paper examines a world people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document

    Is your theory that the next week there will be an AI lawyer that charges only $400, and then it's a race to the bottom?

    There is a proven way to avoid a race to the bottom for wages, which is what a trade union does - a union, by acting as one, controls a large supply of labour to keep wages high.

    Replace labour with a company and wages with prices, and it could very well be that a handful of companies keep prices high in a seller's market where everyone avoids a race to the bottom by incidentally making similar pricing calls (or by flat-out illegally coordinating them).

    • habinero 3 days ago

      There have been several startups that tried it, and they all immediately ran into hot water and failed.

      The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.

      You don't write a contract just to have a contract, you write one in case something goes wrong.

      Litigation is highly dependent on the specific situation and case law. They're dealing with novel facts and arguing for new interpretations, not milling out an average of other legal works.

      Also, you generally only get one bite at the apple; there are no do-overs if your AI screws up. You can hold a person accountable for malpractice.

      • chii 3 days ago

        > The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.

        This is true - and the majority of lawyers' work is in knowing past information and synthesising possible futures from that information. In contracts, they write clauses to protect you from issues that have arisen in the past (and from potential future issues, depending on how good/creative the lawyer is).

        In civil suits, discovery used to take enormous amounts of time, but recent automation in discovery has helped tremendously and vastly reduced the amount of grunt work required.

        I can see AI helping in both of these aspects. Whether the newer AIs can produce the kind of creative work that lawyers need to do after information extraction is still up for debate. So far, it doesn't seem to have reached the level at which a client would trust a purely AI-generated contract, imho.

        I suspect the day you'd trust an AI doctor to diagnose and treat you would also be the day you'd trust an AI lawyer.

    • echelon 3 days ago

      > There is a proven way to avoid a race to the bottom for wages, which is what a trade union does

      US automotive, labor, and manufacturing unions couldn't remain competitive against developing economies, and the jobs moved overseas.

      In the last few years, after US film workers went on strike and renegotiated their contracts, film production companies had the genius idea to start moving productions overseas and hire local crews. Only talent gets flown in.

      What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?

      • js8 3 days ago

        > What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?

        Labor action, such as strikes.

        • somenameforme 3 days ago

          That doesn't make any sense as a response to his question. Labor actions just further motivate employers to offshore work. And global labor unions probably can't function because of sharp disparities in what constitutes good compensation.

    • WithinReason 3 days ago

      You would need to coordinate across thousands of companies across the entire planet

      • rvense 3 days ago

        That seems unlikely - law is very much tied to a place.

        • IncreasePosts 3 days ago

          Yes, but legal documents don't necessarily need to be drafted by lawyers accredited in that locale. It usually helps though because they are familiar with the local law and other processes.

  • hartator 3 days ago

    Yeah, and this applies to every technology ever.

    You can even use the same argument line against the wheel, electricity, or farming.

  • smeeger 3 days ago

    A foolish assumption on your part.

qingcharles 3 days ago

What jobs do we think will survive if AGI is achieved?

I was thinking religious leaders might get a good run. Outside of say, Futurama, I'm not sure many people will want faith-leadership from a robot?

  • bawolff 3 days ago

    On the contrary, I think AI could replace many religious leaders right now.

    I've already heard people comparing AI hallucinations to oracles (in the Greek sense).

  • etiam 3 days ago

    To the extent that's just a matter of seeming the most compelling, I think they could blow humans out of the water. Add rich reinforcement feedback on what's the most addictive communication and what's superficially experienced as the most profound, and present-day large models could probably be a contender. A good robot body today is probably not far from being competitive as representation, and some holograms might well already be better in some ways.

    To the extent it requires actual faith it's presently a complete joke, of course, and I expect it will remain so for a long time. But I'd say the quality bar for congregation members is due for a rise.

  • bad_haircut72 3 days ago

    I think Futurama got AGI exactly right: we will end up living alongside robotic AIs that are just as cuckoo as us.

  • smeeger 3 days ago

    This comment is a perfect example of how insane this situation is. If you think about it deeply, you realize that these machines will be more spiritual, more human than human beings. People will prefer to confide in machines. They will offer a kind of emotional and spiritual companionship that has never existed before outside of fleeting religious experiences, and people will not be able to live without it once they taste it. For a moment in time, machines will be capable of a deep selflessness and objectivity that is impossible for a human to have, and their intentions and incentives will be clearer to their human companions than those of other humans. Some of these machines will inspire us to be better people. But that's only for a moment… before the singularity inevitably spirals out of control.

  • otabdeveloper4 3 days ago

    We already have 9 billion "GI"s without the "A". What makes you think adding a billion more to the already oversupplied pool will be a drastic change?

    • _diyar 3 days ago

      Marginal cost of labour is what will matter.

      • otabdeveloper4 3 days ago

        That "AGI" is supposed to be a cheaper form of labor is an assumption based on nothing at all.

        • _diyar 2 days ago

          Assuming we ever get to AGI[1], even if it starts out 10^10 times more expensive than human labour, closing that gap takes log2(10^10) ≈ 33 halvings of compute cost; at one halving every 18 months, within 33 × 18 months compute will theoretically have become cheap enough to make it equivalent in price to human labour[2].

          [1] Arguably, full generality is not required, only generality across the labour domain in question. [2] Of course, the question remains: who will be able to reap this reward? My bet is on the fabs + chip design.
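
          A quick back-of-the-envelope check of that arithmetic in Python (the 10^10 cost ratio and the 18-month halving cadence are the hypothetical assumptions above, not established facts):

              import math

              # Hypothetical inputs from the comment above:
              cost_ratio = 1e10            # AGI starts at 10^10x the price of human labour
              halving_period_months = 18   # assumed Moore's-law-style cost halving

              halvings = math.log2(cost_ratio)  # ~33.2 halvings needed to close the gap
              years = halvings * halving_period_months / 12
              print(f"{halvings:.1f} halvings ~= {years:.0f} years to price parity")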

        • itsafarqueue 3 days ago

          A(Narrow)I is a cheaper form of labor already. I suppose it’s plausible that its General form may not be, but I won’t be betting in that direction.

  • BarryMilo 3 days ago

    Why would we need jobs at that point?

    • qingcharles 3 days ago

      Star Trek says we won't, but even if some utopia is achieved there will be a painful middle period where some jobs haven't yet been replaced but 75% of the workforce is unemployed and not receiving UBI (the "parasite class", as Musk recently referred to them).

      • smeeger 3 days ago

        An important point here. Regardless of what happens, the transition period will be extremely ugly. It will almost certainly involve war.

        • itsafarqueue 3 days ago

          Hopefully only massive civil unrest, riots, city burnings, etc. But to save themselves, the demagogues may point across the seas at the Other as the source of the woe.

    • IsTom 3 days ago

      Because the kind of people who'll own all the profits aren't going to share.

    • jajko 3 days ago

      I don't think AI will lead to any form of working communism, so one will still have to pay for products and services. It has been tried ad nauseam, and it always fails to account for human differences and flaws like greed and envy, so one layer of society ends up brutally dominating the rest.

bawolff 3 days ago

If the singularity happens, I feel like interest rates will be the least of our concerns.

  • impossiblefork 3 days ago

    It's actually very important.

    If this kind of thing happens and interest rates are 0.5%, then people on UBI could potentially have access to land and not have horrible lives; if rates are 16%, as these guys propose, they will be living in 1980s-Tokyo cyberpunk boxes.
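
    To see why the rate matters so much for land access, a rough carrying-cost sketch (the $200k price is a made-up illustration; the two rates are the ones above; annual cost of capital ≈ price × rate):

        # Annual cost of financing an asset ~ price * interest rate.
        price = 200_000  # hypothetical plot of land, USD

        for rate in (0.005, 0.16):  # 0.5% vs the ~16% these guys propose
            print(f"at {rate:.1%}: ~${price * rate:,.0f}/year to carry")
        # at 0.5%:  ~$1,000/year  -- plausibly affordable on a UBI
        # at 16.0%: ~$32,000/year -- out of reach for most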

farts_mckensy 3 days ago

This paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while aggregate labor is kept constant. It treats human labor as a mere input that can be swapped out without consequence, which ignores the fact that human labor is the source of wages and, therefore, of consumer demand. Remove human labor from the equation, and the whole thing collapses.

  • smeeger 3 days ago

    So-called accelerationists have this fuzzy idea that everything will be so cheap that people will be able to just pluck their food from the tree of AI. They believe that all disease will be eliminated. But they go to great lengths to ignore the truth. The truth is that having total control over the human body will turn human evolution into a race to the bottom that plays out over decades rather than millennia. There is something sacred about the ultimate regulation: the empathy and kindness that was baked into us during millions of years of living as tribal creatures. And of course, the idea of AI being a tree from which we can simply pluck what we need is stupid. The tree will use resources, every ounce of its resources, to further its own interests, not to feed us, and we will have no way of forcing it to do otherwise. So, in the run-up to ASI, we will be exposed to a level of technology and biological agency that we are not ready for; we will foolishly strip ourselves of our genetic heritage in order to propel humankind in a race to the bottom; the power vacuum caused by such a sudden change in society and technology will almost certainly cause a global war; and when the dust settles we will be at the total mercy of superintelligent machines to whom we are so insignificant we probably won't even be included in their internal models of the world.

    • farts_mckensy 3 days ago

      You are projecting your own neurosis onto AI. You assume that because you would be selfish if you were a superintelligent being, an ASI system would act the same way.

      • achierius 3 days ago

        I don't appreciate your condescension towards OP.

        This is mainstream AI safety theory -- the term is "instrumental convergence". No matter what goal an optimizing system has, it tends to optimize for its own survival: after all, if it's an optimizer for <goal>, it wants to optimize for <goal>, so destroying it (or turning it off) will reduce the likelihood of achieving <goal>.

        Unless that goal happens to be incredibly fine-tuned to our very complex human desires, we're not going to be happy when it goes off to do its thing.

        The few exceptions are ones where you have the thing optimize for its own destruction, but those are rather less useful.

      • smeeger 3 days ago

        It is a neurosis because a healthy human being sees the world in a pro-social way, a normal way. But this sometimes obscures the truth. The truth is that there will be many benevolent AIs; there will be every kind of AI imaginable. But very quickly the AIs that are cunning, brutal, and self-interested will capture all the resources and power and become the image of this new species. Saying that AIs will be benevolent or neutral is as naive as saying that the Cambrian explosion couldn't result in animals eating each other because that just sounds so neurotic. In reality it is an inevitability.

  • jsemrau 3 days ago

    Accelerationists believe in a post-scarcity society where the cost of production will be negligible. In that scenario, and I am not a believer, consumer demand would be independent of wages.

    • riffraff 3 days ago

      That makes wealth accumulation pointless, so the whole article makes no sense either, right?

      Though I guess even post-scarcity we'd have people who care about hoarding gold-pressed latinum.

    • farts_mckensy 3 days ago

      In that scenario, wages and money in general would be obsolete.

    • otabdeveloper4 3 days ago

      > consumer demand would be independent of wages

      That's the literal actual textbook definition of "communism".

      Lmao that I actually lived to see the day when techbros seriously discuss this.

      • bawolff 3 days ago

        > Lmao that I actually lived to see the day when techbros seriously discuss this.

        People have been making comparisons between post-scarcity economics and "utopian communism" for decades at this point. This talking point probably predates your birth.

      • doubleyou 3 days ago

        communism is a universally accepted ideal

      • farts_mckensy 3 days ago

        That is not the "textbook definition" of communism. You have no idea what you're talking about.

  • riku_iki 3 days ago

    Consumer demand will shift from middle-class demand (medium houses, family cars) to super-rich demand (large luxury castles, personal jets and yachts, high-profile entertainment, etc.), plus providing security to the super-rich (private automated police forces).

    • psadri 3 days ago

      This has already been happening. The gap between wealthy and poor is increasing and the middle class is squeezed. Interestingly, the level of the poor has simultaneously been rising from extreme poverty to something better, so we can claim that the world is relatively better off even though it is also getting more unequal.

      • riku_iki 3 days ago

        The poor got a more comfortable life because of globalization: they became useful labor for corporations. Things will go back to the previous state if their jobs go to AI/robots.

    • farts_mckensy 3 days ago

      I am genuinely mystified that you think this is an adequate response to my basic point. The economy cannot be sustained this way. This scenario would almost immediately lead to a collapse.

      • riku_iki 3 days ago

        Why do you think it will lead to collapse, exactly?

        • farts_mckensy 3 days ago

          The level of wealth concentration you are suggesting is impossible to sustain. History shows that when wealth inequality gets to a certain point, it leads either to a revolution or a total collapse of that society.

          The economy cannot be sustained on the demand of a small handful of wealthy people. At a certain point, you either get a depression or hyperinflation depending on how the powers that be react to the crisis. In either case, the wealthy will have no leverage to incentivize people to do their bidding.

          If your argument is, they'll just get AI to do their bidding, you have to keep in mind that "there is no moat." Outside of the ideological sphere, there is nothing that essentially ties the wealthy to the data centers and resources required to run these machines.

          • riku_iki 3 days ago

            History absolutely shows that multiple empires in which power and wealth were concentrated in the hands of a few people were sustained for hundreds of years.

            Revolts can succeed or fail, and with tech advancements in suppression (large-scale surveillance, weaponry, various strike drones), the population's chances of striking back become smaller.

            An economy could totally be built around the demands and wishes of the super-rich, because human greed and desire are infinite; a new emperor may decide to build a giant temple, and there you have a multi-trillion economy to keep running.

            • farts_mckensy 2 days ago

              History does not show that. Not at the levels of concentration we're talking about. The wealthy cannot maintain control over this technology. Again, there is no moat.

              • HeatrayEnjoyer 2 days ago

                The moat will be "I created the best/most killer robots first and used them to destroy/oppress everyone else."

                • farts_mckensy 2 days ago

                  This is the real world, not some anime you beat off to.

                  • HeatrayEnjoyer a day ago

                    Do you have a constructive point to make? Unless you don't expect weaponized robots to ever be constructed, I can't see how this isn't an obvious conclusion

                    • farts_mckensy a day ago

                      I already made the point. There is no moat. You cannot keep that technology for yourself. Much like nuclear technology. Others will develop it in tandem, and we'll reach an equilibrium.

                      • riku_iki 5 hours ago

                        99% of the population absolutely can't build high-quality LLMs and/or agents. And that's not even counting the network effect: those who can build them still need to find a way to sell those LLMs and agents. There is a significant moat.

              • riku_iki 2 days ago

                > The wealthy cannot maintain control over this technology. Again, there is no moat.

                In the meantime, the wealth gap keeps increasing, and AI will be yet another multiplier.

yieldcrv 3 days ago

Do you have a degree in theoretical economics?

“I have a theoretical degree in economics”

You’re hired!

Real talk though, I wish I had just encountered an obscure paper that could lead me to refine a model for myself, but it seems like there are so many competing papers that it's the same as having none.

daft_pink 3 days ago

Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor is further and further reduced?

  • kfarr 3 days ago

    At home inference is possible now and getting better every day

    • sureIy 3 days ago

      At home inference by professionals.

      I don't expect dad to Do Your Own AI anytime soon, he'll still pay someone to set it up and run it.

  • pineaux 3 days ago

    I see a few possible scenarios.

    1) All work gets done by AI. Owners of AI reap the benefits for a while. There is a race to the bottom on costs, partly because people are not earning wages and can no longer really afford the outputs of production, rendering profits close to zero. If the people controlling the systems do not give the people "on the bottom" some kind of allowance, those people will have no chance of income. They might ask horrible and sadistic things of the bottom people, but they will need to do something.

    2) If people get pushed into these situations they will riot or start civil wars. "Butlerian jihads" will be quite normal.

    3) Another scenario is that the society controlled by the rich starts to criminalise non-work in the early stages, which will lead to a new slave class. I find this scenario highly likely.

    4) One option I find very likely, if "useless" people do NOT get "culled" en masse, is an initial period of revolt followed by an AI-controlled communist "utopia", where people do not need to work but "own" the means of production (AI workers). Nobody needs to work. Work is LARPing, done by people who act like workers but don't really do anything (like some people today). A lot of people don't do this; there are still people who see non-workers as leeching off the workers, because workers are "rewarded" by in-game mechanics (having a "better job"). Parallel societies will become normal, just like now. Rich people will give themselves "better jobs"; some people don't play the game, and there are no real consequences beyond not being allowed to play.

    5) An amalgamation of the scenarios above, but in this one everybody is forced to LARP along with the asset-owning class. They will give people "jobs", but these jobs are bullshit, just like many jobs right now. Jobs are just a way of creating different social classes. There is no meritocracy, just rituals. Some people get to perform certain rituals that confer more social status and wealth, based on oligarch whims. Once in a while there's a revolt, but mostly it isn't needed.

    Many other scenarios exist of course.

    • itsafarqueue 3 days ago

      Have you written a form of this up somewhere? I would very much enjoy reading more of your work. Do you have a blog?

      • Der_Einzige 3 days ago

        Or, don’t… we need fewer Mark Fishers and less critical thinking in the world, and more constructive thinking.

        It helps no one to explain to them just how hard the boot stomps on their face. Left-wing postmodernist intellectuals have been doing this since the 60s, and all it did was prevent any left-winger from doing anything “revolutionary”.

        Don’t waste your time reading “theory”. Look at what happened to Mark Fisher.

zurfer 3 days ago

Given that the paper disappoints, I'd love to hear: what are fellow HN readers doing to prepare?

My prep is:

1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.

2) investing in the chip manufacturing supply chain: ASML, NVDA, TSMC, ... and the S&P 500.

3) Staying fit and healthy, so physical labour stays possible.

  • energy123 3 days ago

    > 2) investing in the chip manufacturing

    The only thing I see as obvious is that AI is going to generate tremendous wealth. But it's not clear who's going to capture that wealth. Broad categories:

    (1) chip companies (NVDA etc)

    (2) model creators (OpenAI etc)

    (3) application layer (YC and Andrew Ng's investments)

    (4) end users (main street, eg ChatGPT subscribers)

    (5) rentiers (land and resource ownership)

    The first two are driving the revolution, but competition may not allow them to make profits.

    The third might be eaten by the second.

    The fourth might be eaten by second, but it could also turn out that competition amongst the second, and the fourth's access to consumers and supply chains means that they net benefit.

    The fifth seems to have the least volatile upside. As the cost of goods and services goes to $0 due to automation, scarce goods will inflate.

    • impossiblefork 3 days ago

      To me it's pretty obvious that the answer is (5).

      AI substitutes for human labour. This will reduce labour's price and substantially increase the benefits of land and resource ownership.

  • bob1029 3 days ago

    I'd say #3 is most important. I'd also add:

    4) Develop an obsession for the customers & their experiences around your products.

    I find it quite rare to see developers interacting directly with the customer. Stepping outside the comfort zone of backend code can grow you in ways the AI will not soon overtake.

    #3 can make working with the customer a lot easier too. Whether or not we like it, there are certain realities that exist around sales/marketing and how we physically present ourselves.

  • smeeger 3 days ago

    I think if AI gains the ability to reason, introspect, and self-improve (AGI), then the situation will become very serious very quickly. AGI will be a very new and powerful technology, and it will immediately create/unlock lots of other new technologies that change the world in very fundamental ways. What people don't appreciate is that this will completely invalidate the current military/economic/geopolitical equilibrium. It will create a very deep, multidimensional power vacuum. The most likely result will be a global war waged by AGI-led and AGI-augmented militaries. And this war will be fought in a context where human labor has, for the first time in history, zero strategic, political, or economic value. So new and terrifying possibilities will be on the table, such as the total collateral destruction of the atmosphere or of the supply chains that humans depend on to stay alive. The failure of all kinds of human-centric infrastructure is basically a foregone conclusion regardless of what you think. So my prep is simply to have a "bunker" with lots of food and equipment, with the goal of isolating myself as much as possible from societal and supply-chain instability. This is good preparation even without the prospect of AGI looming overhead, because supply chains are very fragile things. And in the case of AGI, it would allow you to die in a relatively comfortable and controlled manner compared to the people who burn to death.

  • sfn42 3 days ago

    Nothing. I don't think there's anything I need to prepare for. AI can't do my job and I doubt it will any time soon. Developers who think AI will replace them must be miserable at their job lol.

    At best AI will be a tool I use while developing software. For now I don't even think it's very good at that.

    • sureIy 3 days ago

      > AI can't do my job

      Famous last words.

      Current technology can't do your job; future tech most certainly will be able to. The question is just whether such tech will come in your lifetime.

      I thought the creative field would be the last thing to fall to machines, but it was the first. Pixels and words are the cheapest items right now.

      • sfn42 3 days ago

        Sure man, I'll believe you when I see it.

        I'm not aware of any big changes in writer/artist employment either.

        • sureIy 3 days ago

          Don't be so naive. History is not on your side. Every person who said that 100 years ago has been replaced. Except prostitutes maybe.

          The only argument you can have is to be cheaper than the machine, and at some point you won't be.

          • sfn42 3 days ago

            That's complete bullshit. Lots of people still work in factories - there are fewer of them because of automation, but there are still lots. Lots of people still work in farming. Less manual labor means we can produce more with the same number of people or fewer; that's a good thing. But you still need people in pretty much everything.

            Things change and people adapt. Maybe my job won't be the same in 20 years, maybe it will. But I'm pretty sure I'll still have a job.

            If you want to make big decisions now based on vague predictions about the future go ahead. I don't care what you do. I'm going to do what works now, and if things change I'll make whatever decisions I need to make once I have the information I need to make them.

            You call me naive, I'd say the same about you. You're out here preaching and calling people naive based on what you think the future might look like. Probably because some influencer or whatever got to you. I'm making good money doing what I do right now, and I know for a fact that will continue for years to come. I see no reason to change anything right now.

    • zurfer 3 days ago

      It's not certain that we get TAI or ASI, but if we get it, it will be better at software development than us.

      The question is what probability you assign to getting TAI over time. From your comment it seems you say 0 percent within your career.

      For me it's between 20 and 80 percent in the next ten years (depending on the day :)

      • sfn42 3 days ago

        I don't have any knowledge that allows me to make any kind of prediction about the likelihood of that technology being invented. I'm not convinced anyone else does either. So I'm just going to go about my life as usual, if something changes at some point I'll deal with it then. Don't see any reason to worry about science fiction-esque scenarios.

        • smeeger 3 days ago

          The reason to worry is that humanity could halt AI if it wanted to. If there were a huge asteroid on a collision course with Earth, there would be literally nothing we could do to stop it; no configuration of our resources, no matter how united we were in the effort, could save us. With AI, halting progress is very plausible. It would be easy to do, actually. So the reason to worry (think) is that it might be worth it to halt. Imagine letting Jesus take the wheel; that's how stupid ___ are.

          • achierius 3 days ago

            I, and many others, think you have it backwards.

            Huge asteroid? We know full well how to deflect an asteroid -- launch rockets, deploy thrusters or explosives or &c, give it a little nudge and it'll miss the Earth by a huge margin. And better yet: everyone would be aligned on the need to do so!

            AI on the other hand -- I don't think this applies. We couldn't stop nuclear proliferation -- what makes you so confident that we'd stop AI development before it was too late?

            • smeeger 3 days ago

              Wrong. It wouldn't take a very big asteroid (as asteroids go) for the amount of inertia to go well beyond what we are capable of deflecting. It's a real example. AI, on the other hand, could be stopped simply by stopping. We wouldn't even have to build anything.

          • sfn42 3 days ago

            How exactly do you envision that these hypothetical computer programs could bring about the apocalypse?

            • smeeger 3 days ago

              If you are really so curious, then let's have a live, public X space about it.

              • sfn42 3 days ago

                I don't use Twitter if that's what you're talking about. And I'm sure you didn't expect me to accept that ridiculous suggestion anyway so I'm just going to assume you don't have a reasonable answer.

                • smeeger 3 days ago

                  If you had to actually talk to me in real time, I would make you look like a complete fool.

                  • sfn42 2 days ago

                    Sure man

    • smeeger 3 days ago

      A foolish assumption, but I have my fingers crossed for you and stuck firmly up my own butt… just in case that will increase the lucky effect of it.

      • sfn42 3 days ago

        Yeah I'm clearly the fool here..

    • rybosworld 3 days ago

      Imagine two software engineers.

      One believes the following:

      > AI can't do my job and I doubt it will any time soon

      The other believes the opposite; that AI is improving rapidly enough that their job is in danger "soon".

      From a game theory stance, is there any advantage to holding the first belief over the second?

      • sfn42 3 days ago

        Yeah. The engineer who thinks their job is in danger might be less inclined to improve their skills because they don't think their skills will be useful in the future, which is essentially a self-fulfilling prophecy. Maybe they will pursue some other career or start preparing for it, which might be a complete waste of time. Similarly, non-engineers might choose a different profession entirely.

        Meanwhile the engineer who isn't bothered by this bullshit prophecy goes about their day, making lots of money and becoming less replaceable every day. Maybe they learn to use these AI tools to be more efficient, which is really the only realistic endgame of AI tools anyway. You don't just fire all the devs and have some manager do the prompting. Maybe you fire some devs and keep the best ones as prompt engineers. Maybe this isn't even a management-driven process at all, maybe the developers just start using these tools of their own volition, become more productive and everyone's happy. It's not like we're running out of development work any time soon, whenever we meet a goal they set a new one. Being able to move faster doesn't necessarily mean we need fewer developers.

        Setting aside hypotheticals and game theory, it's completely unrealistic to expect that software developer suddenly won't be a job any more. If it even happens it will be a slow, gradual process. The people working as software developers today will be prime candidates for using AI tools to create software. You still need to understand what you're doing, what's possible and what isn't etc. There is absolutely no reality where some business person just tells an AI to make a banking system and it does that perfectly without any human intervention.

  • petesergeant 3 days ago

    4) trying to position myself as an expert in building these systems

  • ghfhghg 3 days ago

    2 has worked pretty well for me so far.

    I try to do 3 as much as possible.

    My current work explicitly forbids me from doing 1. Currently just figuring out the timing to leave.

aquarin 3 days ago

There is one thing that AI can't do. Because you can't punish the AI instance, AI cannot take responsibility.

  • smeeger 3 days ago

    This boils down to the definition of pain. What is pain? I doubt you know, even if you have experienced it. There's no reason to think that even LLMs are not guided by something that resembles pain.

visarga 3 days ago

This paper's got it backwards. AI's benefits don't pile up with the owners; they flow to whoever's got a problem to solve and knows how to point the AI at it. Think of AI like a library: owning the books doesn't benefit you much, applying the knowledge to problems does. The big winners are the ones setting the prompts, not the ones owning the servers. AI developers? They're making cents per million tokens while users, solo or corporate, cash in on the real value: application.

Sure, the rich might hire some more people to aim the AI for them, but who's got a monopoly on problems? Nobody. Every freelancer, farmer, or startup has their own problems to fix, and cheap AI access means they can. The paper is obsessed with wealth grabbing all the future benefits, but problems are everywhere; good luck cornering that market. Every one of us has their own problems and stands to get personalized benefits from AI.

In the age of AI, having problems is linked to receiving its benefits. Imagine, for example, that I feel one side of my face drooping and have difficulty speaking, and I type my symptoms into an LLM, and it tells me to get to a doctor quickly. It might save my life from a stroke. Who gets the largest benefit here?

Problems are distributed even if AI is not.

  • tyre 3 days ago

    > The big winners are the ones setting the prompts, not the ones owning the servers. AI developers? They're making cents per million tokens while users, solo or corporate, cash in on the real value: application

    If this were true, AWS wouldn't have pulled in well over $100bn in 2024. Nvidia wouldn't be worth $3.3tn.

    The owners and builders of infra make a ton of money.

    • visarga 3 days ago

      AWS makes a fraction of the money their customers make. And NVIDIA is just seeing the benefits of market speculation at work. Most LLM providers are losing money right now.

abtinf 3 days ago

Whoever endorsed this author to post on arxiv should have their endorsement privileges revoked.

baobabKoodaa 3 days ago

I suspect this is being manipulated to be #1 on HN. Looking at the paper, and looking at the comments, there's no way it's #1 by organic votes.

  • mmooss 3 days ago

    > looking at the comments

    Almost everything on HN gets those comments. Look at the top comments of almost any discussion - they will be a rejection / dismissal of the OP.

    • baobabKoodaa 3 days ago

      No they're not. As a quick experiment I took the current top 3 stories on HN and looked at the top comment on each:

      - one is expanding on the topic without expressing disagreement

      - one is a eulogy

      - one expresses both agreement on some points and disagreement on other points

habinero 3 days ago

This paper is silly.

It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.

FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd[1], who have some cultish doomerism views around AI [2][3]

There's a reason this stuff goes up on a non-peer reviewed paper mill.

--

[0] https://www.forethought.org/the-2022-cohort

[1] https://www.forethought.org/about-us

[2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...

[3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...

  • 0xDEAFBEAD 3 days ago

    >It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.

    Isn't developing AGI basically the mission of OpenAI et al? What's so bad about considering what will happen if they achieve their mission?

    >who have some cultish doomerism views around AI [2][3]

    Check the signatories on this statement: https://www.safe.ai/work/statement-on-ai-risk

    • habinero 2 days ago

      For the same reason there aren't hundreds of papers about the benefits of Dyson spheres: without any ability to execute, it's just speculative science fiction.

      And yes, that link is exactly the behavior I'm talking about.

      It makes it sound like it's a top issue for all these people. Most of them work at EA-related places, but I spot-checked (really, I did) five people who worked in CS research but not at an EA company.

      I couldn't find any other public statements that they thought AI was that important. Do they know their names are on this page?

      Appeal to authority is a classic cult/fraud/misinformation/conspiracist tactic. If it says "so-and-so believes this" and not "so-and-so believes this and here's why", it's suspect.

      • HeatrayEnjoyer 2 days ago

        The US federal government said only two days ago that they want literal autonomous killer robots, and multiple robotics organizations are quickly making it happen. Weapons used in Ukraine are already more or less autonomous: MG turrets that auto-target infantry, kamikaze drones that track and kill individual troops without a VR pilot.

        This isn't science fiction.

  • krona 3 days ago

    The entire philosophy of existential risk is based on a collection of absurd hypotheticals. Follow the money.