
 



AI

Bracing for Impact: AGI Trades

Entrepreneur Daniel Gross speculates about the wide-ranging economic, technological, geopolitical, and societal implications if advanced AI systems become extremely capable at many different tasks. He poses open-ended questions across areas like markets, real estate, energy, nations, inflation, and geopolitics regarding what the impacts could be, what trends may emerge, what historical parallels could exist, and what investments or trades might make sense in such a hypothetical future scenario. From the post: Nations: Who wins and loses?
$250b of India's GDP exports are essentially GPT-4 tokens... what happens now?
Are there any relevant analogies from history we can compare to?
What is the Euclidean distance of reskilling in prior revolutions, and how does AGI compare? The typist became an EA; can the software engineer become a machinist?
Electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed 8.5m Americans with a tremendous budget... does that repeat?
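One way to read the post's "Euclidean distance of reskilling" question is as the straight-line distance between the skill mix a worker has and the skill mix a new role demands. The post doesn't define it, so the sketch below is only an illustration: the skill axes, roles, and numbers are all invented.

```python
import math

# Toy reading of "Euclidean distance of reskilling": the gap between the skill
# mix a worker has and the skill mix a new role demands. All skill axes,
# roles, and numbers below are hypothetical illustrations, not data.
SKILLS = ["typing", "scheduling", "office software", "coding", "machining"]

roles = {
    "typist":            [0.9, 0.3, 0.4, 0.0, 0.0],
    "executive asst":    [0.7, 0.9, 0.8, 0.1, 0.0],
    "software engineer": [0.6, 0.2, 0.7, 0.9, 0.0],
    "machinist":         [0.1, 0.2, 0.2, 0.1, 0.9],
}

def reskill_distance(src: str, dst: str) -> float:
    """Euclidean distance between two skill vectors."""
    a, b = roles[src], roles[dst]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The two transitions the post mentions, under these invented numbers:
print(reskill_distance("typist", "executive asst"))        # a short hop
print(reskill_distance("software engineer", "machinist"))  # a much longer hop
```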


  • by ebcdic ( 39948 ) on Monday January 22, 2024 @01:04PM (#64179737)

    ... we're well prepared with gobbledygook.

  • by Anonymous Coward

    This isn't an article. It isn't even a blog post.

    • by Junta ( 36770 )

      I'm guessing it's written by some LLM, to really show they are all in and hip on this thing.

      • by Tablizer ( 95088 )

        > written by...LLM?

        Humans oh humans, wherefore art thou? Bots so annoying and dreary, please deliver us from these soulless mechanical demons.

      • I'm guessing it's written by some LLM

        Very unlikely.

        LLMs produce text that is formulaic, unoriginal, and often wrong.

        They don't produce nonsensical questions about reskilling and Euclidean distance.

  • by crunchy_one ( 1047426 ) on Monday January 22, 2024 @01:15PM (#64179765)

    Electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed 8.5m Americans with a tremendous budget... does that repeat?

    I rest my case.

    • Plenty of job losses since 1980 can be traced to automation [businessinsider.com]. Meanwhile, Ars Technica has an entire article [arstechnica.com] explaining how the modern gig economy isn't far off from what the Luddites faced.

      Just saying "You're an idiot" is all well and good, but the numbers aren't on your side. Something is happening, and has been for some time. Can you keep your head in the sand while heads are being lopped off? Maybe. There are 330 million heads to lop off in this country alone. Survivorship bias is a hell of a drug...
      • Re: (Score:3, Interesting)

        Since 1980, automation resulted in plenty of job losses.

        It also resulted in plenty of job gains.

        In 1980, the unemployment rate was 7.5%. Today it is 3.7%.

  • "AI systems become extremely capable at many different tasks"? What is this person smoking and how completely blind to reality do you have to be to think something like that? At this point, even "minimally capable" would be a drastic improvement and one that there is no reason to expect. Sure some simple and simplistic tasks are things that current AI can do, but only with frequent complete failure. "Minimally capable" already requires absence of those failures in all standard situations and them being rare

    • Is writing a Perl script a difficult task? It pays pretty well, so folks would tend to say yes. If I point out that fast food workers deserve a living wage, people shout me down with "that work's too easy!".

      I think we need more MBAs around here. Or people who think like them. I don't have to replace 100% of your job. If I replace 20% of the work you do, I can take your remaining 80%, spread it out across 4 employees, and fire you. Then I just cut headcount 20%.

      Meanwhile you're out looking for a job now, so the
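      A back-of-the-envelope sketch of that 20% arithmetic, with purely hypothetical numbers (a five-person team and a tool that absorbs a fifth of each job):

```python
import math

# Hypothetical team: automation absorbs 20% of each person's tasks.
team_size = 5
automated_share = 0.20

remaining_work = team_size * (1 - automated_share)  # 4.0 full workloads left
positions_needed = math.ceil(remaining_work)         # 4 people can cover it
headcount_cut = team_size - positions_needed         # 1 position eliminated

print(f"{headcount_cut} of {team_size} jobs cut "
      f"({headcount_cut / team_size:.0%} of headcount)")
```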
      • If I point out that fast food workers deserve a living wage, people shout me down with "that work's too easy!".

        No, we shout you down because you are spouting nonsense unless you define your terms. What does "deserve" mean? What does "living wage" mean?

        If you mean that the government should set wages and a burger-flipping 17-year-old high school senior living with her parents should be paid enough to afford a house, a car, and support a family on a single income, then why don't you say that instead of using weasel words?

        I think we need more MBAs around here. ... If I replace 20% of the work you do I can take your remaining 80%, spread it out across 4 employees and fire you. Then I just cut headcount 20%.

        Anyone with an MBA can tell you that is nonsense. When an employee becomes 20% more profitable, you

    • We can't even succinctly define plain old intelligence (https://en.wikipedia.org/wiki/Intelligence), let alone define it well enough to distinguish between natural and artificial intelligence. AI is an advancement, but it's way overhyped. It looks more like snake oil to me.

  • Those who have brains will transition to other jobs, those who don't... well... they will stay the same.
    And those who profit from both will continue to profit from both.

    • by HiThere ( 15173 )

      So what are you guessing one should transition to? Our prior guesses about what is easy for an AI and what is hard have been almost uniformly wrong. (And AI is NOT equivalent to LLM. It contains LLMs.)

      FWIW, I think for current AIs to be useful you need to sanitize the training data and close the feedback loop with the physical world. This will be expensive, so it needs to be done in specialized areas. But it's my expectation that people are in the process of doing that right now, and have been for the

      • So what are you guessing one should transition to?

        Pretty much anything the AI can't do, or would do badly.

        • by HiThere ( 15173 )

          But what job is it going to be that an AI won't compete in?
          AI isn't standing still, but we don't know what direction it's going to develop. It probably shouldn't be spewing bafflegab, as AIs can already do that fairly well. It shouldn't be designing custom molecular structures, as AIs are doing that pretty well. I'm not sure that "rock star" is a decent career. It probably is, but there are a couple of AIs that are doing a decent job there...but maybe people will get tired of them. I've heard of one t

          • The first thing that comes to mind is prompting.
            AI is currently notoriously bad at understanding what is being asked; for example, it never asks for clarifications. It doesn't know how to do that, and probably won't be able to do so properly for a long time.
            It can't infer data from speech or figure out what the gaps are.

            If you ask AI "design me a house", it won't play "twenty questions" to figure out all the details. It would simply spit something out and leave it to you to provide clarifications. Furthermor

  • by presidenteloco ( 659168 ) on Monday January 22, 2024 @01:37PM (#64179827)
    I think it is a fallacy that people will simply have to, and be able to, retrain for new roles now that AI is pretty good and AGI and, separately, humanoid robots may be approaching.

    This trope (not job losses, just changes in jobs) is inferred from earlier episodes of technological replacement of human labor.

    The difference this time is threefold.
    First, the new technology is generalist (or, more precisely, a generality of specialists). It is not merely specialist technology in one area of work.

    Second, this technology may be cognitively superior to many people, and it will certainly be more knowledgeable than most people.

    And third, the robotic side of the new tech may be good enough for a wide swathe of manual labor tasks, and will be quickly trainable for new tasks using its increasingly general and refined task and environment learning capabilities.

    The old retrain-around-new-tech trope is probably obsolete, given these factors.

    Instead, we as humans will need to re-imagine our worth and role in the world and economy, and will have to re-engineer society to distribute some of the wealth from the automated productive sector. The alternative is widespread chaos as inequality spikes toward infinity.
    • by hdyoung ( 5182939 ) on Monday January 22, 2024 @02:03PM (#64179893)
      You think retraining is obsolete? I think you're absolutely wrong.

      Let's set aside tech workers of all types, everyone from research scientists all the way down to technicians and auto mechanics; these people basically have to retrain significant parts of their job at least every 5 years. Also, there's no discussion about literally anyone who works at a keyboard. Retraining is literally part of the job. How often does a computer-centered workflow last for more than just a few years?

      Ok, so that leaves a ton of jobs. Let's look a bit deeper. Nurses? They gotta constantly retrain. Cash register operators and retail workers in general? New systems, new cards, new ways of paying, commerce methods changing, COVID, etc.; the list just keeps going. That leaves people working janitorial jobs and changing bedpans.

      Anyone incapable of retraining is already at the bottom of the workforce or dropped out of the workforce long ago.

      Cognitively superior to people? So are calculators. We adjusted to that. More knowledgeable than most people? So is Wikipedia. We adjusted to that.

      This is a lot of hype about a technology that will be easily incorporated into human workflows.
      • Unions can help with retraining without the worker having to pay for it or having to do it on their time off.

      • I think he's right: re-skilling isn't the answer it has been in the past.

        AI, today, is at a point where everyone should be adding it as another tool in their belt.

        AI, tomorrow, is going to replace significant enough chunks of your labor. You won't be taking a continuing education class like doctors take; instead it will be something more akin to a career switch, as you fundamentally change the part of the problem you're tackling.

        But even that bold career switch isn't going to be a great answer, because your new job might

      • by Anonymous Coward

        There is a difference between retraining (the issue being discussed), and keeping your skills up to date (your examples).

        Every, and I mean EVERY, profession has to keep its skills up to date as what the work is, and how it is performed, evolves over time.

        The sort of retraining that OP is talking about is a combination of two things: 1) a complete pivot in the work being accomplished (think household plumber to general surgeon), and 2) the same occurring across the vast majority of professions, not just plumbers, bu

      • by WaffleMonster ( 969671 ) on Monday January 22, 2024 @03:14PM (#64180127)

        You think retraining is obsolete? I think you're absolutely wrong.

        What's the point of retraining a person to do a new task once a machine can be retrained and the resulting capability instantly uploaded into millions of instances of machines which can all now do the new thing?

        Let's set aside tech workers of all types, everyone from research scientists all the way down to technicians and auto mechanics; these people basically have to retrain significant parts of their job at least every 5 years. Also, there's no discussion about literally anyone who works at a keyboard. Retraining is literally part of the job. How often does a computer-centered workflow last for more than just a few years?

        The list of activities where humans come out ahead of machines keeps dwindling... let's say that trend continues until the list is nearly empty. Why would anyone want to pay humans when a machine can do the same job better and at much lower cost?

        Who is even going to want to pay to retrain humans to do something new (or at all for that matter) when a machine can do it better and cheaper?

        Anyone incapable of retraining is already at the bottom of the workforce or dropped out of the workforce long ago.

        Cognitively superior to people? So are calculators. We adjusted to that. More knowledgeable than most people? So is Wikipedia. We adjusted to that.

        This is a lot of hype about a technology that will be easily incorporated into human workflows.

        The term I like to use for this is "zombie labor", which is akin to the concept of "dead labor", except that it lives on producing ever greater value instead of that value diminishing with time.

        I wouldn't bet on age-old inductive arguments about new technologies breeding new opportunities holding under such conditions.

        • Every technological leap has generated these doom-and-gloom scenarios. Every time, people said “this time is special and different”. And every time humans adapted to the new reality.

          Is this the thing that truly renders humans obsolete? Maybe, but I’m not betting on it.
          • Every technological leap has generated these doom-and-gloom scenarios. Every time, people said “this time is special and different”.

            The value associated with whatever endeavor a human does tends to fade with time. If I make pizzas, once a pizza is gone it can't provide any more value. If I write software, things change and evolve over time and the value of my software declines. If I build a machine that makes paperclips, the longer the machine runs the more paperclips it makes, but eventually the machine breaks down, someone invents a better paperclip, or paperclips go out of style (sorry, Clippy). If I construct a building that building

            • So far, I see very little fundamental change. Calculators changed the way most humans do math. Matlab and Mathematica then changed things again. These new ML algorithms are impressive, but I still see tons of places where humans will add value.

              Basically, you’re saying that sometime in the future, when some machine system or other life form can do everything better than humans, there will be a fundamental shift in our existence. Maybe, yeah, but I’m not even convinced of that. There’s s
    • by HiThere ( 15173 )

      I still guess that the first AGI is a few years off. I've been predicting "about 2035" for the last decade, and I see no real reason to change my prediction. But as we get closer, things become less predictable. (I never would have predicted that LLMs would be as successful as they have been. But the generation of fiction seems to be an inherent requirement in the design. In people the technical term for it is "confabulation", but people know what the words mean; to an LLM, they're just a string of symbo

      • Where I would differ from your interpretation is that an LLM doesn't just have "strings of symbols" as its internal representation. Strings of symbols are what it inputs and outputs, yes, but in between, those go into and come out of a statistically weighted, hierarchically self-organized network of RELATIONSHIPS between symbols.

        And it's not clear how different, fundamentally, such a network is from a good old fashioned semantic network representing an ontology of the concepts and event-types, situation-typ
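        As a toy illustration of those "statistically weighted relationships between symbols" (and emphatically not how any particular LLM is implemented), the sketch below scores how related two invented word vectors are; every vector and number here is made up.

```python
import math

# Toy, hand-made "embedding" vectors; real models learn thousands of
# dimensions from data. These numbers are invented for illustration only.
vectors = {
    "typist":    [0.8, 0.1, 0.3],
    "secretary": [0.7, 0.2, 0.4],
    "machinist": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """One crude 'weighted relationship' between two symbols: the cosine of
    the angle between their vectors (near 1.0 = closely related)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(vectors["typist"], vectors["secretary"]))  # high
print(cosine_similarity(vectors["typist"], vectors["machinist"]))  # lower
```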
        • by HiThere ( 15173 )

          I have two major disagreements with your argument:
          1) The database that the AI learns from is systematically corrupted by people pushing an agenda. This causes it to deviate from "the truth" along multiple dimensions.
          2) I think the AI needs direct experience with the physical world before you can call what it does with the inputs analogous to thinking. You need to close the feedback loop. By definition an LLM doesn't do this.

          • I partially agree with your point 1), i.e. the input is far from all true. However, my hypothesis is that since there are many independent ways of creating a fiction (expressing untrue statements), and fewer ways of expressing patterns that represent what is true (true relationships between general or specific concepts/things/events) in the world, there should be more reinforcement in a huge corpus of true (or true-ish) expressions than of false expressions. So the AI would learn the less variable true-ish patte
    • I think it is a fallacy that people will simply have to, and be able to, retrain for new roles now that AI is pretty good and AGI and, separately, humanoid robots may be approaching. This trope (not job losses, just changes in jobs) is inferred from earlier episodes of technological replacement of human labor. The difference this time is threefold. First, the new technology is generalist (or, more precisely, a generality of specialists). It is not merely specialist technology in one area of work. Second, this technology may be cognitively superior to many people, and it will certainly be more knowledgeable than most people. And third, the robotic side of the new tech may be good enough for a wide swathe of manual labor tasks, and will be quickly trainable for new tasks using its increasingly general and refined task and environment learning capabilities. The old retrain-around-new-tech trope is probably obsolete, given these factors. Instead, we as humans will need to re-imagine our worth and role in the world and economy, and will have to re-engineer society to distribute some of the wealth from the automated productive sector. The alternative is widespread chaos as inequality spikes toward infinity.

      I would imagine that in countries where the government still functions as a service to the people, that reimagining will see a somewhat leveled playing field for the remaining people within the country as this begins to pick up pace. In America? I expect the oligarchs to propose that anyone falling below the poverty line, in order to prevent them from further self-harm, be placed in "financially responsible housing" where they can be fed a bland paste, nutritionally so-so but extremely affordable, to prevent t

    • Remove healthcare from jobs and cut full-time hours down, with higher OT levels.
      Maybe have 2X and 3X OT levels for extreme OT. Also make the exempt threshold a lot higher, with a COL adder as well.

  • Well, never fear, others have done it for you. Go read some Iain M. Banks books; a good one to start with is The Player of Games, and here is what they think of capitalism: https://www.genolve.com/design... [genolve.com]
  • by carvalhao ( 774969 ) on Monday January 22, 2024 @02:30PM (#64179969) Journal
    For the past few years I have been delivering a talk on this topic, and I even created a predictive model. Preview: it's depressing. https://youtu.be/sWoOhDTLbnA?s... [youtu.be]
    • I've just started watching your video. That tidbit about almost 2 million robots being purchased by Foxconn is sobering and disturbing.

      I'm torn between thanking you for the link, and cursing you for adding another straw to the doomsday pile. But... thanks. The more awareness the better. Information like that is precisely what mainstream news media should be reporting, loudly and incessantly.

    • I copy-pasted the URL into an existing YouTube tab I had open. Appropriately, the video I had previously been watching was detailing a new feature in Minecraft called an "Autocrafter". It's a new block that takes material inputs and outputs items that would normally require a human player to craft at a crafting table.

      Your video is insightful, David. Thank you for posting it here.
  • These days the average Slashdot story is longer by far than the entire article referred to above. In this case I think simply pasting the complete article into TFS would have been better.

  • Our simple binary computers are purely algorithmic, no matter how big the scale, and will never achieve anything even remotely like AGI. Until there is a major breakthrough with a new type of computing, we are just scaling up an abacus.

  • > Euclidean distance of reskilling

    Yeah, OK.

  • I keep seeing it mentioned but Google just comes up with hits for taxes. What are AGI trades??

    • In this case I think he just means broadly: as AI improves, what goes up or down in value?
  • AI is just a tool. Only a developer can make anything with an AI. You may be overestimating the normies.
  • Adaption [despair.com] explained.

    For more inconvenient truths, visit https://despair.com/collection... [despair.com]

"Pok pok pok, P'kok!" -- Superchicken

Working...