AI

'The Tech Industry was Deflating. Then Came ChatGPT' (msn.com) 92

An anonymous reader shared this report from the Washington Post: A year ago, the mood in Silicon Valley was dour. Big Tech stocks were falling, the cryptocurrency bubble had popped, and a wave of layoffs was beginning to sweep through the industry.

Then the artificial intelligence boom hit.

Since then, venture capitalists have been throwing money at AI start-ups, investing over $11 billion in May alone, according to data firm PitchBook, an increase of 86 percent over the same month last year. Companies from Moderna to Heinz have mentioned AI initiatives on recent earnings calls... AI is one of the only fields here still hiring, and firms are paying huge salaries for the expertise. Workers here are retraining to specialize in the field...

Tech stocks have rallied across the board, a whiplash return to growth after analysts declared the 10-year bull market was finally over. In 2022, the Nasdaq 100, a stock market index dominated by the biggest tech companies, lost an entire third of its value, falling 33 percent in a massive erasure of wealth that had been built up over the past decade. So far in 2023, the Nasdaq 100 is already up 31 percent... The start-up ecosystem is rebounding back to optimism as well, at least for those focused on AI...

"VC firms compete for access to hot AI deals while eschewing unprofitable conventional software companies," said Brendan Burke, an analyst with PitchBook. "AI start-ups experience founder-friendly conditions not extended to the rest of the tech ecosystem." Around $12.5 billion in investments have gone into generative AI start-ups this year so far, compared with only $4.5 billion invested in the field in all of 2022, Burke said.

Calling NVIDIA an "AI chipmaker," the article points out that on Friday NVIDIA's stock-market valuation was $971.4 billion, "within spitting range of Amazon, which is worth $1.26 trillion."

NVIDIA is now "one of only a handful of companies in the world to hit $1 trillion in value."

Comments Filter:
  • by backslashdot ( 95548 ) on Monday June 05, 2023 @03:25AM (#63576535)

    Everyone talks about nVidia .. what about AMD .. are they doing anything in AI? Sleeping?

    • Re:AMD (Score:5, Interesting)

      by sg_oneill ( 159032 ) on Monday June 05, 2023 @03:49AM (#63576547)

      It would seem. AMD just haven't seemed to give a shit about CUDA and Tensor Cores and the sort of things that ML needs.

      There's a huge opportunity for them right there if they can work that out, since ML-capable GPUs are absurdly expensive. (Hint: those 40xx cards won't cut it; what ML needs is memory, even more than raw performance. When you budget roughly 3-4x the parameter count for GPU RAM, you end up with absolute nonsense requirements like "a GPU with a terabyte of RAM" for monsters like GPT-4. In theory you can split the model up so that each card only holds a layer and its associated attention heads, so something like GPT-4 just needs, say, 10-12 cards per instance; but then you have a billion people hammering away at it, and you can see where this gets ridiculous.) Get the driver people on the case to close the CUDA gap, get the hardware people figuring out the Tensor Core stuff, put out cheap cards with bucketloads of memory, and they will have an in.

      Right now it's likely the big impediment to going larger (i.e. GPT-5, etc.) is quite simply economics. That's where competition is really needed in this space.
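
      A back-of-the-envelope version of that memory budgeting, as a hedged sketch in Python (the parameter counts, bytes-per-parameter figure, overhead factor, and 24 GB card size are illustrative assumptions, not published specs):

          import math

          def cards_needed(n_params, bytes_per_param=2.0, overhead=1.75, vram_per_card_gb=24.0):
              """Estimate how many GPUs are needed just to hold a model.

              bytes_per_param: 2 for fp16/bf16 weights, 4 for fp32.
              overhead: rough fudge factor for activations, KV cache, framework buffers.
              vram_per_card_gb: e.g. 24 GB for a consumer 4090-class card.
              """
              total_gb = n_params * bytes_per_param * overhead / 1e9
              return max(1, math.ceil(total_gb / vram_per_card_gb))

          if __name__ == "__main__":
              for name, params in [("7B model", 7e9), ("70B model", 70e9),
                                   ("hypothetical 1T model", 1e12)]:
                  print(f"{name}: ~{cards_needed(params)} x 24 GB cards (weights plus overhead)")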

      • by Rei ( 128717 )

        It would seem. AMD just haven't seemed to give a shit about CUDA and Tensor Cores and the sort of things that ML needs.

        Yeah, AMD massively shot itself in the foot. Not by bad hardware, but software. ROCm support is just nothing like CUDA support; developers sometimes try to work it into projects as an afterthought, but CUDA is where the development is at, and that's because NVIDIA specifically focused on it.

        Hint: those 40xx cards won't cut it,

        Ehh... don't overplay the situation. The AI market also contains

        • I could upgrade my Threadripper Pro to have 2TB of RAM if I had €65,000 + import taxes to spare. Surely looking at a way to have the GPU more efficiently access that RAM, much like what Apple is doing with unified memory, would be a better approach?

          • The large memory pools on Threadripper are not unified; they are NUMA, with quite restricted inter-die bandwidth. Supposedly Genoa is silicon on interposer, so it might be a little closer to behaving as SMP, dunno.

          • by Junta ( 36770 )

            There are things being done.

            One is leveraging resizable PCI BAR. So the entirety of a GPU's RAM gets mapped into host memory, instead of a relatively small window of VRAM that has to be remapped when dealing with different regions.

            Also, as an alternative, a cache-coherent protocol over PCIe is coming/here, depending on the market segment.

            Of course, there are fundamental limitations with the realities of discrete CPU and GPU. Even imagining that nVidia bothered to support Gen5 this generation (they didn't), they wo

          • by gweihir ( 88907 )

            Likely not possible without a fundamental redesign. GPUs are _not_ intended to access main memory fast. For that you need real CPU cores, because those are designed for it.
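
            A quick way to see the gap being pointed at here, as a hedged sketch (assumes PyTorch with a CUDA-capable GPU; the buffer size is arbitrary and the numbers vary a lot by system): host-to-GPU copies go over PCIe, which is far slower than the GPU's own memory.

                import time
                import torch

                def bandwidth_gb_s(src, repeats=10):
                    """Measure host -> GPU copy bandwidth for a given source buffer."""
                    torch.cuda.synchronize()
                    start = time.perf_counter()
                    for _ in range(repeats):
                        src.to("cuda", non_blocking=True)
                    torch.cuda.synchronize()
                    elapsed = time.perf_counter() - start
                    return src.numel() * src.element_size() * repeats / elapsed / 1e9

                if __name__ == "__main__":
                    n = 256 * 1024 * 1024 // 4              # ~256 MB of float32
                    pageable = torch.empty(n, dtype=torch.float32)
                    pinned = pageable.pin_memory()          # page-locked copy of the buffer
                    print("pageable host -> GPU:", round(bandwidth_gb_s(pageable), 1), "GB/s")
                    print("pinned   host -> GPU:", round(bandwidth_gb_s(pinned), 1), "GB/s")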

        • Yep.

          They didn't even need CUDA. Just making sure that PyTorch and TensorFlow worked would have made a huge difference. Oh, and also, didn't they have desktop cards not supporting one of their compute initiatives? So no easy way of testing stuff either. Genius move.

          Basically AMD never put the legwork in to make the software work. Some randos got it working on Apple silicon using compute shaders. AMD, with actual budgets, should have done that.

      • Hmmmm.... but will there be time to design and build actual hardware before the whole house of cards falls into the quicksand? Maybe some smart programmers can come up with a less computationally intensive way to synthesize inaccurate but plausible documents?

        • Geoff Hinton is confident the hallucination problem will be solved soon. Pretty good source.
          • by gweihir ( 88907 )

            Unlikely. Hallucinations are fundamental to this approach. Cut them and you lose 95% of what a ChatAI can do.

            This guy has a lot invested, so he would claim this can be solved.

            • Anything with zero "hallucination" would just be a search engine - it wouldn't be generative. For image generation, you want something that never existed - that's the whole point. But when you're writing a legal or academic document, what you want is actual existing facts and citations, woven together with some level of originality/creativity/hallucination. So, using the right amount in the right ways is a lot harder than using all or none. It requires inferring intent.
              • by gweihir ( 88907 )

                So, using the right amount in the right ways is a lot harder than using all or none. It requires inferring intent.

                Indeed, it does. And it specifically requires figuring out which parts can be creative or hallucinated and which parts have to be factually correct. Inferring intent is something that is generally expected to require insight, something "AI" cannot do at all at this time. Whether we will eventually get there, or whether a catalog of common intentions can at least provide some sort of half-solution, remains to be seen.

      • Inference benefits from large amounts of memory to keep latency low, but for training it is more convenience than necessity; latency is almost irrelevant and pipeline scaling is good enough.

      • by narcc ( 412956 )

        Going bigger is probably the wrong move. Performance doesn't increase linearly with the size of the model. Typically, the gain you get from doubling the size of the model decays exponentially. It seems unlikely a model twice or even four times the size of GPT-4 is going to perform noticeably better. Let's also not forget about the phenomenal expense of training and operating a model of that size.

        If we're going to see real progress, it isn't going to be from a bigger version of what we have now, it's g

        • by Rei ( 128717 )

          Performance doesn't increase linearly with the size of the model. Typically, the gain you get from doubling the size of the model decays exponentially

          My understanding is that performance corresponds roughly linearly to the combination of (A) model size, (B) training data size, and (C) training data quality, but only if the problem is well aligned; poorly-aligned problems start to increase in performance, plateau, then decline, and can even end up worse than the starting point with a small model.

          Smaller, h

          • by narcc ( 412956 )

            My understanding is that performance corresponds roughly linearly to

            It is not at all linear; it is exponential, as I explained. Do a quick search for a graph showing perplexity vs. parameters. Hopefully, you'll notice that perplexity drops sharply, but quickly levels off. This is as true for transformers as it is for RNNs, n-grams, or anything else. Don't like perplexity? Pick a different metric. You'll get a similar curve. As it happens, you'll also see this curve plotting performance against training data size. It shows up in a lot of places.

            poorly-aligned problems start to increase in performance, plateau, then decline

            "Alignment" has absolute

        • Re:AMD (Score:4, Insightful)

          by gweihir ( 88907 ) on Monday June 05, 2023 @09:40AM (#63576949)

          I've been very openly bearish on AI, so a company like AMD focusing on their core business instead of dumping money into the latest fad makes a lot of sense to me. There's a lot of hype around AI at the moment and I fully expect the bubble to pop in the near term as reality begins to push out the fantasy.

          Indeed. Currently people are just pouring money into what they fantasize this thing can do. At some point the drug levels in the blood will come down, and then things will not look nearly as good. In fact, it is already quite obvious that what we have, and will have for the foreseeable future, is a well-spoken semi-moron with delusions and hallucinations. Of course, some people deeply invested claim this can all be fixed. That is wishful thinking though. The very approach is that a ChatAI fantasizes with a loose connection to things it saw. Fact-checking is completely outside of what it can do. GIGO very much applies, but also the connections it makes are driven not by any level of insight or understanding, but by statistics. Statistics, by their very nature, cannot go deep into a subject without excessive (infeasible) effort. That is not what they are for, and it is not something they can deliver.
           

      • It would seem. AMD just haven't seemed to give a shit about CUDA and Tensor Cores and the sort of things that ML needs.

        There's a huge opportunity for them right there if they can work that out...

        They know. [tomshardware.com] They created the MI300 chip more than a year ago: a 24-core Zen 4 CPU (in three 8-core dies), a CDNA3 GPU (14,080 shading units and 880 texture mapping units, in six GPU dies), and 128 GB of HBM3. It's a total of 9 logic chiplets plus a heap of memory chiplets, totalling 146 billion transistors in 1,017 square millimeters. It uses 600 watts.

        The El Capitan supercomputer at Lawrence Livermore National Lab is being upgraded to use them, 4 liquid cooled chips per 1U chassis.

        AMD is perfectly happy to sel

    • by gweihir ( 88907 )

      AMD, as usual, watches the idiotic rush and looks for actual substance. At this time, it is not clear what is actually needed and there is very little substance. Nvidia is not above providing generic stuff that may well turn out to be useless or too narrow in application. But hey, they are raking in the dollars. For a time. And at some point they may well crash hard.

      When the dust clears, there are two scenarios. In both, AMD is doing well. In one Nvidia is massively rich, in the other Nvidia is massively in

    • by CEC-P ( 10248912 )
      The short version, as far as I know, is that they quietly took crypto-mining money, selling GPUs to no-name card makers building custom mining RX 580s for example, and then when that dried up, they're just chilling on Ryzen CPU money and not aggressively pursuing the new hot tech, because TSMC doesn't have the fab room anyway.
  • by greytree ( 7124971 ) on Monday June 05, 2023 @03:34AM (#63576539)
    ...but we nerds just kept on coding and making cool stuff.

    What the media hypes is not what the industry is.
  • by Askmum ( 1038780 ) on Monday June 05, 2023 @03:36AM (#63576541)
    Mark my words: less than five years from now, it will have gone back to zero.
    • by Junta ( 36770 ) on Monday June 05, 2023 @09:38AM (#63576941)

      It won't be back to zero; it will find its place. The impetus for the current hype will have much less applicability than imagined, but it will leave a mark on the industry.

      Generally the same has held true for many waves of fads. The dot-com bubble popped, but *clearly* the internet remained, at some reduced capacity (and, in time, surpassed even the lofty hype of the late 90s/early 2000s). The 3D printing bubble popped and no one at large cares, but 3D printing is still a popularly used tool, just not in the way people imagined.

      Image generation will remain a tool for artists to extrapolate the "don't care, but needed" content where "stock" won't quite cut it. An artist needs a vaguely appropriate background: poof, done. Enhanced chatbots will likely be a much bigger presence in support (which already drives things toward automation heavily; this represents a chance to steer a somewhat larger share away from human agents). It will not likely live up to the hype, but it will have a sustained presence.

      • by ranton ( 36917 )

        It will not likely live up to the hype, but it will have a sustained presence.

        I think it will live up to the hype, although perhaps not the most aggressive hype you see out there. The Internet is a good example of how I believe generative AI will progress. Very useful right away, but taking a couple decades to be an indispensable part of almost all human life. Most investors aren't expecting to see 10x returns in two years. They just want to be part of the companies who are dominating the AI industry in 10-15 years.

        The hype they are buying into is that AI is ready to revolutionize al

        • Right now the FAANG (or whatever the acronym is nowadays) companies with massive cloud services/storage have a major advantage, as they hold all the data needed to create new datasets for AI to be trained on. That will be a very hard moat to cross for many smaller businesses/startups.

          Likewise, they can continue to rent out said cloud services to businesses, effectively keeping them dependent like junkies. If the business fails, whatever, they made money. If it succeeds, they can consider buying it out. They

      • by g01d4 ( 888748 )

        current hype will have much less applicability than imagined

        I'm not so sure. Blockchain was an algorithm with growth limited by application domain. Large Language Models are a technology, with inherent growth potential that's been expanding exponentially over the last few years, demonstrating impressive performance which in turn opens additional application domains. 'Hallucination' issues are currently putting the brakes on hype and sure, LLMs are stupid, but what code isn't? So, another challenge will be l

    • by gweihir ( 88907 )

      If it takes that long. The first real tests to have it do actual work already resulted in tons of failures except for the most simplistic tasks. And even the most simplistic tasks run into bizarre "hallucinations" and bad result quality. That does not make this tech useless. But it is very far removed from what many people expect it can deliver.

    • Mark my words: less than five years from now, it will have gone back to zero.

      I hope you're right. Because AI, actual AI or a reasonably useful brain-damaged version, is going to wreck society. Things are going to change fast and some people aren't going to be able to adapt. I suspect we'll have millions of people who are side-lined in the economy, and we'll just act like they need to be "retrained". In practice those people won't re-enter the workforce, and they won't be spending money or interacting with the economy in any meaningful way. This will lead to a more fragile system e

    • The most impressive results I've seen created by AI are the ones that cheat. The raw Stable Diffusion stuff is a technical curiosity, but the results are extremely poor. The AI everyone wants is the type that analyzes a few key sources and alters them to your specifications. Regardless of whether it's morally questionable, that's what delivers acceptable results, and that's what people want and will use.

      AI really is the future, but it'll be a race to see who can rip off the most stuff from existing source

  • by Anonymous Coward on Monday June 05, 2023 @03:37AM (#63576543)

    Never mind that "big tech" especially is nothing of the sort: it's all advertising, dressed up as "tech" and "online", but advertising for all that.

    This is really about "VCs" with Way Too Much Money sloshing around, meaning there's no investing in things we actually need, but in unimaginative chasing after the latest hype and a lot of crap nobody needs or wants. In short, they've rigged themselves for low returns for society chasing short-term "unicorn" profits for themselves.

    That means that capital allocation is off kilter, and that's pretty bad.

  • There was nothing else controversial to talk about.

  • Next Act! (Score:5, Funny)

    by oldgraybeard ( 2939809 ) on Monday June 05, 2023 @03:56AM (#63576557)
    OMG there is no I (intelligence) in Artificial Intelligence. They got the Artificial part done and the marketers ran with it. Wonder when they are planning to get to the Intelligence part?
    • by gweihir ( 88907 )

      Not needed, at least for now. There are enough people with money who do not have the "I" either and who are willing to put that money in on the strength of vague promises. Of course, that may look different tomorrow, but marketeers do not do long-term planning.

  • by tonytins ( 10331799 ) on Monday June 05, 2023 @04:23AM (#63576571)
    Last time, slapping "blockchain" onto a company's name was a sure guarantee of a rising stock; now it's AI. In the short time I've been alive, that makes the dot-com bubble, the blockchain bubble, and now the AI bubble on the horizon. I'm sensing a pattern, and it's accelerating.
  • The tech industry was fine; they had over-hired and so they had to lay off some people ...

    The "value" of the tech industry had little if anything to do with what they were doing

  • Another Gold Rush (Score:5, Interesting)

    by Bobknobber ( 10314401 ) on Monday June 05, 2023 @05:30AM (#63576621)

    For all the money being thrown around, AI is just not something you can directly commercialize, at least afaik. Most of the startups you see using AI are either absolute gimmicks (AI-powered skates anyone?) or merely using a GPT API wrapped up in a shiny looking UI. The latter are seeing immense churn levels as they struggle to maintain user retention and margins.

    Most of them will likely not make it by the end of this year I reckon.

    And this is before we get into the fair share of controversies and legal issues that have arisen with the use of this tech. Most of those startups likely lack the legal resources/PR needed to bail themselves out should their products run afoul of the law.

    Like the gold rushes of yesteryear, the real winners will once again be the general stores and train companies, not the prospectors or miners themselves. In this case, that means the hardware sellers and cloud service providers: Microsoft, Google, Amazon, Nvidia, etc. Even the LLM developers, the ones who made this market exist in the first place, are struggling financially due to a combination of massive fixed costs and poorly defined business models. Hence why even the big-model developers have effectively sold their souls. Those that have not, like SD, will likely end up getting pressured into selling out eventually once the investor cash runs dry.

    Tl;dr: in a gold rush, sell shovels, not gold.

  • Apple is still in the metaverse fad; watch for the laughs coming out of WWDC later today.
  • It's really annoying how ChatGPT is being equated with AI, as if that's the end of it all... there is so much more to AI than chatbots. Sure, a lot of work has gone into it, but so much hype for a chatbot that kind of answers questions when it doesn't straight-up make stuff up

    I am not even talking about tech in general, which has innovated in so many ways. If you read the original article, you'd think nothing else is happening

    • Re: (Score:2, Informative)

      by Visarga ( 1071662 )
      > so much hype for a chatbot that kind of answers questions when it doesn't straight-up make stuff up

      You're not holding it right. It's a tool, use it with discernment. It can do some things and can't do other things. You should learn your tools instead of complaining about lack of perfection.
      • by youn ( 1516637 )

        > It's a tool, use it with discernment. It can do some things and can't do other things.

        Arguably, a chatbot is a type of tool. To quote you back, "I am not sure you're holding it right," because nothing in what I said contradicts what you said.

        I have used it enough to know a lot of its limitations and uses. You can't contradict the fact that it still makes stuff up, that for some uses it actually gets in the way, and that unfortunately too many people use it without taking those limitations into account... lawyers in court,

    • What I wonder is what the upper limit is on those LLMs. We can all agree this is not AI; this is a model trying to replicate what it has seen in text/images. Not descriptive, and definitely not understanding what it is doing. Still, imagine you give ChatGPT 4 more orders of magnitude in computation speed. Will it be a bit more believable when producing output? Make fewer mistakes? By what amount? Will it get multiplications of 6-digit numbers right, unlike today?
      • by Junta ( 36770 )

        I recall seeing some article saying that we may have pretty much already hit that. Despite being fed more and more data, storage, and memory, the methodologies are not showing signs of getting better results, and are plagued by roughly the same problems we see today.

        So the 'easy' answer of throwing more resource at it seems to be showing we are beyond the point of diminishing returns. So the industry on the whole has to get something very different going to keep up the speculative hype train. There's still time for

      • by Falos ( 2905315 )

        Math, no.

        But many things can achieve Good Enough; architects and engineers technically do math, but anything in meatspace is an approximation, and a corner is 89.999 degrees.

        The Chinese room will probably never know actual Chinese, but at some point (hopefully) it will become good enough that for our intents and purposes it doesn't matter; "it might as well know".

        Some applications may achieve sufficient "nines" in the near future (images) some may take decades or centuries, some may never quite suffice.

        Note

    • by Junta ( 36770 )

      Because, for better or worse, ChatGPT is most readily available for people to 'play with' and produces 'conversation' which is something that is novel to those evaluating it. Other advances are either more niche, have less exposure, or are some 'boring' evolution of the same sorts of things we can already do.

      We are currently in the part of the hype cycle where a bunch of business leaders and press, fueled by marketing efforts, are seeing something markedly distinct that resembles something they could imagine

  • ...now it's collapsing.

    Frankly, I don't care about the tech industry. Why should I bother when the tech industry doesn't even give a fuck itself?

    The problem with ChatGPT and the like is that they're "dumb" AIs. They can generate shit based on prior art. What they can do is take stuff, mash it together and create something "new" out of it. This will certainly push content that gets generated by taking existing tech, jumbling it about and spitting out something that this technology didn't produce so far but w

    • by vadim_t ( 324782 )

      I think there's plenty for AI to do that's dumb.

      You're thinking of the end stage, where we build U.S. Robots and Mechanical Men Corporation or Skynet.

      But there's a lot of non-creative grunt work to be done, and a lot of work where AI can polish up a creative person's output. E.g., AI image generation when combined with an actual artist is amazing. You can have an enormous amount of exquisite detail generated from a few rough sketches and a bit of touching up later. The artist is still the creative driving force,
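
      (A hedged sketch of that sketch-to-detail workflow, using the open-source diffusers library's image-to-image pipeline; the model ID, file names, prompt, and strength value are illustrative assumptions, not anything taken from the comment.)

          import torch
          from PIL import Image
          from diffusers import StableDiffusionImg2ImgPipeline

          pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Start from the artist's rough sketch and let the model fill in detail.
          rough = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))
          result = pipe(
              prompt="detailed ink-and-watercolor city street, dramatic lighting",
              image=rough,
              strength=0.6,        # lower = stay closer to the artist's composition
              guidance_scale=7.5,
          ).images[0]
          result.save("detailed_pass.png")   # the artist then touches this up by hand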

      • That's another problem, we got very used to half-assed, shoddy work and we settled for it. First because it was cheap, but increasingly because, well, that's all you actually CAN get at affordable prices. Or at all. You can't even get certain things for good money anymore. The first TV we had cost about three times what my dad earned per month. But it lasted us, with a few repairs, almost two decades. I could not even buy such a TV anymore. Not even for a quarter of my annual income. It does not get produce

        • by vadim_t ( 324782 )

          That's a heavy case of nostalgia goggles. Lots of old stuff was crap. Most old stuff that was good was good because it was extremely simple, which creates other unpleasant tradeoffs.

          My grandma had a tube TV. I don't think I ever remember it being not broken. Tubes aren't that reliable when you have lots of them I guess. Very old cars are unpleasant, like those with the crank that can break your wrist if you're not really careful with them.

          And even had that TV lived forever, who wants it? It was huge, amazingly pow

          • by Junta ( 36770 )

            Very old cars are unpleasant, like those with the crank that can break your wrist if you're not really careful with them.

            Hell, you don't have to go that far back. Cars needed so much more maintenance, and punished you hard if, for example, they sat for a few weeks without being started. You pretty much needed to be a bit of a mechanic to live with a car at all; nowadays you don't have many mechanical maintenance needs except every 10,000 miles or so, and even then it's pretty simple.

        • Haha, the two-decade TV is funny; I had a black-and-white one too. But AI is different: the same GPUs that three years ago were only good for games now generate images and text at a near-human level. Build your app on ChatGPT and it will be upgraded automatically as it gets smarter; you don't even need to recompile.
        • by Junta ( 36770 )

          But it lasted us, with a few repairs, almost two decades

          Well a few things in play:
          -Survivorship bias. I remember all sorts of crap breaking back in the day. You tend to forget those over time and focus more on that product in your house that is amazingly still working. I haven't heard of one of my friends or family having to deal with a broken modern set; mostly I've heard of people replacing sets for the sake of newer tech or bigger screens as the bigger screens get cheaper.
          -"with a few repairs". The cited device didn't even last, it required repairs to keep going. Repairin

        • I don't know. I've had my surround sound, receiver, and projector for about a decade and it's still going strong. Sure, it's "only" HD but I'm perfectly happy with it. Unless the bulb burns out (one day it will) I'll be good to go. It's definitely more than paid for itself in value at this point and I still have no real reason to run out and buy a 4K projector.

          My desktop computer is pushing 13 at this point. I've upgraded everything on it I possibly can and am kind of waiting for it to just break as an exc

      • AI image generation when combined with an actual artist is amazing. You can have an enormous amount of exquisite detail generated from a few rough sketches and a bit of touching up later. The artist is still the creative driving force; the AI does the grunt work of painting details that wouldn't be economical to do by hand.

        Does this mean we'll have better looking, more realistic, more detailed anime like we had back in the 80s when things were done by hand? The future is awesome!

        • by vadim_t ( 324782 )

          Here's some people experimenting with making a video to anime filter with AI: https://www.youtube.com/watch?... [youtube.com]

          Now it's wonky because it's one of the first attempts to do that. But try and imagine a group of people doing that with traditional animation just for the heck of it. I don't think that video would be doable without modern tech.

          Also check out what people are playing around with: https://www.reddit.com/r/Stabl... [reddit.com]

    • > What it will not do is innovate.

      You're right, it won't if you use it like a text completion or chat app. You need to organise large-scale search and evaluation to get new insights. For example, AlphaTensor was an AI that invented a better way to do matrix multiplication, better than all the human solutions. But that came from searching with AI, not generating one single response. The AlphaGo bots beat humans at Go, but they also run massive search (MCTS). There is a paper called "Evolution thro
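
      (A toy, hedged sketch of that "search plus evaluation" pattern: propose many candidates, score each with an external evaluator, keep the best. The propose() and score() functions below are dummy stand-ins; in a real system propose() might sample from an LLM and score() might run a benchmark, a proof checker, or a simulator.)

          import random

          def propose(rng):
              # Stand-in for a generative model: guess coefficients of a quadratic.
              return [rng.uniform(-5, 5) for _ in range(3)]

          def score(candidate):
              # Stand-in evaluator: how well does the candidate fit y = 2x^2 - 3x + 1?
              a, b, c = candidate
              xs = [x / 10 for x in range(-20, 21)]
              err = sum((a * x * x + b * x + c - (2 * x * x - 3 * x + 1)) ** 2 for x in xs)
              return -err   # higher is better

          def search(n_candidates=10_000, seed=0):
              rng = random.Random(seed)
              best = max((propose(rng) for _ in range(n_candidates)), key=score)
              return best, score(best)

          if __name__ == "__main__":
              best, s = search()
              print("best candidate:", [round(v, 2) for v in best], "score:", round(s, 3))
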
    • In a certain way, if we get these chatbots half-decent at data generation where they aren't just spewing bullshit like everybody's weird uncle after the second six pack of the evening, it could free up the real thinkers for coming up with innovations, rather than treading water just trying to keep current systems propped up and sorta/kinda running. That said, we're a long way from that even, so I'm not sure why the hype train accelerated so fast, except for the fact that it's making somebody an insane amoun

  • by jythie ( 914043 ) on Monday June 05, 2023 @06:04AM (#63576675)
    Any time there is pent-up demand for investment and expectations of huge Amazon-like returns, you'll get some well-marketed fad to satisfy it. Since crypto and NFTs appear to be on the way out, ChatGPT seems to be the next one, with boosters throwing money around hoping SOMEONE will find a killer use for it before the bubble bursts.

    Or ChatGPT will go to the moon I guess.
  • by FridayBob ( 619244 ) on Monday June 05, 2023 @06:34AM (#63576699)
    Soon, all IT jobs would have been lost and it was going to be back to the Stone Age for all of us. There was nothing anyone could do to stop it. Just look at the stock market; the investors knew this was coming. The stench of death was in the air! But, luckily, that's now all been averted. Phew! And it's all thanks to the rise of ChatGPT and other AI projects that have come just in time to save us. So, let us rejoice and sing the praises of our new AI overlords, for now the future is looking infinitely less bleak than this article was predicting it was surely going to be!
  • Should we call companies that reach 1T Superunicorns?
  • Nobody even cares about my ugly monkey NFT anymore.
    I wish I invested in degenerative AI systems.
    Then everyone would think that I am cool again.

    • The difference is, crypto (blockchain) was a solution looking for a problem to solve. AI has the potential to solve a whole host of problems currently plaguing all kinds of industries. The current AI hype bubble is very frothy, but there is a solid layer at its core that will not go away.

  • by Tablizer ( 95088 ) on Monday June 05, 2023 @08:44AM (#63576861) Journal

    A fad a day keeps the pink slips away.

  • IT is basically an industry that desperately tries to prevent itself from becoming established and doing regular business, because that would mean regular profits based on the actual merit of their products. As we all know, those merits are often not very impressive, and that includes the really big names. "AI" is another hype where they can make "investors" (a.k.a. "Morons With Money") pump in a lot of money without actually having to deliver anything solid beyond flashy demos and simplistic services. And they all ea

  • The reason for all the excitement is that CEOs are salivating at the prospect of replacing us with "AI".

    Something I don't think we've realized is that LLMs have taught CEOs that anything can be automated. Now, maybe they're wrong, but they're going to try. And there's a *ton* of jobs in IT that could be automated but the CEOs just haven't got around to it. They're going to go top to bottom across the entire enterprise and look for jobs they can automate. Then in come the bean counters for the layoffs.
  • I stopped reading after the first paragraph, which contradicts the premise of the article, as if AI will stop layoffs in tech and elsewhere... The stock market will save us, yeah right!
  • Deflated, probably. Way too much hype, way too little results.
  • "...a wave of layoffs was beginning to sweep through the industry. Then the artificial intelligence boom hit."

    Considering the next revolution is targeting the human mind, which won't just make a human temporarily unemployed but permanently unemployable, I'd love to know what the latest definition of "boom" is, because I can already confirm the definition of "gullible" hasn't changed.

    Whatever minor pause you might see with layoffs in the tech sector is temporary, until every CEO in every sector can figure out how to fire twice as many when good-enough AI becomes available to replace that good-enough human worker. T

  • Retraining as what? How many jobs are there 'in AI'? Maybe I should ask ChatGPT as I have no clue what anyone means by the term - surely it's just data science at the end of the day?
  • /venting Yup, the tech industry was deflating. Now it is filled with lots of hot air again.
    All of a sudden, decade-old control and optimization techniques are relabeled as AI and will revolutionize everything.
  • The only thing AI is useful for is writing tedious formal emails for you.

    It's a waste of resources and doesn't actually do anything beyond reinforcing the dumbing down of the next generation.
