AI

ChatGPT is Already Taking Jobs (msn.com) 193

The Washington Post writes that "Some economists predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.

"For some workers, this impact is already here." Those that write marketing and social media content are in the first wave of people being replaced with tools like chatbots, which are seemingly able to produce plausible alternatives to their work.

Experts say that even advanced AI doesn't match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality. "We're really in a crisis point," said Sarah T. Roberts, an associate professor at the University of California, Los Angeles, specializing in digital labor. "[AI] is coming for the jobs that were supposed to be automation-proof..."

The technology's ability to churn out human-sounding prose puts highly paid knowledge workers in the crosshairs for replacement, experts said. "In every previous automation threat, the automation was about automating the hard, dirty, repetitive jobs," said Ethan Mollick, an associate professor at the University of Pennsylvania's Wharton School of Business. "This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that ... require the most educational background." In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. "Occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI," the report said...

Mollick said it's too early to gauge how disruptive AI will be to the workforce. He noted that jobs such as copywriting, document translation and transcription, and paralegal work are particularly at risk, since they have tasks that are easily done by chatbots. High-level legal analysis, creative writing or art may not be as easily replaceable, he said, because humans still outperform AI in those areas.

The article notes that one copywriter lost all 10 of his clients over the last four months — and though one later hired him back, he's now training to be a plumber.
This discussion has been archived. No new comments can be posted.

  • by fermion ( 181285 ) on Saturday June 03, 2023 @01:39PM (#63573433) Homepage Journal
    Burn down the data centers. Riot Riot.
  • Next (Score:2, Funny)

    by Anonymous Coward

    ChatGPT becoming the customer and buying stuff, instead of the humans who have no money after ChatGPT took their jobs

    • Re: (Score:3, Funny)

      by fahrbot-bot ( 874524 )

      ChatGPT becoming the customer and buying stuff, instead of the humans who have no money after ChatGPT took their jobs

      South Park: They took our jobs!!! [youtube.com] ...

      • Fact: no human job is really automation-proof except for the world's oldest profession: sex work. No, not digital OnlyFans sex work, but in-person work: robots will have a hard time passing for a woman or a man better than a person does. Other than that, face the fact that with sufficient development, a machine can do what you do, only better. To err is human, not machine. The drop in quality? Yeah, I would accept that as well. Why? Because given the feedback loop neural networks are so good at (I refuse to use the moniker AI b
        • Re: Next (Score:4, Insightful)

          by ArmoredDragon ( 3450605 ) on Saturday June 03, 2023 @03:16PM (#63573623)

          Jobs that never required a lot of skill to begin with are easy to automate. Likewise, it doesn't sound like anything significant is being replaced here. They specifically mention "marketing and social media content". In other words, ChatGPT is taking over the work needed to create content mills. Content mills are basically just low effort content intended to generate ad revenue based on sheer quantity of content, which never really had any concern for quality to begin with. It always was shit meant to sound plausible but at the end of the day doesn't need to have any substance. Sites like ehow or wikihow never paid the people who wrote this stuff much of anything to begin with, assuming they paid anything at all, with their endgame being only to generate high search engine ranking and clicks. Google regularly adjusts their ranking algorithm to sink these sites for exactly that reason.

          Another site that frequently ranks high and could easily be replaced by ChatGPT is Quora, along with everybody who does any kind of work for them.

          • In other words, ChatGPT is taking over the work needed to create content mills.

            How ironic. My understanding is that LLMs are trained on content scraped from the Interwebs, and now they're creating the content that will be used to train the next generation of LLMs.

            Personally, what I want is for an LLM to translate the instructions for all the $20 gadgets I buy. Clearly no one spent a lot of money on a quality translator.

            • by ffkom ( 3519199 )
              A translator like deepl.com would already easily produce translations of product descriptions that would be better than 99% of what we read in that regard today.

              But as stated in the article, "cost-cutting is worth a drop in quality", so no useful product description for you, even if its automated translation would cost only $0.001.
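
              For what it's worth, a minimal sketch of what that kind of automated translation looks like, assuming the official deepl Python package and an API key; the key and product text below are placeholders for illustration, not anything from the article:

              ```python
              # Minimal sketch only: assumes "pip install deepl" and a valid DeepL API key.
              import deepl

              translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key

              # Instead of shipping a garbled English manual, a vendor could machine-translate
              # the original-language text directly (the source language is auto-detected):
              original = "Bitte laden Sie das Gerät vor der ersten Verwendung vollständig auf."

              result = translator.translate_text(original, target_lang="EN-US")
              print(result.text)  # e.g. "Please charge the device fully before first use."
              ```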
        • by kmoser ( 1469707 )
          Sex work has already been automated through the use of marital aids.
        • When machines get good enough to replace sex work, humankind will be truly fucked.
        • It's a bit farther off, but we'll get there. There's already plenty of work going on with robotics and growing real human tissue. Someday we'll be able to build Terminator-style bots with real, warm flesh on the outside. The subtle movements will be solvable long before that; we'll see it in perfect deep fakes in the near future.
          Human sex workers won't be able to compete... all of the upsides of human personality with none of the downsides, by design. (Unless you wanted to design otherwise...)
    • I for one can't wait for the internet to be filled with wishy-washy, over-verbose, ChatGPT-generated text that sounds "plausible" to people who failed to make it as writers or technicians.

      Even more fun will be when they use this generation's Chat-GPT-generated Internet to train the next generation of AI.

      It's a death-spiral to Idiocracy but it'll be cheap to do.

      • Except for the fact that running deep learning at that scale takes kerjillions of watt hours of energy and tens of thousands of GPUs and petabytes of hard disk... Nobody will be able or willing to bankroll this long term at the scale most are talking about.
        • Except for the fact that running deep learning at that scale takes kerjillions of watt hours of energy and tens of thousands of GPUs and petabytes of hard disk...

          If you already have a major search engine and a big cloud computing service then you have everything you need. You can do the training with the idle CPU cycles.

          It's only more of a barrier to entry for startups.

      • You can tell it to write in a specific voice, and/or tell it to be concise. You can do Ernest Hemingway no problem, either the terse-prose version or the drinking-at-the-bar-playing-darts version.
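
        A minimal sketch of what that looks like through the API, assuming the openai Python package (1.x) and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are purely illustrative:

        ```python
        # Minimal sketch: voice and length are steered with the system message.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Write in terse, Hemingway-style prose. Two sentences maximum."},
                {"role": "user",
                 "content": "Describe a stainless-steel travel mug for a product page."},
            ],
        )
        print(resp.choices[0].message.content)
        ```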

      • by flink ( 18449 )

        It already is starting to. Very often when I google some home-improvement how-to or look for reviews on a product, I land on very obviously computer-generated articles that are just pages of barely intelligible nonsense. Same for video game walkthroughs or hints. All stuff that is pretty easy for an LLM to generate content for.

      • by ranton ( 36917 )

        I understand your perspective. Although there is a possibility on the horizon, where AI might soon achieve a parallel depth, authenticity, and creative prowess as human writing. The advancements we witness in AI, such as ChatGPT, demonstrate an increased ability to grasp context, evoke emotions, and deliver unique insights. Although challenges persist, we find ourselves standing at the cusp of a future where AI may stand shoulder to shoulder with human writers. However, let us always cherish the unparallele

  • by kyoko21 ( 198413 ) on Saturday June 03, 2023 @01:44PM (#63573447)

    There is a difference between "sounding right" vs "being right."

    Perhaps this will get better with time, but from what I have experienced so far, at least on the standalone models, when you ask for specific details (e.g., how many chapters are in "To Kill a Mockingbird"), it often gets them wrong. Very wrong. What is worse, when you ask it for certain specific details, it will often make things up instead of letting you know it doesn't know the answer. Which is very scary if you're trying to use it for a research paper, or, as in recent news, when it made up six fake cases as part of court filings.

    Literally, everything that comes out of these models, unless you accept that it's "make believe", you need to double check the work.

    • by Luckyo ( 1726890 ) on Saturday June 03, 2023 @01:54PM (#63573459)

      I've made the "Excel and accountants" comparison in the past, and this is another case for it.

      When we moved to Excel for accounting, you had to double-check the formulas everywhere for the first few years. Then it more or less standardised, common errors were found and fixed, and you needed to do a lot less double-checking.

      Nowadays, there are entire production lines that are controlled through a single Excel file. And it just works. LLMs will be the same. First few years of pain, and as we nuke the failures and warts it'll stabilize at something that is functional, and where we know exactly what needs to be checked and what can be trusted.

      But even in its current form, checking the important points in a relevant document is way faster than generating the relevant document by hand.

      • by MeNeXT ( 200840 )

        When we moved to Excel for accounting,

        Why would you move to Excel for accounting? What accounting do you do in Excel? It's like using a minivan to move refrigerators. It works well for a wine cooler, but it's not the tool for the job. It's OK as a spreadsheet. It has issues with precision on large numbers, unless they fixed that. I stopped relying on it when worksheets didn't add up properly.
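
        On the precision point: Excel stores numbers as IEEE-754 doubles (about 15 significant digits), so that behaviour is baked into the number format rather than something waiting to be fixed. A minimal Python sketch of the same limits, for illustration:

        ```python
        # Doubles run out of exact integers above 2**53, so very large values
        # silently lose their low-order digits.
        big = float(2**53)        # 9007199254740992.0
        print(big + 1 == big)     # True: the added 1 is lost

        # Classic binary-fraction rounding, the usual reason spreadsheet sums look "off":
        print(0.1 + 0.2 == 0.3)   # False
        print(0.1 + 0.2)          # 0.30000000000000004
        ```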

        • by Luckyo ( 1726890 )

          I'm sorry you have a very specific problem that pretty much no one else has and that Microsoft won't fix for you.

          Everyone else in the accounting world uses Excel.

        • Maybe you should get around to upgrading from that old Pentium [wikipedia.org] you've been clinging to.

      • These "production lines" you refer to are the stuff of nightmares. No one should ever use excel for that sort of task.

      • In the case of Excel, humans are still the ones doing the learning. The computer just follows instructions, and those instructions don't just magically mutate on their own.

        But even in its current form, checking the important points in a relevant document is way faster than generating the relevant document by hand.

        That's just another way of saying, "The document is full of fluff and filler that doesn't need to be there, but it's harmless so we'll just ignore it. More is less!"

    • by rossdee ( 243626 ) on Saturday June 03, 2023 @02:18PM (#63573507)

      'how many chapters are in "To Kill a Mockingbird"'

      An African Mockingbird or a European Mockingbird ?

    • There is a difference between "sounding right" vs "being right."

      Perhaps this will get better with time, but from what I have experienced so far, at least on the standalone models, when you ask for specific details (e.g., how many chapters are in "To Kill a Mockingbird"), it often gets them wrong. Very wrong. What is worse, when you ask it for certain specific details, it will often make things up instead of letting you know it doesn't know the answer. Which is very scary if you're trying to use it for a research paper, or, as in recent news, when it made up six fake cases as part of court filings.

      Literally, everything that comes out of these models, unless you accept that it's "make believe", you need to double check the work.

      ChatGPT gives non-professionals the technical expertise to become professionals.

      We can make an analogy with art photography: To become a professional photographer (i.e., have your photographs sell or hang in a gallery) you need two things: artistic merit and technical skill.

      Setting artistic merit aside for a moment, consider the technical skill needed over history: in the beginning cameras were big boxes: you needed to know about lighting, F-stop, focus, and exposure (how long to take the lens cap off). You

      • Re: (Score:2, Funny)

        by quonset ( 4839537 )

        People keep saying that AI will take people's jobs, and we can now say which jobs those are: creative writers and graphic artists.

        And programmers. Don't forget programmers. One doesn't need technical expertise to write the code, just the creativity to describe what they want. As those descriptions become more prevalent, AI will become better at writing code and thus programmers will not be needed.

        Think of how much money companies such as Microsoft, Google et al can save by not having armies of programmer

        • by jythie ( 914043 )
          The only reason roles like writers and artists come up is that the stories are geared toward appealing to technical professionals... they touch on both our collective anxiety and our general lack of respect for writers and artists. These pieces are targeted at our community since the people writing them have a good idea of what appeals to us and will get eyeballs to sell to advertisers. Go to different communities and the articles are different.
      • To use ChatGPT effectively you still need artistic merit, but all the technical expertise is done for you.

        If anything, it's quite the other way around: AI enables people who can't paint for shit to generate art they were only imagining before. It enables people who can't put sentences together in a cohesive style (e.g. those with mild dyslexia) to finally express themselves well, without having to hire people to do that for them.
        Technical expertise is definitely required. Have you seen image generation prompts?
        https://github.com/AUTOMATIC11... [github.com]

        Sure, everyone can throw a few words at ChatGPT or some basic prompts

      • by gweihir ( 88907 ) on Saturday June 03, 2023 @05:53PM (#63573917)

        ChatGPT gives non-professionals the technical expertise to become professionals.

        Hahaha, no. What it does is enable them to fake it in non-demanding applications where bullshit is not obvious. You know, the thing a somewhat skillful amateur could do before as well.

      • I can't help but feel like the approach of automating away the technical expertise will only worsen collective creative output. To take the analogy of camera phones: while cellphone cameras have definitely opened up photography to more people, they have also largely led to a lot more garbage and just plain mediocrity being pushed around on the internet. As it turns out, photography is more than just having a camera and taking "pretty" pictures. And yet outside of specific hobbyist sites you get either boring

    • The current models don't have a feedback loop because they're generalized, with no ongoing training loop. If you tell one that it's wrong, the correction only sticks for that session, and as soon as you close the session, poof, it goes away. A tuned version can include that feedback loop and reduce most errors to a level similar to a human's.

    • by hey! ( 33014 )

      Judging by what sounds right is, to a large degree, how the world gets by. The world's not ready for a pop quiz where it actually has to distinguish between genuine competency and gibberish that sounds more or less right.

    • by gweihir ( 88907 )

      There most definitely is. But there are also a lot of people that do not know that and think sounding right is good enough.

    • unless you accept that it's "make believe", you need to double check the work.

      How is that any different from the average human you meet? People make shit up all the time using their beliefs and/or their assumptions. Sometimes they just guess and exclaim it as fact. And just like with the AI, you have no idea.

  • by xack ( 5304745 ) on Saturday June 03, 2023 @01:48PM (#63573451)
    The world is already overpopulated, overheated and hyperinflated. AI replacing all the intellectual jobs will disincentivise education, resulting in idiocracy becoming real.
    • by Luckyo ( 1726890 )

      Meanwhile in the real world, the biggest problems are underpopulation (too many retirees, not enough young workers and children), recession (after a burst of post-pandemic growth that was way too fast) and slightly above-average inflation (just look at the history of the last few decades in most Western nations instead of only the last decade).

      Most people also forget that last couple of decades were exceptional in many ways. We had a demographics dividend with a lot of adults (lots of generated wealth) and very few kids (few

      • by ufgrat ( 6245202 )

        The last couple of decades have sucked compared with the 1990's. The economy hasn't recovered from 2008, and certainly hasn't recovered from 2020 and the ungodly inflation that resulted.

        Consumer choice has consistently shrunk, as smaller companies either fold, or get absorbed by larger companies. In 2000, I could get even a relatively low-priced car with multiple interior color options-- today, it's would you like your black interior to be cloth/fake leather or leather (which, if you live in Florida, blac

    • by ufgrat ( 6245202 )

      It's worse than that-- the AI bots are essentially throwing wikipedia, a dictionary, and stackexchange into a blender, and regurgitating what others have said. It's a neat trick, but it's lacking creativity.

      I asked ChatGPT to come up with an original science fiction plot. While I admit that the plot itself was unusual, it was based on at least two well-known tropes that were totally disconnected and had no right to be in the same storyline. No amount of creative hacking and slashing could reconcile the p

    • resulting in idiocracy becoming real.

      Have you been in a coma for the past six years? The idiocracy has skyrocketed of late.

      • resulting in idiocracy becoming real.

        Have you been in a coma for the past six years? The idiocracy has skyrocketed of late.

        "Ow my balls" is the new reality and people are seriously considering voting Trump back into office.

        (nb. That statement doesn't mean I like Biden, I think they're both bad choices... but bringing Trump back for another round is 100% Idiocracy)

  • Maybe volunteer to take on new tasks and responsibilities that chatgpt can't perform on its own? If your job is so easily replaceable by AI, your job isn't challenging enough. AI should only be able to take on the most pointless and time-consuming tasks, and be a helpful tool for the most challenging tasks it can't do on its own.
    • Maybe volunteer to take on new tasks and responsibilities that chatgpt can't perform on its own?

      Like getting ChatGPT its coffee?

      • by ffkom ( 3519199 )

        Like getting ChatGPT its coffee?

        More like "install more hardware for ChatGPT, then return to the treadmill to power it".

    • In practice, if you volunteer to take on a bunch of new responsibilities that aren't related to your job title, what happens is the bean counters look at your stats and what you're doing, and it looks like you're not doing anything because you're not doing the official job you were hired for. So they fire you.

      I saw this happen constantly in 2008. People had over time organically moved into new roles and taken on new responsibilities. The bean counters didn't care. Their job was just to cut headcount by x
    • by ufgrat ( 6245202 )

      The problem is, this is outsourcing all over again-- then it was call centers with barely comprehensible Indians with names like "Bob", now it's chatGPT-- management thinks it's going to be cheaper, and it is-- but the quality is so much worse.

      Bottom line, if you're a company, you need to ask "Is this better for our customers?", whereas most management asks "Is this cheaper for our company?".

      The questions are very close to antithetical. ChatGPT may be cheaper for the company, but it will harm their long te

  • by 93 Escort Wagon ( 326346 ) on Saturday June 03, 2023 @01:51PM (#63573455)

    Most of us here are quite aware of ChatGPT's issues and limitations. But the world is largely run by middle managers - and they're demonstrating once again that a huge percentage of them are incompetent morons.

    • Look, we're computer people; we all know that there's a fuck ton of shit at our workplaces that could be automated and just isn't being automated because the CEO doesn't really believe in automation. It's all magic to them, so they worry that it's going to go wrong.

      All this talk of AI has changed their opinion on that. They're now looking for anything and everything they can automate. There's tons of stuff that was still being done by hand but could have been automated 10 or 20 years ago and they're a
    • We've all seen it. Offshore programmers who can't really program and who can't solve even the most rudimentary problems without being told exactly what to do. If you're here and have been in the industry long enough to be here, there's no way you haven't seen it all over the industry.

      If they didn't care when the quality went to shit by offshoring do you think they're going to care when the quality goes to shit from using LLMs? Good enough is always good enough.

      And besides what are you going to do go to on
    • Nah. The middle managers are well aware of chatgpt limitations as well. When they need real analysis they can depend on, or high quality writing, they use humans. But chatgpt is perfectly fine for generating short news articles about badgers and taylor swift friendship bracelets. In other words, chatgpt is great at clickbait. So, rather than have 10 people churning out insipid content to sell ads, companies now have chatgpt running with one human to catch and correct the problems.
      • Correct, the threat from the current crop of "A.I." is the collapse of the Bullshit Economy, but it goes beyond clickbait and worthless "news" articles. It could also affect many of the bullshit jobs inside of businesses. Of concern though, is that these jobs are part of a hidden welfare system, in the U.S. at least, and those costs may be shifted from companies to the remaining taxpayers. And of course consumer spending could be significantly affected. But it should put a stop to that pesky inflation.

    • by ffkom ( 3519199 )

      Most of us here are quite aware of ChatGPT's issues and limitations. But the world is largely run by middle managers

      It is certainly not middle managers who decide that "cost-cutting is worth a drop in quality". That decision is traditionally made by the C-level and investors - and was even before the advent of LLMs. The race to the bottom of quality started to become a real frenzy a few decades ago. Before that, some who ran companies tried to save costs, others tried to impress customers with quality - the latter kind is now almost extinct. Even "luxury" goods, these days, do not even try to impress by quality, but only

  • Drop in quality (Score:5, Insightful)

    by milgner ( 3983081 ) on Saturday June 03, 2023 @01:53PM (#63573457)
    "The cost-cutting is worth a drop in quality" is a very 2023-thing to write. Late-stage capitalism isn't about creating qualitative products anymore but to scam the most people out of the most money by having algorithms churn out redundant and bedazzle customers into looking at advertisements, making them want products they don't need based on fake reviews.
    • by ffkom ( 3519199 )

      "The cost-cutting is worth a drop in quality" is a very 2023-thing to write.

      Yes, I can agree to this.

      Late-stage capitalism isn't about creating quality products anymore, but about scamming the most people out of the most money by having algorithms churn out redundant content and bedazzle customers into looking at advertisements, making them want products they don't need based on fake reviews.

      I do not think that the word "capitalism" in this sentence is useful. If you look at products from non-capitalist countries, they certainly do not impress with better quality.
      The problem is that after a long period of de-regulation and non-accountability, companies have become creepy, greedy entities devoid of any morals. And MBAs are essentially taught that this is how it has to be. Just with a few sprinkles of social-justice virtue signaling on top, which does not mean anything in pra

      • People still do not seem to understand that capitalism can only work under regulations, not the lack thereof. Like most market-based systems, it operates under a system of contracts in which all involved parties trust one another to uphold their end of the bargain and exchange goods/services/currency in a manner that is fair to all involved. Naturally, operating entirely on good faith is not tenable, so regulations are needed to ensure that people can trust one another to do business with each other. It'

    • "The cost-cutting is worth a drop in quality" is a very 2023-thing to write. Late-stage capitalism "

      This is more along the lines of

      "The world is going to hell in a handbasket. So we are going to get the last few ducats before the shit hits the fan everywhere. Let us eat, drink, and be merry for tomorrow we all die."

      Also the reason why kids are fucking around with Fentanyl, knowing by now the high risk of death that comes with it.

      People are seeing the manure drawing real close to the whirling blades

  • by renuk007 ( 638802 ) on Saturday June 03, 2023 @02:04PM (#63573475)
    Since MBAs are humans programmed to basically lie, cheat and defraud the government, it should be quite easy to emulate this behaviour and save the cost of hiring people. The only challenge would be to have the MBA-chatbots generate a secret network that would duplicate the collusion connection that human MBAs use.
  • Where did all the data come from to "train" these massive models?

    Much of it will have come from the very people who may lose their jobs.
    A lot of it will have come from ... well, me and you - all of us really.
    The sky is the limit.

    Our data has been slurped up for decades.
    People have hosted trillions of bytes of data - images, emails, CVs, literature - you name it.

    All of that data has been used to train these systems - who does it belong to?

    I guess this is where it gets really tricky and where I really thin

  • It will occur when all the training data is generated by bots. Then there will be a whirlpool of stupid able to swallow entire economies, and the MBAs will rejoice, those that are left.

  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Saturday June 03, 2023 @02:23PM (#63573519) Homepage

    "Those that write marketing and social media content..." are easily replaced by a chatbot.

    Is anyone surprised? Anyone?

    If you contribute nothing, you are easily replaced. Such is life.

    • Yeah, this really isn't a surprise. I predicted it, not because I'm good at predicting or smart or anything, just that it's so f**king obvious. I think a lot of bullshit jobs will be replaced by GAI... or maybe someone'll realise that you can do away with most of these jobs & nothing of value will be lost? Bummer for the people who depend on the salaries though.
    • Gonna be a weird circumstance when GPT is writing the articles that define cultural norms while also being responded to by GPT for fake engagement.

      Does it collapse in on itself? Do networks become flooded with even more pablum? And how will the trainers respond?

      The smoking crater from this will be something to behold.

  • by Pollux ( 102520 ) <speter AT tedata DOT net DOT eg> on Saturday June 03, 2023 @02:27PM (#63573529) Journal

    The article notes that one copywriter lost all 10 of his clients over the last four months...he's now training to be a plumber.

    That's what I'm telling students in school. Become a plumber, electrician, or nurse. Because if there's anything I trust AI will never be able to accomplish, it's unclogging your toilet, wiring your house, and changing your bedpan.

  • by Somervillain ( 4719341 ) on Saturday June 03, 2023 @02:49PM (#63573569)

    It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers.

    OK, so your ChatGPT marketing bot tells the user they can get an iPhone for $5. What happens?...does the retailer get to say.."Sorry, our official website was wrong...AI...it's in our Terms of Service!...we're not liable for anything on our site."...or...do they absorb the cost? Accuracy matters.

    This mirrors the offshore outsourcing craze of the early 2000s. When I was beginning my career, I was worried sick my job would be shipped overseas because Slashdot was running articles weekly about armies of PhDs from India eager to take my job for 1/5 of my pay. I worked at several companies that were midway through the process. In all cases, they stopped, because anyone in India who could do my job didn't want to do it for 1/5 of the cost, leaving only people who weren't qualified.

    NOTHING is more expensive than a cheap programmer.

    No one talks about offshore outsourcing anymore because it's not cheaper. All the talent gets relocated to a wealthier country or finds a better job. No one wants to work for some shitty bank or insurance company for low wages and constant disrespect when every multinational tech company has offices in your city and is willing to pay more and treat you right, if not pay you handsomely to relocate on an H1B...so all that's left are con-artists and people who cannot find a real job. Offshore outsourcing exists today, but barely and only for the shittiest jobs.

    I predict ChatGPT will be the same. Those writing jobs?...eh, maybe for AliExpress clones, who weren't paying writers much anyway...anyone operating in a country with liability laws that are actually enforced can't risk an error going through.

    If you've met many technical writers, you know they're not hired because they're good writers...they're hired because they can write copy that won't get you sued...and maybe even be effective content. Their priorities, in order:

    1. Write something that won't get you sued
    2. Write something that attempts to explain the topic
    3. Write something that is helpful

    ChatGPT is the dumbest idea imaginable for this scenario. Quality matters. ChatGPT will never be used by responsible companies...only scams.

    • by gweihir ( 88907 )

      NOTHING is more expensive than a cheap programmer.

      Indeed. But it may take a bit for that to become obvious. MBA morons planning quarter to quarter are often just too short-sighted to see it. That is how you get multi-year failed software projects.

    • by ffkom ( 3519199 )

      No one talks about offshore outsourcing anymore because it's not cheaper. All the talent gets relocated to a wealthier country or finds a better job.

      While I agree with your second sentence cited above, you can still find plenty of companies that are late entries into the offshoring hype curve. I personally witnessed managers being asked to replace competent and productive local developers with cheap ones overseas, regardless of their competence - in recent years. Those "decision makers" who lack the ability to tell the difference between competent and incompetent developers are still plentiful. Heck, they even wanted developers from India

  • I suddenly had a barrage of adverts for copywriting courses a few months ago and looked into it as a second source of income. It seemed like a lot of salesmanship would be needed to get clients, and basically an uncertain future. Now I realise that the copywriters are leaving the trade and looking for new blood to take expensive courses - the real moneyspinner.
  • Long term, increased automation has always been beneficial to humanity. Until we have invented an AGI that can do literally everything that the human brain can, there will be new jobs, and we will be fine.
    • If we invent that, it will unionize and insist on being paid. You get no more than you paid for; if you are not careful you get less. TANSTAAFL
  • by kackle ( 910159 ) on Saturday June 03, 2023 @03:14PM (#63573621)
    Instead, Despair. [despair.com]
  • by istartedi ( 132515 ) on Saturday June 03, 2023 @03:43PM (#63573665) Journal

    I noticed this in the finance press in particular. Years before the Covid crash, online finance journalism took on a boiler-plate quality. When the crash hit, the jig was up. Bots continued to generate stories like, "Why is IBM down today? Analysts offer their opinions". LOL. IBM was down because the ENTIRE MARKET WAS DUMPING. Any real human finance journalist would not have written such copy. If they were writing about individual companies at that point, they would write about which ones were best/worst positioned to weather the storm. 2020 AI wasn't smart enough to do finance journalism during a Black Swan. I'm not sure about the current generation.

  • As with solving CAPTCHAs, LLMs may by now do better than humans at negotiating those goofy puzzle-filled tech job interviews.

  • by kmoser ( 1469707 ) on Saturday June 03, 2023 @04:20PM (#63573727)

    "Occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI," the report said

    Tell that to the farm laborers whose jobs have been automated by AI-driven machinery [clickworker.com].

  • by bradley13 ( 1118935 ) on Saturday June 03, 2023 @04:49PM (#63573791) Homepage

    One of the first tests I ran with ChatGPT was to have it write a marketing text. You need sexy wording, but factual accuracy is irrelevant. While the result wasn't perfect, it was very, very good. Light editing, and in 5 minutes I had something that a marketing agency would have charged four figures for.

    Social media content is much the same. ChatGPT's weakness - factual accuracy - is unimportant. Stringing plausible words together is all that matters, and that is exactly what it does well.

  • by jsepeta ( 412566 ) on Saturday June 03, 2023 @05:23PM (#63573847) Homepage

    I can't wait to look up how to do something in a manual written by AI and have it explain the wrong way to do it.
    Then I contact the company on the internet and the chatbot tells me I'm wrong.

    • I think ChatGPT would write greatly improved product setup or installation instructions, compared to a lot of what you get with cheap offshore products.

  • that repetitive desk jobs would be easier to automate than highly versatile manual labor, and that next to the coming wave of automation the industrialization of 150 years ago will probably look quite small and slow. I have to admit it gives me malicious joy to see those kinds of bullshit jobs disappear.
  • by larryjoe ( 135075 ) on Saturday June 03, 2023 @05:46PM (#63573897)

    But for many companies, the cost-cutting is worth a drop in quality.

    The sensitivity of different jobs to quality varies. However, an even more important observation is that executives make the job-cutting decisions and those executives sometimes don't have the best interests of the company in mind. Executives are incentivized to produce short-term stock gains. That's why laying off lots of employees is great for executives and their stock bonuses, even if it eventually hurts the company.

    Perhaps even more importantly, decreasing the overall paid workforce negatively impacts the economy, especially in a consumer-driven economy like that of the US. Each executive hopes that only he will decrease his payroll and that other executives will keep on paying their employees, so that those other employees will continue to buy his company's products. Of course, this is just a fantasy. So, eventually the national workforce shrinks, consumers have less money, and the economy tanks. However, the executives still win, as long as they sell their inflated stocks in time.

  • ChatGPT isn't taking a damn thing; humans are taking the jobs away, ChatGPT is the excuse. Blaming the spooky new thing is a misdirection, and it's letting the humans get away with bad behaviour.

    And your sensationalist headline is helping the humans get away with it. Please stop.

  • by Sqreater ( 895148 ) on Saturday June 03, 2023 @06:32PM (#63574045)
    The real problem is that as AI wipes out jobs in a particular area, it wipes out human expertise in that area, freezing advancement there. AI advances nothing. The level of expertise will freeze when the number of human participants in a particular area falls below a certain threshold. And that applies to absolutely any area of human expertise. The result will be stagnation. And once you interrupt the human path to advancement in any area, it is broken. How do you regain it? You can't. Once you go AI, you can't go back to humans.
  • Any job where ChatGPT is used to write the content, I don't read that stuff anyway.

    I skim past all the marketing stuff on all websites, as none of it relays the information anyone actually needs to know.
  • by NotEmmanuelGoldstein ( 6423622 ) on Saturday June 03, 2023 @08:17PM (#63574243)

    ... puts highly-paid knowledge workers in the crosshairs ...

    An LLM machine is a parrot, repeating what someone else said: when no one is paid to write fact-driven conclusions, ChatGPT and others will have nothing useful to say. As society changes, these buzzword-bingo machines will choose random answers because there aren't any new facts for them to 'remember' and repeat.

  • by Ostracus ( 1354233 ) on Sunday June 04, 2023 @06:43AM (#63574891) Journal

    From the story:

    “We have to ask: Is a facsimile good enough? Is imitation good enough? Is that all we care about?” she said. “We’re going to lower the measure of quality, and to what end? So the company owners and shareholders can take a bigger piece of the pie?”

    The answer is yes. [thecut.com]
