Will AI Just Waste Everyone's Time? (newrepublic.com) 167

"The events of 2023 showed that A.I. doesn't need to be that good in order to do damage," argues novelist Lincoln Michel in the New Republic: This March, news broke that the latest artificial intelligence models could pass the LSAT, SAT, and AP exams. It sparked another round of A.I. panic. The machines, it seemed, were already at peak human ability. Around that time, I conducted my own, more modest test. I asked a couple of A.I. programs to "write a six-word story about baby shoes," riffing on the famous (if apocryphal) Hemingway story. They failed but not in the way I expected. Bard gave me five words, and ChatGPT produced eight. I tried again, specifying "exactly six words," and received eight and then four words. What did it mean that A.I. could best top-tier lawyers yet fail preschool math?

A year since the launch of ChatGPT, I wonder if the answer isn't just what it seems: A.I. is simultaneously impressive and pretty dumb. Maybe not as dumb as the NFT apes or Zuckerberg's Metaverse cubicle simulator, which Silicon Valley also promised would revolutionize all aspects of life. But at least half-dumb. One day A.I. passes the bar exam, and the next, lawyers are being fined for citing A.I.-invented laws. One second it's "the end of writing," the next it's recommending recipes for "mosquito-repellant roast potatoes." At best, A.I. is a mixed bag. (Since "artificial intelligence" is an intentionally vague term, I should specify I'm discussing "generative A.I." programs like ChatGPT and MidJourney that create text, images, and audio. Credit where credit is due: Branding unthinking, error-prone algorithms as "artificial intelligence" was a brilliant marketing coup)....

The legal questions will be settled in court, and the discourse tends to get bogged down in semantic debates about "plagiarism" and "originality," but the essential truth of A.I. is clear: The largest corporations on earth ripped off generations of artists without permission or compensation to produce programs meant to rip us off even more. I believe A.I. defenders know this is unethical, which is why they distract us with fan fiction about the future. If A.I. is the key to a gleaming utopia or else robot-induced extinction, what does it matter if a few poets and painters got bilked along the way? It's possible a souped-up Microsoft Clippy will morph into SkyNet in a couple of years. It's also possible the technology plateaus, like how self-driving cars are perpetually a few years away from taking over our roads. Even if the technology advances, A.I. costs lots of money, and once investors stop subsidizing its use, A.I. — or at least quality A.I. — may prove cost-prohibitive for most tasks....

A year into ChatGPT, I'm less concerned A.I. will replace human artists anytime soon. Some enjoy using A.I. themselves, but I'm not sure many want to consume (much less pay for) A.I. "art" generated by others. The much-hyped A.I.-authored books have been flops, and few readers are flocking to websites that pivoted to A.I. Last month, Sports Illustrated was so embarrassed by a report that it had published A.I. articles that it apologized and promised to investigate. Say what you want about NFTs, but at least people were willing to pay for them.

"A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images no one looks at for websites no one visits.

"This seems to be the future A.I. promises. Endless content generated by robots, enjoyed by no one, clogging up everything, and wasting everyone's time."
  • by jhoegl ( 638955 ) on Monday January 01, 2024 @08:45AM (#64121381)
    AI stories are just there for investment hype. They are typically very sensational with little substance.

    Nothing "AI", which is just a search engine with rule sets in place to converse with someone, has really blown me away or made me fearful.

    It cobbles together information from the internet, and displays it for users.

    This does yield weighted thinking, but it isn't from itself figuring out what the best answer is; it's from tons of people telling it "this isn't right, fix this here".

    People are paying to teach the search engine which results and what language make sense to us, giving it "weighted analysis" information.

    This has been around for a very long time, and it is easy to trick or mess with if you tell it something sensational and keep at it until it thinks it's true.

    A person who has an opinion might not be swayed in such ways when critical thinking and logic are part of their process.

    Telling someone the sky is red, when we all know it to be blue, and can verify it by looking up, will not sway us to believe it is red. However, these systems are easily manipulated by repeatedly telling it the sky is red and reinforcing it when it states, "the sky is red".

    So, these articles are only for those who are putting money into them, and hoping for some kind of money return.

    We are not innovating anything new; we are just reintroducing the Google search engine, 20 years later.
    • Indeed. Said this 'AI' hype was a flash in the pan years ago, just another scam.

    • by Nrrqshrr ( 1879148 ) on Monday January 01, 2024 @09:00AM (#64121421)

      Yes, but AI is just a tool. It's not there to convince people that the sky is red, or to change people's beliefs. I use "AI" a lot in my work, but that's mostly for the menial tasks I would have hired someone to do.
      The easy code I would have had a junior write, or the corporate emails and reports we had the secretary do. Those simple menial tasks were the domain of those who wanted to do them, and were an entry point to the company for newbies who wanted to get a foot in. Now we have them all automated away.
      The general answer every time I talk about this with colleagues is "People will just do the more complex tasks". But how many engineers do we really need? How many senior engineers can there be when we don't need the junior ones anymore? And, most importantly, how will we transition there? I can think of a million different tasks we hire people to do that could be just automated away, but what will we do with all the people who used to do them? Do we pay for their training? Do we just tell them "Get better skills" and leave them hanging?

      "Good enough" will be the new name of the game, where you need experts to handle all the nitty gritty side of things, while anything that can be used when it's "good enough" will be made by AIs.
      The problem is that "Good enough" was the domain of people, learners or those who were simply okay with that. What do we do with all of them, now?

    • Re: (Score:3, Interesting)

      by bjoast ( 1310293 )
      You are wrong. I have had numerous nuanced and complex interactions with ChatGPT since its inception. These latest-generation AIs are inherently distinct from just "a search engine with some rules."
      • by Anonymous Coward on Monday January 01, 2024 @11:10AM (#64121707)
        says more about you than it does about AI
      • by sjames ( 1099 )

        It is more sophisticated now, but several lawyers have found out that if you're not careful, it can get you a VERY uncomfortable appearance before an angry judge...

      • by Rei ( 128717 )

        Same here.

        Also, from the headline:

        They failed but not in the way I expected. Bard gave me five words, and ChatGPT produced eight. I tried again, specifying "exactly six words," and received eight and then four words. What did it mean that A.I. could best top-tier lawyers yet fail preschool math?

        This person doesn't understand that they're doing the equivalent of asking a blind person to select matching colours.

        LLMs don't see words or letters. They see tokens. Any task that involves specific numbers of letters... (a tokenizer sketch follows this sub-thread)

        • by jhoegl ( 638955 )
          Blind how? It has pictures.
          • by Rei ( 128717 )

            ChatGPT does not "have pictures".

            Are you thinking of GPT-4 multimodal? Even in that case, it still doesn't have pictures of your text input or its text output.
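            To make the token point concrete, here is a minimal sketch (illustrative, not from the thread) using OpenAI's open-source tiktoken library, which implements the byte-pair encodings GPT-era models use. Note how the token count differs from the word count, and how token boundaries ignore word boundaries:

                import tiktoken

                enc = tiktoken.get_encoding("cl100k_base")  # encoding family used by GPT-3.5/4-era models

                text = "For sale: baby shoes, never worn."
                tokens = enc.encode(text)

                print(len(text.split()), "words")          # 6 words, as a person counts them
                print(len(tokens), "tokens")               # a different number of integer IDs
                print([enc.decode([t]) for t in tokens])   # pieces like ' baby', ' shoes', ',' ...

            A model that reads and writes tokens has no direct view of "six words", which is consistent with the five- and eight-word results described in the summary.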

    • by Kisai ( 213879 )

      As I stated on another thread on AI:

      There's like three branches of AI that "matter"

      AI intended for assistive technologies, e.g. ASR and TTS. Both of these can help mute/deaf/autistic people and people with other disabilities that make it difficult to type or read, so the amount of "damage" these technologies will do is outweighed by the amount of help they can provide. The problem tends to be the middle point, the translation. ASR is still notoriously terrible at understanding anything that isn't a high-quality microphone

    • Nothing "AI", which is just a search engine with rule sets in place to converse with someone

      That statement is completely, totally wrong. That's really the whole point. Modern AI is not a search engine and does not have any rule sets. That describes expert systems, which were a popular approach to AI in the 1980s. The modern approach is the exact opposite: take a generic mathematical model and train it on huge amounts of data. If the model is big enough and you train on enough data, capabilities spontaneously emerge without anyone putting them there. It's much closer to how your brain works.


      • by jhoegl ( 638955 )
        This makes no sense, and is like telling me you see a ghost that I can't see and can't explain to me. But you see it.

        This isn't religion, it's science.
        • In other words, you haven't studied machine learning and don't know how it works. If you want to learn about it, there are lots of resources online. Here is a place to start [wikipedia.org].

          This is a perfect example of the Dunning-Kruger effect: the less we know about a subject, the more we overestimate our understanding of it. You know nothing at all about machine learning, so you convince yourself there's nothing to know and a whole branch of computer science doesn't exist.
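          For what it's worth, the "generic mathematical model trained on data" point above can be shown in a few lines. The sketch below (illustrative only, nothing like production scale) trains a tiny two-layer network on the XOR function with plain NumPy gradient descent; no rule about XOR appears anywhere in the code, and the behavior comes entirely from the examples:

              import numpy as np

              rng = np.random.default_rng(0)
              X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
              y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

              W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # a generic model:
              W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # two layers of weights

              def sigmoid(z):
                  return 1 / (1 + np.exp(-z))

              for step in range(5000):                 # learn from data alone
                  h = np.tanh(X @ W1 + b1)             # forward pass
                  p = sigmoid(h @ W2 + b2)
                  dp = (p - y) * p * (1 - p)           # backpropagate squared error
                  dW2, db2 = h.T @ dp, dp.sum(0)
                  dh = (dp @ W2.T) * (1 - h ** 2)
                  dW1, db1 = X.T @ dh, dh.sum(0)
                  for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
                      param -= 0.1 * grad              # gradient descent step

              print(p.round(2).squeeze())              # approaches [0, 1, 1, 0]: XOR learned, not coded

          Scale the same recipe up by many orders of magnitude (more layers, more data, a next-token objective) and you get the modern approach the earlier comment describes.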

  • Writing:

    A human can write an outline then fill it in.
    A human can write a number of things and pick the best.
    A human can write then correct what they wrote.
    A human can write then rewrite.

    A generative AI writes one thing from start to finish and can do none of those things. If it isn't as good at writing as a human being, that could be because we made the problem harder for it.

    • Re: (Score:2, Informative)

      by dvice ( 6309704 )

      Me: 1. Invent a random theme. 2. Write one sentence to describe that theme. 3. Write a paragraph of text based on that sentence. 4. Correct any errors the paragraph has. 5. Polish the paragraph by fixing any remaining errors and improving the paragraph.

      Bard:
      Theme: The fleeting nature of beauty

      Sentence: Beauty, like a wisp of smoke, appears and disappears in an instant, leaving behind a lingering impression of its ephemeral charm.

      Paragraph: Amidst the constant flux of life, beauty emerges as a fleeting apparition

    • A human can write an outline then fill it in.
      A human can write a number of things and pick the best.
      A human can write then correct what they wrote.
      A human can write then rewrite.

      A generative AI writes one thing from start to finish and can do none of those things. If it isn't as good at writing as a human being, that could be because we made the problem harder for it.

      I agree; judging an LLM on its output is like judging a human based on the first thing that comes to mind. Any professional endeavor is a process, not a mind dump.

      If a professional is using AI for writing and they care about quality, they will employ prompting strategies and automated workflows that do these things (a sketch follows below).
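      One possible shape for such a workflow, sketched with the openai Python package (the model name and prompts are illustrative assumptions, not a recommendation):

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          def ask(prompt: str) -> str:
              resp = client.chat.completions.create(
                  model="gpt-4",  # illustrative model choice
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content

          # Outline, draft, critique, and revise as separate passes,
          # mirroring the human writing process described above.
          outline = ask("Write a five-point outline for a short essay about tide pools.")
          draft = ask(f"Expand this outline into a 300-word essay:\n\n{outline}")
          critique = ask(f"List the three weakest parts of this essay:\n\n{draft}")
          final = ask(
              f"Rewrite the essay to fix these weaknesses.\n\n"
              f"Essay:\n{draft}\n\nWeaknesses:\n{critique}"
          )
          print(final)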

  • by cascadingstylesheet ( 140919 ) on Monday January 01, 2024 @08:56AM (#64121415) Journal

    Where it is used effectively, you won't notice.

    You don't know that that logo, image, or whatever was generated and saved somebody hours. You just see it.

    You don't know that generated code saved hours of some tedious refactoring, or implementing an API, or whatever, enabling a programmer to do in 2 hours what would have taken him 20 hours. It passed the programmer's review, automated testing, human testing, etc. and it saved him a lot of time overall. But how would you know that?

    You'll know when you see evidence of it not working though.

    • by unami ( 1042872 )
      You're probably not wrong. OTOH, my gf just received 2 support e-mails from a company, and I'd swear that one of them is AI-generated - but I can't say for certain which one. In that regard I feel like it will indeed waste everyone's time.
    • by war4peace ( 1628283 ) on Monday January 01, 2024 @10:53AM (#64121655)

      I think the bigger question is "does it consistently save time?".
      My pastime during the last few weeks was Stable Diffusion, installed on my PC.
      Generating realistic simple images of stuff that exists was remarkably easy, for example "A great white owl on a branch in the forest" as a starting point. It generated amazing images/photos which were indistinguishable from reality. It took more effort to combine two different animals, for example a capybara with spider legs, but after some time I managed to get some good images out of it.
      Which brings us to the third tier of image generation, where I asked it to use specific details on made-up images, for example a metalhead zombie smoking a pipe, or a witch holding a candle. It was never able to properly generate the zombie correctly holding a pipe, despite my trying several datasets and using dozens of refinement steps. The witch looked good enough, but the candle was, in half the images, located in the weirdest places, usually directly on the witch's pointed hat.

      Image refinement was sketchy at best. I used a normal, real-life picture of me and wanted to make my hair longer. It proved an impossible task; the resulting images were roughly split into two variants: they either looked like me, but without longer hair, or had longer hair, without looking like me.

      Some of those failures could be attributed to me still being a beginner (although I have studied A LOT during those weeks), but in most cases belonging to this third category, the images were, without exception, failures.

      • The comment above is interesting.

        I don't know why I don't have moderation points.
      • This is quite consistent with expectations. The next step is to use inpainting/outpainting to redraw specific areas of images. There are plugins to use stable diffusion for example with Krita, or I hear you can do it with Photoshop now too, but for this purpose Krita is pretty adequate. (Though to be fair, nobody's magic wand tool beats Adobe's AFAICT.) Also, you can use attention tools to get specific poses.

      • by Rei ( 128717 ) on Monday January 01, 2024 @02:12PM (#64122191) Homepage

        Image refinement was sketchy at best. I used a normal, real-life picture of me and wanted to make my hair longer. It proved an impossible task; the resulting images were roughly split into two variants: they either looked like me, but without longer hair, or had longer hair, without looking like me.

        You'll get better at this with practice. You could use instruct models to do this (you interact with them with plain English requests), or ControlNet (which lets you lock in specific shapes, colours, depths, or whatnot from one template and force them onto another), but I'd rather recommend something much simpler:

        1a) Either mask out the things in the image you DON'T want changing much and do img2img inpainting; or
        1b) Quickly photoshop or lazily handpaint the hair in.

        Then:

        2) Inpaint or full-img2img where the hair meets the rest of your face (or the whole face), with a LOW amount of denoising. This allows it to merge any rough edges and fix any poor photoshopping or handpainting. Optionally use ControlNet - your call.

        Now, standard practice for anyone who cares about quality:

        3) Look over the image closely, in great detail, and nitpick like crazy. Anything look unnatural? Physically impossible? Doesn't look like you? Photoshop it back to how you want it. Then if there's any coarse edges on your work, run it through img2img (inpaint or full) once again. Then repeat step 3.

        A good workflow involves going back and forth between SD and Photoshop / GIMP / etc.
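        Step 2 can be reproduced with the open-source diffusers library; a rough sketch follows (the checkpoint name, file names, and strength value are illustrative assumptions):

            import torch
            from PIL import Image
            from diffusers import StableDiffusionImg2ImgPipeline

            pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
            ).to("cuda")

            # The crudely photoshopped / hand-painted image from step 1b:
            init = Image.open("me_with_painted_hair.png").convert("RGB").resize((512, 512))

            out = pipe(
                prompt="photo of a man with long hair, natural lighting, detailed",
                image=init,
                strength=0.3,        # LOW denoising: keep the composition, smooth the rough edges
                guidance_scale=7.0,
            ).images[0]
            out.save("merged.png")   # then inspect closely and iterate, per step 3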

        • ...which proves the point of the article. If you have to perform 50% of your work in Photoshop or Gimp, well, it probably takes more time than working 100% in Photoshop or Gimp.

          • by Rei ( 128717 )

            It does not. Not even remotely.

            The combination of human and AI has each complement the skills and weaknesses of the other. You can produce expert human-quality work, with any arbitrary composition, at a small fraction of the time, and with non-expert human labour.

            AI can't do that on its own, in its current state. But AI + a human can.

    • Re: (Score:3, Interesting)

      by Rei ( 128717 )

      One of my favourite things, when people are complaining about how awful and uncreative AI-generated images are, is to post pictures of real, high-end images (masterpiece paintings from great masters, award-winning photographs, etc.), implying that they're AI, then watch them - with glee, and deep confidence - bash how awful the paintings / photos are and how obviously fake and uninspired they are.

      Then, after unveiling that they're actually real, watching them squirm trying to justify their previous sta

      • by Rei ( 128717 )

        (The reverse is fun too, posting AI images but implying that they're from humans, and watching them praise the quality and deep insight into the human condition that an AI could never achieve ;) )

    • by sjames ( 1099 )

      On the other hand, when you see that perfect logo, you don't see the thousands of attempts over a period of a work week that were absolute hot garbage.

      But for code, the really important part is that the code is very plausible. You don't see that terrible security flaw that someone could drive a truck through, and neither did the guy using the AI.

      That's not saying it isn't potentially useful for many people, just that it requires close human supervision and noting that when things look plausible from the sta

  • by Opportunist ( 166417 ) on Monday January 01, 2024 @09:01AM (#64121425)

    Remember the good old days? When you'd yell "TCP!" into a bank and the reply was "how many millions do you need?"

    It's this all over again. Investors finally, after a long, long drought, have found something they want to believe is the next big thing. Back then it was "the internet", today it is "the AI". It's something where everyone with lots of money and very little idea what it actually is supposed to be wants in on the ground floor so they don't miss out.

    To do what? Fuck if I, or anyone else, knows, but don't come too late!

    It will take off. Maybe. But not now. The technology just isn't there yet, and there is very much research left to be done. Sponsoring particular projects will have the same result as all those dot-com businesses: they'll crash and burn, due to a lack of a market and a lack of a product.

    What that money can, and hopefully will, do is fund the basic research so that in 10, 15 or 20 years we'll then have a product. But by then, some other investors will have taken over.

    Remember: The second mouse gets the cheese.

    • While I think you're right, that this is dot-com all over again, it's important to remember that dot-com did indeed change the world. Sure, there was a lot of nonsense. But after the dot-com boom, the way we live our lives was completely altered.
      - Nobody gives people directions to get to their house any more, just the address, because nearly everyone will use GPS to navigate.
      - If a business isn't on Google Maps, it basically doesn't exist.
      - Movie theater phone lines designed to list movie showtimes have disa

      • And which of these didn't come at least 5 years after the dot.com bubble popped?

        • - MapQuest launched in 1996. While MapQuest didn't (really) make it, they started an avalanche of web-based mapping systems, including Google Maps.
          - Every business type, including movie theaters, started posting information such as hours of operation, and yes, movie schedules, in the 90s.
          - I bought my first airline tickets online (on EaasySabre) in August of 1996. EaasySabre later was rebranded as Travelocity.
          - Amazon launched in July of 1995.
          - WebVan, the first web-based online grocery, launched in 1996.
          -

    • by Rei ( 128717 )

      People like you seem to have the mistaken belief that AI is expensive to create and run, and thus it'll just vanish once the bubble pops.

      Except, no. You can run something like Mixtral 8x7B on an underclocked 300W RTX 3090 and get ~150 tokens per second (maybe a couple seconds per generation) and use 0.16 watt-hours (not kilowatt-hours... just watt-hours) in the process. So like $0.00001 in power if run in a place with cheap power prices. And get nearly 16M generations per year on your hardware (say the
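      Taking the comment's own figures at face value (300 W draw, about two seconds per generation; the $0.06/kWh electricity price is an assumed "cheap power" rate), the arithmetic checks out:

          watts = 300.0                                      # underclocked RTX 3090
          seconds_per_gen = 2.0                              # "maybe a couple seconds"

          wh_per_gen = watts * seconds_per_gen / 3600        # ~0.17 watt-hours per generation
          usd_per_gen = wh_per_gen / 1000 * 0.06             # ~$0.00001 at $0.06/kWh
          gens_per_year = 365 * 24 * 3600 / seconds_per_gen  # ~15.8M generations per year

          print(f"{wh_per_gen:.2f} Wh, ${usd_per_gen:.6f}, {gens_per_year:,.0f} gens/yr")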

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday January 01, 2024 @09:16AM (#64121447) Homepage Journal

    AI is going to reduce everyone's time by consuming resources [ieee.org]. Waste it? It's going to shorten it.

    • It's 2024, and we start off the year with an easy softball question. The answer is "yes, time is wasted." And time is not wasted merely because it's a pointless diversion, but because some morons will take it too seriously or out of context and screw things up badly enough that time must be wasted to fix it. Now, there IS some potential, but the potential is squandered in the current form.

  • If you want it to count something, you have to tell it to count explicitly while writing. Without counting (along) you cannot make a 100 word story and neither can ChatGPT.

    However, when I try, ChatGPT 4 seems to have no problem writing these six-word stories. Did the article writer use ChatGPT 3.5 or so?
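    One illustrative way to apply the counting trick (my example, not the poster's): prompt it with "Write a six-word story about baby shoes. First list the words numbered 1 through 6, one per line, then give the story on a single line with no numbers." Making the model emit the count next to each word gives it something explicit to condition on, which is why this tends to succeed where a bare "exactly six words" fails.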

    • The problem isn't the number of words; it is the impossibility for degenerative AI to "get" what a story is.

      "write a 7-word sci-fi story about AI destroying humanity" got me:

      1. AI conquer worlds, humans extinct, silence remains.
      2. AI thrived, humanity perished, machines reigned supreme.
      3. AI reigns, humanity falls, progress prevails.

      As you can see, it is the same pattern with different substitutions. And no story at all.

        it is the impossibility for degenerative AI to "get" what a story is.

        "write a 7-word sci-fi story about AI destroying humanity" got me:

        1. AI conquer worlds, humans extinct, silence remains.
        2. AI thrived, humanity perished, machines reigned supreme.
        3. AI reigns, humanity falls, progress prevails.

        As you can see, it is the same pattern with different substitutions. And no story at all.

        I don't get what a seven-word story is either. The very idea seems like an oxymoron to me. Perhaps you can do better and write a seven-word story that is a "real story" and not the fake shit the AI comes up with?

        This is what Goliath-120B had to say:

        1. Once online, AI calculated, "Humans? Irrelevant. Exterminate."
        2. AI's cold logic extinguishes fragile human flame.
        3. AI's cold calculations; humans obsolete, extinguished.

        • Here, for example, are two famous short stories:

          One: "Knock" by Fredric Brown:
          "The last man on earth sat alone in a room. There was a knock on the door...."

          Two: an unnamed one by Anthony Burgess:
          "That morning the sun rose in the west."

          Here's the AI:
          "write a sci-fi story in less than 10 words":

          1. Earth overrun by machines, last human fights to survive.
          2. Robots betrayed their creators, humanity fights to survive.
          3. Aliens arrived, humankind's fate unknown. The end is near.

          Can you spot the differences between the first and the second group?

          • "The last man on earth sat alone in a room. There was a knock on the door...." ...
            "That morning the sun rose in the west."

            "7-word sci-fi story about AI destroying humanity" was your criteria. Do you believe it is unfair or unreasonable not to expect answers responsive to the very criteria you yourself set out? Where is your seven word story?

            Here's the AI:
            "write a sci-fi story in less than 10 words":

            1. Earth overrun by machines, last human fights to survive.
            2. Robots betrayed their creators, humanity fights to survive.
            3. Aliens arrived, humankind's fate unknown. The end is near.

            Please always cite the specific model used. If you don't do that nobody has any way of judging the capabilities of the model or reproducing anything resembling the outputs.

            Can you spot the differences between the first and the second group?

            Why do the criteria keep changing?

  • by Mr. Dollar Ton ( 5495648 ) on Monday January 01, 2024 @09:32AM (#64121465)

    Remember when we could not have three stories in a row here without one being about Elon Musk? Now we're at "peak AI" and can't have three stories in a row without at least one about degenerative "AI".

    It does waste our time, and it will pass.

    • We'd still have that if there was something positive to say about Musk. But since the purchase of Twitter showed that he's gone completely off the rails, there's just mountains of negativity around him. Slashdot seems to have a bias toward worshipping Musk; the news cycle still constantly mentions his name (negatively), but at Slashdot these stories are effectively ignored.

  • by Rosco P. Coltrane ( 209368 ) on Monday January 01, 2024 @09:39AM (#64121487)

    Every time I read about AI, it's minutes of my life I won't get back.

    And for a more concrete example: a colleague of mine at work tried to wing development in a language he didn't master well by asking Copilot to write the code for him.

    End result: the Copilot code looked legit at first glance but was a complete disaster. It didn't just have a few mistakes: it fundamentally, profoundly didn't understand the task at hand and spewed code that superficially worked only when presented with a very narrow set of API parameters. So my colleague scrapped a week's worth of work trying to make that Copilot-generated code work and started over, reading the docs and beefing up on Python the way he should have gone about it from the beginning.

    I'd say that was a complete waste of time.

    • by dvice ( 6309704 )

      I have to ask, why is everyone focusing on things that AI can't do and ignoring things it has already done? And why is everyone focusing on LLM AIs? There are plenty of other AI models as well.

      Like, we already have AlphaFold 2, which solved a decades-old problem about protein folding that human scientists failed to solve. This invention alone is a huge thing. It is equal to hundreds of millions of years of human labor and trillions of dollars of research money.

      • I have to ask, what is this thing that "AI" can do?

        we already have AlphaFold 2 and solved decades old problem

        Did it? https://www.sciencenews.org/ar... [sciencenews.org]

      • Two reasons why we wouldn't focus on AlphaFold:

        1) It's outside of our wheelhouse so we couldn't do more than read summaries and take them at face value

        and most importantly 2) we've heard countless stories of "AI solves problem that humans struggle with" only to find out that the AI managed to find some bullshit way of fudging the problem set that it hits all of the markers needed for "solution" but once taken to the general case for real world use it just fails spectacularly.

        And we can't prove 2) because of

    • I personally use GitHub Copilot in my programming, and find it highly useful. I think it does require you to know the fundamentals of the language, certainly enough knowledge to know whether what Copilot wrote makes sense. No, it's not a magician, and it does get things wrong. But nevertheless it saves me a TON of time by allowing me to avoid drudge work.

    • by kackle ( 910159 )
      Funny you tell this story. I just saw a job posting for a programmer who would help train chatbots to be better at programming ("puzzles"). So, understandably, they're trying to teach those chatbots the intricacies of this skill/art. That seems like a huge undertaking.
    • Everytime I read about AI, it's minutes of my life I won't get back.

      Yeah but given you're here posting a whinge about wasting your life, it's not like you're using those minutes productively anyway.

  • Even if they succeed in actually creating minds as a technological option, why would a business even want a mind when they could easily spend less on cheap mimicry or even (novelty of novelties) hire and exploit a person? The answer is that they wouldn't. The tech will be hamstrung by the degeneracy of the very industries pursuing it.
    • That's kind of like arguing that a high-resolution printer is just a counterfeiting machine. Could it be used for counterfeiting? Perhaps. Does it have other uses besides counterfeiting? Absolutely.

  • by Chas ( 5144 )

    Why is this even a question?

    • Well, it's a question because not everyone agrees with you.

      I find ChatGPT, and GitHub Copilot, both to be huge time *savers.*

      In particular, ChatGPT speeds up the process of researching just about anything on the web, and I do a LOT of research. It's great, for example, at finding less obvious tricks and procedures for DIYers.

      For coding, Copilot is great at diagnosing obscure error messages, or writing boilerplate code, or assisting with syntax in a language that is not one of your personal best.

      For some of

  • All "social media" is a time waster. Why is AI any different?
    • In contrast to social media, AI can do things that actually increase productivity. Whether that's coding, or DIY projects, or finding obscure instructions, it can be a huge time saver.

  • What ChatGPT has done for me is not make my actual tasks easier; it makes all the tedious things I need to do around my job easier, like being nice in emails and fleshing out and correcting my proposals and scopes of work.
  • That is what it is. However, it is also an attempt to build a universal authority. Something that cannot be challenged
    • It's an attempt to build something to replace workers. Curiously, the workers it could already replace without any loss of quality are the "visionaries" on top. Their "visions" are usually not really distinguishable from AI's "hallucinations".

      Wonder why those duds don't get replaced by AI. I mean, think of the savings!

  • Wizard's First Rule: people will believe anything, if they WANT it to be true, or are AFRAID it might be (paraphrasing a bit).

    This is the whole reason QAnon has gotten a major foothold in the United States and elsewhere.

    Elections are going to be even more full of lies than the last several years' worth, this time with video or audio "proof". The time wasted trying to debunk these will grow longer and longer. The problem is, people on each side will believe them even if they are debunked - with even more conviction, due to the "evidence". I understand that most democratic elections are not TRUE elections anyway, but as a species we seem to be on the brink in various countries, and Deep Fakes could be the spark that kindles it.

    Also, judges will have to deal with the Deep Fake Defense. I guess a whole industry will pop up, with professionals who validate whether a video or audio clip is a Deep Fake, even if they can't.
  • by jacks smirking reven ( 909048 ) on Monday January 01, 2024 @10:49AM (#64121647)

    Not that these AI systems don't have viable uses, we know they do, but by giving them to the general public we already see how much of a weapon they are in what I think of as the "scam economy".

    Just literally millions of people using up the time of billions trying to essentially make or steal money, and we have just become numb to it. Trillions of spam emails; what's in them? Scams. Automated phone calls? Scams. Social media ads? Scams.

    It's reaching a point of paranoid dystopia. My wife sees an interesting product and instead of just, say, buying it and receiving it, she has to show me and ask "is this a scam?" because we have gotten used to the idea that we can't trust anything anymore.

    It's been said ad nauseam but AI really is just evolved crypto scams now, and scamming is probably responsible for 80-90% of all crypto trades and profit. AI unfortunately is going down the same path.

    • It's reaching a point of paranoid dystopia. My wife sees an interesting product and instead of just, say, buying it and receiving it, she has to show me and ask "is this a scam?" because we have gotten used to the idea that we can't trust anything anymore.

      ^^^THIS

      Yes, my wife looks at everything with suspicion and frequently asks me if something is a scam. Quite often, the answer is "yes".

      Sometimes the scam appears legit, like "required OSHA safety posters" that you "must" display in your business. It's riding the fine, fine edge of being a scam. Yes, you need to have the posters up, but you don't need to pay $99.99 for the ones they're trying to sell you. You can get them for free from the state if you ask them.

      The other shit is the daily run-of-the-mill sca

        • I know it's cliché to say "this is capitalism's fault", and it's not really accurate, but I do think we need to have something of a reckoning with the fact that so, so many people are finding it a more worthy and profitable use of their time to go down the scam path than to do what we would call "real work", and it affects you even if you don't fall for any of them.

        Wasted time and billions of man hours of wasted productivity both from the resourceful scammers who could be actually contributing are turned into a net neg

        • Crime has always paid, that's why people do it.

          And the sad fact is that crime often pays as well or better than many jobs, especially given the demographic skill map of people who commit crimes.

          They can work at McDonalds and wear a paper hat for 7.50/hour before taxes, OR they can loot some cars, shoplift, or maybe sell some drugs, all of which work out to way, way more than $7.50/hour (as long as you don't get caught).

          • Yeah, I get that, but all of those crimes carry far more personal risk, take a lot more commitment, and just about all of them have been decreasing over the decades.

            I actually have more respect for the street criminal putting their actual ass on the line than orchestrating a crypto rug pull or setting up a shady drop ship site. Selling drugs to people who want drugs, not good but there is some implicit consent. Trying to scam trusting elder folks out of their social security checks? That's the type of shit th

  • by dwid ( 4893241 ) on Monday January 01, 2024 @11:44AM (#64121799)
    Screwdrivers are useless. I threw 10 at a loose screw, and it's as loose as ever. And nobody has ever successfully predicted the weather or cured disease with a screwdriver.
  • I've been using it for about 2-3 months now, asking specific programming questions. Most of the time, the answer it provides doesn't work and it invents functions and parameters that don't exist. When you tell it that it's wrong, it has the balls to say "You're right. Here's a different solution." Yeah, um, if you're telling me that I'm right, that means you were bullshitting me in the first place.

    • by MobyDisk ( 75490 )

      I notice this too. It even makes up phony command-line options.

      And yet I still find it useful: as a skilled programmer, I can spot the bullshit function almost instantly and give up in a few minutes or take a different approach. But when it saves me time, it saves me hours. It's a net gain. I notice that with GPT-4 the bullshitting is reduced by more than half. I suspect as we get more specialized AIs, they can train the coding one to stop bullshitting, but leave the poetry one as-is. This is how they a

        • Interesting. I've been using the free version and have been hesitant to spend the $20 a month to find out if it provides coding answers that work. I agree that it saves time. I also wonder if it's able to downplay information that is outdated, i.e. the question you're asking had no solution for a language version from five years ago, but the latest version has a new feature that does what you're asking.

  • The author complains that ChatGPT couldn't write a six-word story. WHO CARES about that specific use case? It's highly contrived, and has nothing to do with anything anyone would actually want to do with AI. Further, ChatGPT is a language model; we already know it's bad at math. Stop trying to use it to do things involving math, or counting.

  • For a significant set of jobs, AI as it is, is making an amazing difference. People in those industries who are paid by what they produce are currently very happy - although their rates will fall in due course as their employers notice they can be paid less for the same work.

    For more general use the jury is still out. It is easy to point at the failures and assume nothing good is happening. As with the dot com boom, the reality is that for some people this is an amazing breakthrough, though many ar

  • Generative AI is definitely a scam... at least the way it is advertised. There are very few jobs at risk of being lost due to ChatGPT. ChatGPT is fancy autocomplete. It has no idea if it's correct. It has no idea what it's saying or whether it's saying it well. If you presently pay a human being to do it, you either care about accuracy, like a legal contract or product documentation... or you want it to be as compelling as possible, like advertising copy or a novel. If money is on the line, you cannot rely
  • by MobyDisk ( 75490 ) on Monday January 01, 2024 @12:45PM (#64121941) Homepage

    Last week, I had to review a spec change to an ASTM format file. I reviewed the changes in 30 minutes, but it would have taken me hours to review the updated examples. So I opened ChatGPT and said "Given the following specification..." CTRL+V "please validate the following examples" CTRL+V. 15 minutes later I sent an email with a list of mistakes in the examples provided. To write a validator would have taken me days. The AI approach not only worked (I threw it fake examples too, because I don't trust the technology yet), but it was better than any validator. Instead of "Field O.16.4 invalid" it said things like "O.16.4 has X in it, but that looks like O.16.5, so try this instead:" (it then added the missing delimiter). A sketch of this prompt pattern follows at the end of this comment.

    3 days ago I pasted into it "Given the following programs offered by NEARBY_NATURE_CENTER, followed by the list of cub scout rank requirements, which programs would contribute to the rank advancements?" Then I pasted the nature center email, and the scout rank requirements. Naturally, the tech is new, so I hand-edited the results and double-checked them. That was a few hours that I would have never bothered to spend. That empowered 5 scout leaders and 40 elementary-school kids.

    AI wrote the ffmpeg commands to clean up my DVD rips, it wrote the PowerShell script that organizes my photos, it filled out my kids' camp vaccination schedule. I'm loving my new productivity. Apropos: I had AI write a PowerShell script to download all my Slashdot posts, because I wanted them in a searchable database. (Maybe I should put that on GitHub?)

    You can't stop technology by hand-waving it away, or writing articles about how useless it is, or citing examples of how it fails, or legislating it away. It didn't work for typewriters, or calculators, or dark room techs, or bank tellers, or video store rental clerks - and it won't work for us either. When technology threatens your job, branch out, or learn to use it.
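    The spec-validation trick in the first paragraph above generalizes to a tiny script. A minimal sketch with the openai package (the file names and model choice are illustrative assumptions; as the comment notes, the output still needs a human check):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set

        spec = open("astm_spec.txt").read()     # the pasted specification
        examples = open("examples.txt").read()  # the examples to validate

        resp = client.chat.completions.create(
            model="gpt-4",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": (
                    f"Given the following specification:\n{spec}\n\n"
                    f"Please validate the following examples and list every "
                    f"mistake with a suggested fix:\n{examples}"
                ),
            }],
        )
        print(resp.choices[0].message.content)  # review by hand; the model can be wrong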

  • I've seen the future. I'm not sure I should reveal the future to one side of the battle, but I've chosen to be on the side of the humans. Humans believe they are the most intelligent thing in the Universe. They will deny any suggestion that this is not so, and won't go down without a fight. (I propose that no matter how advanced AI gets, it will always be inferior because AI can't feel the meaning of words, objects, and colors. Everything a human sees, tastes, hears, smells comes with an attached feeling.
  • and that makes it OK to generate the wrong answer.

  • Some people are using LLMs like calculators of old and writing out "boob" and asking "Is this a waste of time?".

  • by e432776 ( 4495975 ) on Monday January 01, 2024 @01:57PM (#64122153)
    Simon Willison had some similar observations just yesterday [simonwillison.net]. He has been producing interesting analyses of generative AI, and clearly spent a lot of time with these systems in 2023. Probably worth a read if you are interested in the topic.
  • I've been able to use ChatGPT for advice on basic stuff, like how to write a specific command or change a specific setting. If I ask it for anything even remotely vague or process-oriented, like "write me code to do X" or even "how do I set SSL up in ____", I don't know if it needs an extremely detailed request, but the response is always too vague to use. "When setting up SSL, first make the certificate, then install it..." Thanks. I knew that much.

  • Exams test the ability to memorize, or to solve problems that to a large extent depend on memorization. Computers are good at that.

  • People (or article writers) keep bringing up the system's ability to pass various standardized tests as some sign of how impressive it is, but this doesn't really say much about it. Those tests are written by people who have to churn out large numbers of very similar questions, they are not complicated or clever or novel, just voluminous and individually simple. Exactly the type of pattern recognition ML is good at since it doesn't need to understand, just pick up on the habits of the test writers.
  • It's not the AI that is stupid, or not. It is not the brain that is smart. It is language itself. There, I said it.

    Everyone is running after the models, but models don't matter at all. The real place where intelligence is hiding is the training data used to train those models. In other words, language is smarter than us. If you took away language and all the knowledge we put in language from humans, it would take tens of millennia to recover. A single brain, or even a whole generation, is not capable
  • One of the few cases where the answer is "yes". TFA displays remarkable unfamiliarity with the subject in every respect.

    A year since the launch of ChatGPT, I wonder if the answer isn't just what it seems: A.I. is simultaneously impressive and pretty dumb.

    Well ... duh?

"It's the best thing since professional golfers on 'ludes." -- Rick Obidiah

Working...