Will AI Really Perform Transformative, Disruptive Miracles? (theatlantic.com) 154

"An encounter with the superhuman is at hand," argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in the Atlantic titled "Of Gods and Machines". He argues that GPT-3's 175 billion parameters give it interpretive power "far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic."

But despite being a technology where inscrutability "is an industrial by-product of the process," we may still not see what's coming, Marche argues — that AI is "every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view." Science fiction, and our own imagination, add to the confusion. We just can't help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner — people-machines that remain pure fantasy. Then there's the distortion of Silicon Valley hype, the general fake-it-'til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process "artificial intelligence." And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn't mean that the miracles aren't happening. It just means that they won't look how anybody has imagined them. Arthur C. Clarke famously said that "technology sufficiently advanced is indistinguishable from magic." Magic is coming, and it's coming for all of us....

And if AI harnesses the power promised by quantum computing, everything I'm describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn't expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won't even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to "And if AI harnesses the power promised by quantum computing"; machines did the rest.

Stephen Hawking once said that "the development of full artificial intelligence could spell the end of the human race." Experts in AI, even the men and women building it, commonly describe the technology as an existential threat. But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It's a continuation. The imagination tends to be utopian or dystopian, but the future is human — an extension of what we already are.... Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness.... They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided.

But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.

This discussion has been archived. No new comments can be posted.

  • Maybe. (Score:5, Insightful)

    by Lohrno ( 670867 ) on Saturday September 17, 2022 @02:47PM (#62890017)

    But this is all just speculation until you show me something.

    • Re:Maybe. (Score:5, Interesting)

      by timeOday ( 582209 ) on Saturday September 17, 2022 @02:55PM (#62890033)
      So, how about this snarky assertion about the present in the article - "Remember when everybody believed that the internet was going to improve the quality of information in the world?"

      That bugs me. How do you make a blanket judgment about something like that? Could I have maintained my cars and motorcycles and house the same way without all the information on the Internet, just using some DIY book from the library? I don't think so. Or go ahead and look at politics: you think people were so enlightened? Then why does anything actually written at the time blow your mind with how offensive it is? We had a civil war, ffs.

      It's very hard to perceive the reality you live in and make meaningful comparisons to how other people felt about the reality they lived in - and that's when you're looking into history and the facts are known.

      Now try to do the same for the future, which is almost entirely unknown.

      I think AI will wipe out humanity, but by our own choice: we'll merge into some transhuman hybrid that still feels human to them, but wouldn't to us if we were transported ahead to that time.

      • Re:Maybe. (Score:5, Insightful)

        by phantomfive ( 622387 ) on Saturday September 17, 2022 @03:55PM (#62890141) Journal

        How do you make a blanket judgment about something like that? Could I have maintained my cars and motorcycles and house the same way without all the information on the Internet just using some DIY book from the library?

        I had to go to a library the other day to search for some stuff that isn't online. Finding data was so slow, people have no idea.

        For modern things, like the Ukraine special operation, I know what is going on pretty near immediately after it happens.

      • The Internet absolutely did improve the quality (and accessibility) of the information in the world.

        It also increased the reach, quantity, and speed of the disinformation in the world, but that's a separate issue...

      • > Now try to do the same for the future, which is almost entirely unknown.

        This has a technical name - counterfactual reasoning - and is one of the hardest things to accomplish by AI and humans alike. See Judea Pearl for more.
    • If it's science, it's not a miracle.

      • by caseih ( 160668 )

        To me what science and technology can do is miraculous. I understand the underlying principles and natural laws that the fruits of science and technology are based on, yet I choose to maintain my sense of wonder and awe, and also gratitude. For example, even though I have a pretty good understanding of the principles of thrust and lift, it's still deeply moving to look at a big airplane that's carried me across oceans in relative comfort and marvel at it all. It's possible to not be completely jaded in t

        • "Wonder and awe" are different than miraculous. It's a different definition of the word.
          However, flying is really great.

    • by Kisai ( 213879 )

      AI will improve, but it will also hit a wall.
      a) To improve, it needs constant sources of new data
      b) You can't just create a blackbox with no new input. GPT-3, and various computer vision, text-to-speech and NLP models will not recognize new inputs if left isolated. For example, say I invented a new device and called it "The Gecko"; without having learned of it, the AI will fall back on the definition of "a gecko" it already knows. Which means that you will be re-training NLP AIs every year, and constantly adding

      • > You can't just create a blackbox with no new input.

        Well, actually you can, but you have to have a simulator to learn from sim outcomes. It goes like this: the model takes actions in the environment, then we measure the result. It learns to improve the result. That's how AlphaGo trained by playing itself - the simulator was a go board and a clone. Other AIs can solve math by verifying which of their proposed solutions was correct, then using that knowledge in training. You can do the same with code as l
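The learn-from-outcomes loop sketched above (act in a simulator, measure, improve) can be caricatured in a few lines of Python; the "environment" and its hidden optimum below are invented stand-ins, not anything from AlphaGo:

```python
import random

# Toy "simulator": reward is highest when the action is close to a hidden
# optimum the learner never sees directly.
HIDDEN_OPTIMUM = 0.73

def simulate(action):
    return -abs(action - HIDDEN_OPTIMUM)  # higher reward = closer

# Learn from simulated outcomes alone: propose, measure, keep the best.
random.seed(0)
best_action, best_reward = 0.0, simulate(0.0)
for _ in range(2000):
    candidate = best_action + random.uniform(-0.1, 0.1)
    reward = simulate(candidate)
    if reward > best_reward:
        best_action, best_reward = candidate, reward

print(round(best_action, 2))  # converges near 0.73
```

No labeled data is ever supplied; the simulator's scoring of outcomes is the only teacher, which is the essence of the self-play setup the comment describes.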
  • No (Score:5, Insightful)

    by lucasnate1 ( 4682951 ) on Saturday September 17, 2022 @02:50PM (#62890025) Homepage

    No

    • applications.
      how about average joe a i applications.
      credit card applications.
      home loan applications.
      medicare.
      insurance.
      of course the only folks using a i are now billionaires.
      and they have publicly stated that a i is bad

  • AI has great potential to transform our lives for the better. It also has just as much potential to be used to control us and restrict us. It will be mostly annoying to the smarter people and downright seductive to the idiots.

    Mostly though, it will probably lead to more efficiencies. With enough connectivity, awareness and control AI could do some amazing things such as better food production. More efficient routing of data. Better shipping coordination. Better ways of sharing energy and water.

    An AI could g

    • In July, OpenAI launched with great fanfare the paid version of its image generation model Dall-E. Two months later the Stable Diffusion model was released; it runs on your machine and costs only as much as a bit of electricity. To train SD they used 2 billion image-text pairs, but the final model size is 4GB - that comes to about 1:100,000 compression, 2 bytes for each example. But it can generate anything we can think of, so you get an internet's worth of imagery in a model the size of a DVD. That makes AI capable
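The compression figure in the comment above is easy to check as back-of-envelope arithmetic (taking the quoted numbers at face value):

```python
# Back-of-envelope check of the claim: 2 billion training pairs
# distilled into a ~4 GB model checkpoint.
model_bytes = 4 * 10**9          # ~4 GB model
training_pairs = 2 * 10**9       # ~2 billion image-text pairs
print(model_bytes / training_pairs)  # 2.0 bytes per training example
```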
  • by cats-paw ( 34890 ) on Saturday September 17, 2022 @03:01PM (#62890043) Homepage

    My cat is far, far smarter than the best "AI" we've developed.

    We have created amazing fitting algorithms for enormous problem spaces, algorithms that can fit the coefficients of huge systems of equations to large data sets while running relatively unattended.

    It is kind of amazing as they can "notice" things they weren't really told to look for.

    However, i'm extraordinarily suspicious that some smart person is going to figure out that we can do a similar thing without using ML to do it, i.e. it seems like it may be an unnecessary step. i'm probably totally wrong, but it just seems like it's solving a big correlation problem. it even uses techniques from that problem space.

    meanwhile, back to the point. there's no AI. True AI, something that can do what my cat can do, i.e. jump on the kitchen counters because it knows i prepare its meals up there, is still an incredibly long way off.

    Hell, let's see if they can make something as smart as a bumblebee in the next 10 years.

    I bet not.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      My cat is far, far smarter

      Username checks out.

    • by Rei ( 128717 ) on Saturday September 17, 2022 @03:42PM (#62890117) Homepage

      Today's high-end AI is generally, compared to humans:

        * Better at imagination, but...
        * Bad at logic (and lacking life experiences on which to base logical decisions)
        * Significantly underpowered in terms of compute capability

      Also, for most advanced "AI", it's better to just think: "Extreme linear algebra solvers for finding solutions to extreme multivariate statistics problems". Takes the metaphysics out of it. The real question is not what AIs are doing, but what we are doing when we think.

      I like to think about the comparison with reverse diffusion image generators like StableDiffusion, DALL-E, MidJourney, etc. They, like us, don't work in image space, but in embedding space, shared between both image and text. Latent space. A space of concepts. Where logical operations can apply to the embeddings - where the embedding for "woman" plus the embedding for "king" minus the embedding for "man" resembles the embedding for "queen". A good example was when someone in the StableDiffusion community showed that if you start with an embedding of Monet's "Bridge Over a Pond of Water Lilies", add in "photography" and subtract out "impressionism", you get what looks like a photograph of the same scene upon converting the embedding back to image space, without ever telling it to draw a bridge or pond of water lilies. And just like us, the process of converting back from the (far smaller) latent space to the (far larger) image space involves extensive use of imagination to fill in the gaps, based on what it - or we - was trained on.
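The embedding arithmetic described above can be demonstrated with toy vectors; the four-dimensional values below are invented for illustration, whereas real models learn hundreds of dimensions from data:

```python
import numpy as np

# Toy embeddings: dimensions loosely stand for (male, royal, female, fruit).
# The numbers are made up to illustrate the arithmetic, not learned.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "man":   np.array([0.9, 0.1, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.9, 0.0]),
    "queen": np.array([0.1, 0.8, 0.9, 0.0]),
    "apple": np.array([0.0, 0.0, 0.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(vec, exclude=()):
    # Return the vocabulary word whose embedding is most similar to vec.
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine(emb[w], vec))

# "king" - "man" + "woman" lands nearest to "queen".
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```

The Monet example in the comment is the same operation, just with image-scale embeddings in place of these toy word vectors.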

      And the results can be truly stunning. Yet on anything that requires logic, it's a massive failure. Spelling. Relationships. Ensuring conceptual uniqueness. It's working on single embeddings trained to the *existence* of objects in the scene (CLIP), and unless it was trained to the *specific* thing you asked for (like "a red box on a blue box"), it won't understand the logical implication there. You have to wrench it into getting the right answer in complex scenes by providing hand-drawn templates for it to diffuse or with postprocessing.

      There's a lot of work on improving that, but just using it as an example, we're currently in an era where AIs can be stunningly imaginative (atop deep breadths of knowledge) and yet trounced by an infant when it comes to logic.

      • by phantomfive ( 622387 ) on Saturday September 17, 2022 @03:57PM (#62890145) Journal

        Another way of looking at it that you might find interesting: current AI is good at interpolation, but horrid at extrapolation. That is essentially what a neural network is at the mathematical level: a heuristic interpolator.
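A minimal sketch of that interpolation-vs-extrapolation asymmetry, using a degree-9 polynomial fit as a stand-in for a trained network:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Fit a flexible model to sin(x) on [0, 6], then query it both inside
# and far outside the training range.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 6, 200)
model = Polynomial.fit(x_train, np.sin(x_train), deg=9)

err_inside = abs(model(3.0) - np.sin(3.0))     # interpolation: inside [0, 6]
err_outside = abs(model(12.0) - np.sin(12.0))  # extrapolation: far outside
print(err_inside < 0.01, err_outside > 1.0)    # tiny error inside, huge outside
```

Inside the training range the fit is excellent; outside it, the polynomial diverges wildly, which is the "heuristic interpolator" failure mode in miniature.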

        • Humans too. We're horrible at extrapolation. In the 1340s the Black Death was killing large swaths of society and we couldn't extrapolate our way to the germ theory of disease. Not even when our lives depended on it.

          We're assuming all discoveries we make as if they come from a place of deep intelligence, but in reality we stumble into discoveries by accident and then in hindsight we think we're so great.

          If you give AI access to the physical world to the same extent we have had, it would become more intelli
      • * Better at imagination, but...

        Don't think so.

        Imagination requires spontaneity. There is not a single thing that AI does that is equivalent to a flight of fancy.

        • by Rei ( 128717 )

          Humans are TERRIBLE at randomness compared to computers. Ask a random person to write down 100 random numbers, then hand it over to a statistician to assess how random it is. I guarantee you, it won't be random at all.
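One of the simplest checks such a statistician might run is the adjacent-repeat rate; the "human" digit string below is invented for illustration:

```python
import random

# In truly random digits, adjacent digits match about 10% of the time.
# People writing "random" digits tend to avoid immediate repeats entirely.
human_guess = "2758391648273951820463719284056172839"  # invented, repeat-averse
random.seed(42)
machine = "".join(str(random.randrange(10)) for _ in range(100_000))

def repeat_rate(digits):
    return sum(a == b for a, b in zip(digits, digits[1:])) / (len(digits) - 1)

print(round(repeat_rate(machine), 2))  # about 0.10, as theory predicts
print(repeat_rate(human_guess))        # 0.0: not a single immediate repeat
```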

          • Humans are worse than computers on all but a few tasks, starting with addition and randomness. For now AI can equal humans on any task where there is large training data and examples fit into a sequence of no more than 4000 tokens, or where we can simulate millions of episodes and learn from outcomes. We still hold the better overall picture; we have longer context, for now. Copilot can only write small snippets of code, not whole apps. Yet.
      • > It's working on single embeddings trained to the *existence* of objects in the scene (CLIP)

        The solution is right in your words. It's because it uses CLIP embeddings as representations that it lacks the ability to properly assign properties to objects; they are all mixed up in the single embedding. But if you give it a bunch of embeddings, like an array of 128 embeds instead of just one - the attention mechanism is actually great for compositionality by concatenation - it does all-to-all interact
    • When I was taking an AI comp sci class in undergrad at a top ~10 or so department 20 years ago, chess was still a big focus of game playing AI and search research. Go was given as an example of a monstrously large search space and a game that we just didn't even have any conception of how to tackle.

      ~2011/11 I remember having a conversation with a computer scientist friend who works at Johns Hopkins APL. He was also an amateur dan Go player and worked on AI. He was firmly in the camp that no Go program would

    • Your cat interacts with you mostly to say things like "bring me my dinner" and "I pooped in the bedroom, go clean it up". Given cats have basically enslaved humans, I don't know that "not as smart as my cat" says much.

    • by Jeremi ( 14640 )

      Hell, let's see if they can make something as smart as a bumblebee in the next 10 years.

      An artificial bumblebee would be quite an accomplishment, but what words would you use to describe somebody who taught himself to play chess at grandmaster level in four hours -- and then went on to earn a 3500+ Elo rating (22% higher than the world's reigning chess champion), and introduce the world's chess players to entirely new strategies and tactics that nobody had considered before?

      I'd call that person pretty smart; the fact that it's actually an AI [theatlantic.com] and not a person that did that makes the accomplishm

      • An artificial bumblebee would be quite an accomplishment, but what words would you use to describe somebody who taught himself to play chess at grandmaster level in four hours -- and then went on to earn a 3500+ Elo rating (22% higher than the world's reigning chess champion), and introduce the world's chess players to entirely new strategies and tactics that nobody had considered before?

        It's impressive, but so are calculators. What would you say to someone who can multiply two 7 digit numbers in 5 seconds? A person who could do it would be intelligent, a computer would not.

        That is, computers are lacking other things considered necessary for intelligence.

        • by Jeremi ( 14640 )

          It's impressive, but so are calculators. What would you say to someone who can multiply two 7 digit numbers in 5 seconds? A person who could do it would be intelligent, a computer would not.

          If you were talking about IBM's Deep Blue (which, after having been programmed by a team of chess experts to play chess at a high level, was able to beat Kasparov), I'd agree with the calculator analogy.

          But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself. If that isn't a form of intelligence, I don't know what is. (Note that intelligence and sentience are two different things -- I'm not claiming sentience here)

          • But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself. If that isn't a form of intelligence,

            It's still a calculator, just a really good one. Basically what it is doing is collecting positions. After playing millions of games, when it comes across a new position, it says something like, "based on the previous games that looked like this one, in 90% of the games, moving R-d4 was the best candidate move."

            Furthermore, it still "cheats" by being able to look through many many moves every second. It's still calculating through the move tree, it was programmed to play that way: like a computer, not a hum

            • What the fuck do you think a human brain does, bro? I am seeing less and less of your point the more you post it.

              The brain is a highly optimized biological pattern matcher with a bunch of weird shit we don't understand also going on. You don't need to know all that weird shit to develop other artificial pattern matchers that work differently but far, far better in specific areas.

              • by phantomfive ( 622387 ) on Sunday September 18, 2022 @01:32AM (#62891271) Journal

                What the fuck do you think a human brain does, bro?

                If I knew, I'd have won a Turing Award. Speaking of Alan Turing, one of the things a human can do that AlphaZero can't is simulate a Turing machine.

                The brain is a highly optimized biological pattern matcher with a bunch of weird shit we don't understand also going on.

                The weird shit is pretty crucial.

                • Again, I think you're missing the point and you're hung up on this "like a person!" strawman. A Turing Award, biological drives, or "creativity" as defined by the human experience aren't required here. What is required is the ability to look at absurd amounts of data and to find patterns a human couldn't.

                  You're moving the goalposts. The point isn't that an AI will be like a human anytime soon (we aren't even close). The point is that AIs will discover amazing things we aren't even prepared for over the next d

                  • If your point is that AI can still be useful even if it can't think, then you are making the same point as Dijkstra when he said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." If that is your point, then I concede you are correct. Neural networks are really cool.

                    However, I will claim that people who call that AI are morons for calling it AI. They should call it, "cool algorithms we invented while searching for AI" or something like that. So

          • by jvkjvk ( 102057 )

            >But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself.

            Yes. It played a lot of games against itself using *random moves* and developed a statistical model of what moves in what positions yielded the highest percent of winning endpoints.

            Then, when playing, it simply can do the same, starting from the current position, for the length of the turn, filling out more probabilities.

            Then, it picks the highest probability answer.

            It doesn
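The random-playout procedure described above is easy to sketch on a toy game; Nim is used here as an illustrative stand-in (AlphaZero's real loop also trains a neural network to guide its search):

```python
import random

# Pure Monte Carlo move choice on toy Nim: take 1-3 stones, whoever takes
# the last stone wins. No strategy is coded in; the program just tallies
# which first move wins most often under random play.
def random_playout(stones, my_turn):
    # my_turn: True if it is "my" move next. Returns True if I win.
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        my_turn = not my_turn
    return not my_turn  # the player who took the last stone wins

def best_move(stones, playouts=20_000):
    win_rate = {}
    for move in range(1, min(3, stones) + 1):
        wins = sum(random_playout(stones - move, my_turn=False)
                   for _ in range(playouts))
        win_rate[move] = wins / playouts
    return max(win_rate, key=win_rate.get)

random.seed(1)
print(best_move(10))  # 2: leaves 8, a multiple of 4, the known optimal move
```

The program never "knows" the multiples-of-4 theory of Nim; the statistics of random endpoints alone steer it to the theoretically correct move, which is exactly the point being argued.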

            • by Jeremi ( 14640 )

              It doesn't know how to play chess. It doesn't know about attack or defense, position, influence or anything else. It knows the probability that each move will result in a win. That's it.

              One could make similar criticisms about an anthill -- an anthill isn't sentient and doesn't know anything about anything.

              And yet, the anthill is nevertheless able to solve complex and novel problems in intelligent ways. If that makes the anthill "a mindless calculator", then so be it -- but it's an intelligent calculator. I submit that AIs can also exhibit this sort of mindless intelligence.

        • You keep moving those goalposts, lol. Really put your shoulder into it!

          Find me a person, cat, or bumblebee who can analyze how a drug interacts with billions of combinations of proteins and devise other drugs that would behave similarly and have fewer side-effects.

          But, but, but that's just like a calculator maaaan! Fucking bullshit it is. You are inventing some scenario where to create new things one must be "intelligent" with some ineffable "something" that we can't simulate. That scenario doesn't exist.

    • by narcc ( 412956 )

      back to the point. there's no AI. True AI,

      What you call "true AI", and what I suppose is the common understanding of the term, philosophers call 'strong AI'. I've taken to calling it 'science fiction'. You are absolutely right. No such thing exists. While it would be nice if we could reserve the term 'AI' for the common meaning, that battle was lost before it even began. Pamela McCorduck, who was there at the time, explains the origin of the term "AI" in her book Machines Who Think. It's well worth a read.

      The term AI, as it is now, is pretty

    • No it isn't. Can your cat put his little paws on a steering wheel and drive down to the store, peeping his little eyes over the steering wheel and using his other little paw to push the accelerator (I'll even allow for alternative brake/accel pedals)? Can your cat look at 500 simultaneous streams of video and put names to faces for large swathes of the people in the videos?

      Sorry, your cat is a dumbass. There are things your cat can do better than a computer, hell there are things it can do better than a hum

      • Yes. A tool. All tools do things that unaided humans cannot. AI can analyze masses of data that a human couldn't, a crane lifts objects that humans, even a lot of humans trying to work together, couldn't.

    • Well, a bumblebee is smarter than a TikToker, so there's that.
    • back to the point. there's no AI.

      That is not, in my opinion, true. It's just an artefact of ever-moving goalposts. What we have now is that, by artifice, we can solve things that formerly required human intelligence to solve. Hence artificial intelligence. AI doesn't need to be the same as a complete artificial human mind, any more than artificial flowers need to grow to be called such. We have lots of artificial things, none of them are ever a 1-1 substitute for the original but nonetheless, "artificial" is

    • Does your cat beat you at Go and Chess, fold proteins and paint amazing images on request? Can it implement even a simple Python script on request?
    • Even the "dumbest" AI has the propensity to tell us novel things that a cat cannot, that we don't now know, and all from watching the dang cat.

      That's the crux of it. The information produced is orthogonal to biological neural processing.

  • Machine computing has already performed transformative, disruptive miracles. AI is just more of the same.

  • by crunchygranola ( 1954152 ) on Saturday September 17, 2022 @03:05PM (#62890055)

    "An encounter with the superhuman is at hand," argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in the Atlantic titled "Of Gods and Machines". He argues that GPT-3's 175 billion parameters give it interpretive power "far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic."

    By fitting its 175 billion parameters to text scraped from the web, copying the words of hundreds of millions of real intelligences, GPT-3 can produce a convincing replica of human conversation.

    But it understands nothing of what it says. It is simply Eliza on a gigantic scale. It is true that humans cannot explain the internal processes by which it produces particular outputs (a failure of the technology thus far), since it is performing a vast number of statistical pattern matches. But this is not "beyond human understanding"; it is just that people cannot put into words any meaning when a terabyte core dump is presented to them.

    His fundamental ignorance is proven by his assertion that this is vaster than "our little animal brains". An average human brain dwarfs the puny scale of GPT-3. It is difficult to make a precise comparison, but a single model parameter is most closely analogous to a single synaptic connection, the strength of one link between neurons. The brain has on the order of 10^15 synapses, making it thousands of times larger than GPT-3.
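The scale comparison is straightforward arithmetic, keeping in mind that both counts are order-of-magnitude estimates and that equating a synapse with a parameter is a loose analogy:

```python
# Order-of-magnitude comparison; both figures are rough estimates.
brain_synapses = 10**15          # ~1 quadrillion synaptic connections
gpt3_parameters = 175 * 10**9    # 175 billion model parameters
ratio = brain_synapses / gpt3_parameters
print(f"{ratio:,.0f}")  # 5,714: several thousand times larger
```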

    • But it understands nothing of what it says. It is simply Eliza on a gigantic scale.

      I disagree on that point. The trick of Eliza is how superficially well it can respond without using any information that is specific to what you are actually saying or asking. Eliza could never, EVER play Jeopardy even passably well, whereas modern AI can do so exceedingly well. A big difference, without delving into metaphysics.
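For contrast, an Eliza-style responder really is just a short list of surface-pattern rules; the rules below are illustrative rather than Weizenbaum's original DOCTOR script:

```python
import re

# A minimal Eliza-style responder: reflection rules keyed on the surface
# form of the input, with no knowledge of its content.
RULES = [
    (r"i am (.+)",      "Why do you say you are {0}?"),
    (r"i feel (.+)",    "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def eliza(text):
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # content-free fallback: no Jeopardy answers here

print(eliza("I am worried about AI."))  # Why do you say you are worried about ai?
```

Ask it a factual question and it can only deflect, which is the gap between pattern reflection and a system that can actually answer Jeopardy clues.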

      • by jvkjvk ( 102057 )

        >Eliza could never, EVER play Jeopardy even passably well, whereas modern AI can do so exceedingly well. A big difference, without delving into metaphysics.

        It's not really about what they can do. A modern AI is MUCH bigger than Eliza and *should* be able to do more. It's how they do it. And in that, both are the same, without delving into metaphysics.

    • What is understanding, if not the ability to match patterns and string concepts together? Isn't learning a language very much about constructing a web of relationships between words? What is superior about how you "understand" something vs a computer, if both of you can provide an equally useful answer in a given context? The paragraph in the summary written by GPT-3 has a quality to it that exceeds most humans' understanding of the topic. It's both interesting and insightful, and logically consistent. Hand

    • by gweihir ( 88907 )

      But it understands nothing of what it says. It is simply Eliza on a gigantic scale.

      Exactly. The overall process is rather simplistic, if on a massive scale. But it is not more intelligent than, say, a dictionary that, given a word, can give you an explanation of the word. Obviously a stack of paper with ink on it is not intelligent in any way.

    • this is not "beyond human understanding" it is just that people cannot put into words any meaning when a terabyte core dump is presented to them.

      It's not a problem of putting it into words; it truly is beyond human understanding. We can train a massive model like GPT-3, but we literally have no idea how it works. Somehow those 175 billion parameters manage to encode a whole universe of concepts, relationships, grammar, and much more. How do they encode it? No one knows. We aren't sure how to even begin figuring it out.

      Eliza is totally different. It has a small number of hand coded rules. The author knew exactly what those rules were. Any compet

    • > By scraping 175 billion parameters from the web GPT-3, copying the words of hundreds of millions of real intelligences, it can make a convincing replica of human conversing.

      I tried a GPT-3 prompt and it gives dumb answers if you are like me and don't know how to use it, so yes, very far from a human. However, the thing can give totally wrong answers with great assurance, like a human being.

  • > Remember when everybody believed that the internet was going to improve the quality of information in the world?

    No, no I don’t remember anyone around me that thought the quality of information would improve.
    • by trickyb ( 1092495 ) on Saturday September 17, 2022 @03:36PM (#62890111)
      Well, the internet has improved the quality of information. Wikipedia - for all its faults - is a million times bigger and better than the Encyclopaedia Britannicas of old. When I want to carry out minor repairs on my car or my bike or my washing machine, in a few seconds I can bring up a Youtube video. Remember having to keep several maps in your car, most of which would be out of date and leave you unaware of a bypass built 5 years ago? And so on...
      I will concede that the internet has not improved the quality of information in every domain.
      • by darenw ( 74015 )

        "Grampa, what's a paper map? How did it know where you were and tell you when to turn?"

        • Well, son, a paper map is what you use in Cornwall when you have 10 miles with inexplicably poor phone coverage.

          I forgot my paper map last month and did in fact regret that.

    • by gweihir ( 88907 )

      It has improved. I used to have an old high-quality encyclopedia (not in English). The Internet can replace it now, but requires some level of education, honesty, and the ability to fact-check in the reader. Given what that encyclopedia cost back then, and that you can get something reasonably similar for low cost now, I would say this is massive progress. Even poor people in poor countries can now access the actually known facts with reasonable effort if they so choose. The problem is that most people do not

  • by HiThere ( 15173 ) <[ten.knilhtrae] [ta] [nsxihselrahc]> on Saturday September 17, 2022 @03:26PM (#62890089)

    As currently implemented, AI is a transformative technology. It's still getting started, and we have no idea how far it will go, just that it will go a very long way. This is largely because it can search through really huge specialized databases quickly. There are other bits, but that's the main thing. And don't denigrate its importance.

    OTOH, current AI is not, and I believe cannot be, developed into AGI. That's going to require a very different approach. It will probably require robot bodies operating in the world to do this. It will include the current idea of AI as a component...but only as one component. (OTOH, other components are either in existence or currently being developed, so the final step of integrating them may turn out to be a small one.)

    However, even if we don't build an AGI, current AI in conjunction with humans "will really perform transformative, disruptive miracles". I.e., it will enable changes that have not been predicted, and which will cause the lives of humans to change drastically. We still don't understand how much things will change when the automated car is the common vehicle, just that there will be profound economic disruption...but that's only a part of what the change will be. The automobile resulted in a change of sexual mores in a way that still isn't complete, and this will be something as major, and probably as "not obviously to be expected ahead of time".

  • by 93 Escort Wagon ( 326346 ) on Saturday September 17, 2022 @03:38PM (#62890115)

    - Non-technical people like this author
    - AI researchers asking for more funding

    • Everything's a miracle if you don't understand how anything works - or you get paid for producing miracles.

    • Lol, sure, sure... Look at how far AI/ML has come in the last 5 years alone; you're a confirmed nut if you think we won't start seeing startling shit out of the field in the next 10 years. I am in neither group and I believe it.

      Being a cynic is easy; it's not like someone will remember and come mock you in 8 years when some AI discovers a drug regimen that cures 60% of known cancers, or devises a new type of battery 40% lighter with 50% more energy density and faster charge capacity than anything we have no

      • Lol, sure, sure.. Look at how far AI/ML has come in the last 5 years alone,

        It's hard to remember that AlexNet was only published in September 2012, almost exactly 10 years ago.

        That was pretty much the watershed paper: it turned ANNs from "old-fashioned thing that doesn't work that well, which you use if you're not smart enough to use the clever modern techniques like obtuse SVM variants" to "holy shit, gradient techniques work". And the design of AlexNet is also vastly simpler than all the techniques it hand

  • That will outpace the rate of job creation. Honestly, studies have shown that just regular computer systems have created an automation boom that's outpacing the rate of job creation. It's why you make less with your STEM degree than your granddad did with his high school diploma when adjusting for inflation.

    The trend's been going on since the 1980s. We had two major bubbles, the .com boom and the housing bubble, that kind of disguised what was going on. We also had about a 1 trillion dollar infrastructure
    • Automation makes us richer. That is fact.

      The only question is how the new wealth will be distributed. That can cause problems.

      • Automation makes the few richer. It makes the masses poorer. The 5 board members and 3 C-levels make a boatload; the thousands of former workers without jobs, benefits or a way of supporting themselves have nothing. Teaching millions to "code" isn't the solution. Eventually, there aren't enough jobs to buy all the crap that automation is kicking out, and the system falls over.
      • Automation makes us richer. That is fact.

        You really think you can simplify the complexities of the entire world into 4 words? Of course you can't. That is a fact. As with all things, it is far, far, far more complex and nuanced.

        The industrial revolution didn't make the farm hands displaced by steam power richer. It pushed them into cities, into poorly paid and much more dangerous jobs. Their great-grandchildren are richer, for sure, though they have substantially less leisure time.

        Automation is inevitab

        • You really think you can simplify the complexities of the entire world into 4 words?

          One word, actually: "Yes."

      • Automation makes us richer. That is fact.

        How are you measuring? Imaginary, fiat money? Or natural capital?

        • Mainly in terms of the amount of "stuff" we can produce and have.

          Automation (like excavators and such machinery) allows us to mine faster, but obviously it doesn't change the amount of minerals on the earth. Chemistry and science can increase the amount of minerals available to us.

      • There is no question about the wealth-distribution effect of automation; it will make people with (available) money richer and those without relatively poorer. That is the big, long-term issue with automation.

        Sure, there will be some short-term negative effects, like lay-offs at a higher rate than new jobs created by the same automation causing the lay-offs. But eventually new jobs will be created. If you go back in time 150 years, approximately 90% of the population worked in farming, while just 10%

  • Until we discover HOW wet brains really learn and store useful memories, what we call clever AI will just be faster stupid AI, and will never be really clever.
    • by gweihir ( 88907 )

      And that is if a wet brain actually does it. Nobody knows at this time and there are indications some humans are smarter than is physically possible. We also still have no clue what consciousness is. Known Physics has no mechanism for it and says it does not exist...

    • We don't need any of that. We need something that can pull in more data than a human is capable of and look for patterns and devise solutions based on those patterns. We don't need clever AI or sentient AI.
  • ...to screw over the vast majority of people, like it always has been.
  • 1000 words to say nothing
  • Granted, what laypeople call "AI" isn't true artificial intelligence. Still, it is a technology that is transforming our lives in disruptive ways.

    While it's true that AI won't disrupt the world in ways this author might imagine, the march of computer and internet technology is indeed transforming our world, including through technologies referred to as ML and "AI".

    Consider Google Assistant, Siri, and Alexa. While some disdain these "smart speakers," others use them frequently to effortlessly get answers

    • by gweihir ( 88907 )

      So "AI" as a better keyboard and monitor? I can type a query into a search engine myself; I do not need any "Siri" for that, and I get pretty much the same or better results. I am also not sure the availability of these assistants is a good idea. It may be used by people as an excuse for why being functionally illiterate is actually fine. We have too many "dumb and proud of it" people already.

      • On one side, there are people who may be "dumb and proud of it," and on the other, there are people who think they are smarter than everybody else, and look down on those who don't have the same analytic strengths that they have.

        For you and me, Google searches are a skill we have learned over time. We know how to get the best possible results because we know which words matter and which words throw off search engines. Lots of people don't have that level of computer skills, and need a search engine or voice a

        • by gweihir ( 88907 )

          You do realize the massive potential for manipulation a voice assistant has, right?

          • Yes, clearly. Those people who are easily manipulated by the voice assistant are also easily manipulated by Facebook "news" or Parler or talk radio.

    • Or consider Google Translate. It's now possible to go to a country where you don't speak the language and use your phone as a live, real-time translator. It can translate text, voice, and images, using the technologies we call AI.

      Holy shit it is fucking amazing. Like legit sci-fi amazing. When you find yourself whinging that it's not really as good as the universal translators on ST:TNG, well, I mean the fact it's even comparable to speculative far future fantasy is just incredible.

      I don't care at all that

  • "The world will be something we can't imagine" seems to be the gist of the article, and that people are bad at guessing. Which is a cop-out for a lack of imagination and no confidence in the little you do have.

    Information has improved in quality with the internet (if you pay), but the quantity of information has increased too. And the quality of free information can be bad ("you get what you pay for"). If there is a reason to manipulate someone, then they will try on the internet. Even if it's

  • by gweihir ( 88907 )

    At this time that should actually be enough said. But there is a residual, very small chance we might actually get AGI (and not the "flying car" way, i.e. technically exists, but is completely meaningless), and if we do, it is completely unclear what that would look like. May be anything from artificial stupidity to pretty smart. Will very likely _not_ be smarter than the smartest humans, because there is a real possibility Physics does not support anything smarter in this universe. Then there is the questi

  • > Magic is coming, and it's coming for all of us....

    No, it's only magic when you do not understand it.

    What does the author gain through ignorance?

  • Not some general AI, mind you, but DALL-E and Midjourney and especially Stable Diffusion (free, locally installable and hackable) are right this very moment disrupting the graphic arts world. These AI tools are very much performing a disruptive, transformative miracle, and they are just getting started. Lots of cool motion-video stuff is about to explode out of this next.
    Now, imagine the exact same type of technology, but with music instead of graphics (I pick music because it's another artistic endeavour tha

  • The answer is no, probably. Advances will be incremental, and AI can help, sure. The idea of an "AI explosion", be it for good or bad (a superhuman AI trying to wipe out humanity), is based either on the idea that we'll make an AI that can increase its capabilities without limit, or one that can design a better AI than itself. I don't know where these ideas come from, as there is no experience of either. We are all intelligent beings and cannot increase our capabilities, or design more intelligent human beings. In all proba

  • And this article only adds to the confusion. It complains about hype and then assumes that two very hyped technologies will combine. Quantum plus AI is a common fantasy among people who haven't thought about it. We struggle to get hundreds of qubits, and maybe that doubles every few years. But we are a long way from the hundreds of billions of numbers needed for GPT-3. And quantum entanglement at that scale will have problems of its own, probably slowing progress in doubling. On the other hand, GPT-3 is no
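    The parent's scale argument can be put as quick back-of-the-envelope arithmetic. All the numbers below are rough illustrative assumptions (a 500-qubit machine, a 2-year doubling period), not measurements:

    ```python
    import math

    # Illustrative assumptions, not measurements:
    qubits_now = 500               # "hundreds of qubits" today
    gpt3_params = 175_000_000_000  # parameters in GPT-3
    years_per_doubling = 2         # "maybe that doubles every few years"

    # Doublings needed for qubit counts to reach GPT-3's parameter count,
    # ignoring error-correction and entanglement overheads entirely.
    doublings = math.log2(gpt3_params / qubits_now)
    years = doublings * years_per_doubling

    print(f"{doublings:.0f} doublings, roughly {years:.0f} years")
    ```

    Even under these generous assumptions it comes out to a few decades of uninterrupted doubling, which is the commenter's point.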
  • Do not listen to idiots. Yeah, he knows all about AI, except he knows nothing about AI, but he does know how to write for the Atlantic for other idiots to read.

  • It's junk.
