AI

Humanity At Risk From AI 'Race To the Bottom,' Says MIT Tech Expert (theguardian.com) 78

An anonymous reader quotes a report from The Guardian: Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, organized an open letter published in April, signed by thousands of tech industry figures including Elon Musk and the Apple co-founder Steve Wozniak, that called for a six-month hiatus on giant AI experiments. "We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardize our shared future."

In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be able to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models to be built over the next 18 months would be many times more powerful than those already in operation. "There are companies planning to train models with 100x more computation than today's state of the art, within 18 months," she said. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio -- two winners of the ACM Turing award, the "Nobel prize for computing" -- argues that powerful models must be licensed by governments and, if necessary, have their development halted. "For exceptionally capable future models, e.g. models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready." The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.
Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief
  • Well now (Score:4, Insightful)

    by rmdingler ( 1955220 ) on Thursday October 26, 2023 @06:16PM (#63956855) Journal

    If all that's required for AI systems to create a marvelous plague for their human designers is an underwhelming amount of insightful government attention, well, we are already doomed.

    Our fearless leaders have already won their race to the bottom.

    • Well... yes. If that's all that was required, then the status of our leaders wouldn't matter; it would just take the government of any underwhelming state.

      Maybe that's the point. Decisive action now to prevent us, all of us, from getting into that position. Something along the lines of the nuclear non-proliferation treaty.
  • Read Neuromancer (Score:3, Insightful)

    by Anonymous Coward on Thursday October 26, 2023 @06:18PM (#63956859)
    before listening to anyone trying to be dramatic, particularly in the news today. They're all doing a piss-poor job of thinking through the consequences of their nonsense. Quite frankly - this is the most important race to win in the history of humanity. The first strong AI (able to bootstrap itself to greater capabilities, more or less infinitely) will almost certainly be used to put down any other competing projects. 6 month pause to let the bad actors of the world get that head start? Hard to think of a more stupid decision.
    • by XaXXon ( 202882 )

      You're thinking short term. After we "win", everyone loses.

    • Re:Read Neuromancer (Score:5, Interesting)

      by bugs2squash ( 1132591 ) on Thursday October 26, 2023 @06:45PM (#63956925)
      Well, the first one able to bootstrap itself to greater capabilities.... and pay the mushrooming cloud hosting bills
    • Agreed. But these oversimplifications also have to stop. AI is not one technology, not one model, not one application, nor is it one all-encompassing solution. AI is already diverse and will only get more so. I'm surprised when Slashdotters - who pride themselves on intelligence - fall for rhetoric like this fear porn. Maybe it's just Slashdotters' proclivity for porn in general that makes them susceptible.
      • The issue with most AI solutions today is their inability to demonstrate how the AI arrived at its conclusion. That makes bad information exceedingly difficult to find and filter out of datasets, especially the longer it is present, since AI builds off existing datasets to create new ones.

        This, I believe, is the goal of this misguided albeit well-intended hiatus. The main issue is that I don't believe there is going to be any change in this status in 6 months, so effectively it is to all

        • by Mal-2 ( 675116 ) on Thursday October 26, 2023 @08:37PM (#63957149) Homepage Journal

          It's pissing into the wind. Who is going to comply with a six-month hiatus? They might move AI work to a "black ops" status or site, but it's not simply going to go away. Then the six months will expire, all the accumulated changes that couldn't be implemented during the hiatus will be applied simultaneously, and that sounds like an invitation to disaster.

        • Even if you could determine how a result was arrived at, that doesn't necessarily make it a factually accurate result. The solution for AI, as in human endeavors, is oversight of the result through independent, dispassionate third-party validation. Yeah, yeah, throw shade on that all you want; it has its flaws too, primarily due to the corruptibility of the humans who are party to it. But Bitcoin achieves accuracy via consensus, and that is a great direction to go in for AI validation and oversight.
      • Once AGI arrives, we'll all be force-fed robot pron!

  • by evanh ( 627108 ) on Thursday October 26, 2023 @06:18PM (#63956861)

    When it comes to reasoned decision making, the chatbots seem completely mindless. Sentence construction is fine but the logic is always busted. They're still Eliza bots.

    To me, the only risk is stupidity of expectations.

    • If he thinks AI is bad, he should try using Alexa some time.

      But seriously, we know they've been using these types of AI systems internally with zero regulation for decades. Their problem is that normal people will have access to this kind of automated power. Yet individuals will be held responsible while corporations are given months to come up with an alibi when their expert systems decide to poison or screw millions of customers.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It's easy to make an Eliza bot. 500 lines of javascript [njit.edu] for a fun but dumb pattern matching script.
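
      To make that point concrete, here is a minimal Eliza-style responder sketched in Python rather than JavaScript. It is illustrative only: the rules and phrasings are invented, not taken from the linked script. A handful of regexes fire on keywords and echo fragments of the input back, with no model of meaning anywhere.

      import re

      # Toy rule table: (pattern, response template). Entirely illustrative.
      RULES = [
          (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
          (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
          (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
      ]

      def respond(text: str) -> str:
          """Return the first matching canned response, Eliza-style."""
          for pattern, template in RULES:
              match = pattern.search(text)
              if match:
                  return template.format(*match.groups())
          return "Please go on."  # default when no rule matches

      print(respond("I am worried about AI"))
      # -> Why do you say you are worried about AI?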

      Comparing Eliza to cutting-edge models like GPT or BERT is akin to comparing a basic pocket calculator to a supercomputer. Eliza's rudimentary pattern matching is reminiscent of using an abacus, a far cry from the advanced algorithms we see today. Modern AI transformer architectures, built on deep learning foundations, possess the precision and intricacy of a Swiss watch. They're anchored in layers of neural

      • No one doubts that ChatGPT is more capable and complex than Eliza.

        The assertion is that ChatGPT is just as mindless as Eliza.
        • by gweihir ( 88907 )

          The assertion is that ChatGPT is just as mindless as Eliza.

          Let's call it "the fact of the matter", because it is.

    • LLMs are stupid in the sense that they are good at working with language but terrible at understanding the world. Which only makes sense, since they were trained on language. It's frankly amazing that being good at language yields correct results outside of the language domain as often as it does. But if your job is threatened by an algorithm that gets it right 70% of the time, your job wasn't very secure to begin with.

      The comparison to Eliza is unfair though: Eliza only works with the information in the co

    • by gweihir ( 88907 )

      Same here. Having followed AI research for 35 years now, I also _know_ these things are completely mindless. LLMs have no reasoning ability at all.

  • by Anonymous Coward

    I'm out of the loop. I understand there are many people calling for regulations and for giving governments that regulatory power. What regulations do they mean? I don't mean "they want government licensing"; I mean, are there any specific, concrete things AI is being used for that must be stopped?

  • by ffkom ( 3519199 ) on Thursday October 26, 2023 @06:20PM (#63956865)
    Countries have already learned that not being among those owning an advanced weapon technology ultimately turns against them. Ask the Ukrainians or the Iraqis what "not owning nukes" did for them. AIs, and specifically ones that can be used in the military, are going to be developed, as quickly as possible. Nobody wants to send human soldiers into a war against an aggressor that does not even need to risk the lives of its own human soldiers. So yeah, if hurried development ends up in crazed robot armies fighting humanity as a whole, bad luck. But petitions will not bring about any "halts" to the development efforts.
  • by taustin ( 171655 ) on Thursday October 26, 2023 @06:22PM (#63956867) Homepage Journal

    THE SKY IS FALLING!!!! THE SKY IS FALLING!!!! WE'RE ALL DOOMED!!! DOGS AND CATS WILL BE LIVING TOGETHER, HELLFIRE AND BRIMSTONE WILL RAIN DOWN FROM THE HEAVENS!!! SOMEONE WILL KICK YOUR CHILDREN AND GIVE YOU A WEDGIE!!!

    And the only possible way to save yourselves . . . give us money, and complete control over this new industry so that we can make certain nobody else can get a piece of the action.

    Yawn.

    What we need to do is stop believing the bullshit in the press releases, and stop taking AI seriously until it's worth taking seriously. For what passes as AI now - fancy autofill algorithms - if we ignore it, it really will go away. And there's no reason to believe that will change any time soon.

    Remember NFTs? Remember cryptocurrency? Remember cold fusion? Remember flying cars? Or any of a hundred other fads that were going to change/destroy/improve the world with the snap of a finger?

    If any of these morons actually believed what they're shoveling, they'd be taking a different approach. They're not afraid of AI; they just want to control the industry. Same as every other huge, market-dominating company.

    • For what passes as AI now - fancy autofill algorithms - if we ignore it, it really will go away.

      If you think that Large-Language Models like ChatGPT are going to "go away", you are very seriously deluded.

      And yes, LLMs do function like a type of "fancy autofill", but that does not prevent them from being powerful.

      • by taustin ( 171655 )

        They're the next step in autofill for search engines. That's all they are, and the results are, so far, not all that accurate.

        There's certainly a market for that. A big market. That's why Google and Microsoft want to (continue to) control it with monopoly power. Because it's the search engine market they already make billions from, and they do not like competition.

        Anybody calling it "artificial intelligence" should be sued for false advertising, except there's case law that says an ad claim so ridiculous th

    • by Vancorps ( 746090 ) on Thursday October 26, 2023 @08:01PM (#63957081)

      You are clearly someone who hasn't used ChatGPT. It is far more than fancy autofill. Even my wife uses it to draft emails when she needs to broach emotionally charged subjects with an employee who is misbehaving. She of course reviews the final outcome, but it saves a whole lot of time and typing.

      Flash forward to me: I needed to write a PowerShell script to lock down some specific IIS extensions across many servers. ChatGPT wrote my script in 30 seconds; I tweaked it with environment-specific info on my own computer, because we aren't going to give OpenAI any sensitive info.

      Another project: taking an ARP table and telling me how many IPs are in which subnets. I have over 200k IPs; it took an hour to write with GPT and then another 30 minutes to tweak. It saves a tremendous amount of time. As always, it's trust but verify: as with a script I wrote myself, I test it out in my lab first and make sure it doesn't do anything unexpected.
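
      A task like that ARP-table tally is indeed only a few lines in a scripting language. Here is a minimal sketch in Python, assuming a plain-text dump with the IP address first on each line; the file name and subnet list are invented for illustration, not taken from the script described above.

      import ipaddress
      from collections import Counter

      # Subnets to bucket addresses into -- hypothetical values.
      SUBNETS = [ipaddress.ip_network(s) for s in ("10.0.0.0/16", "192.168.1.0/24")]

      counts = Counter()
      with open("arp_table.txt") as f:  # assumed: one entry per line, IP first
          for line in f:
              try:
                  ip = ipaddress.ip_address(line.split()[0])
              except (ValueError, IndexError):
                  continue  # skip headers, blanks, and malformed lines
              for net in SUBNETS:
                  if ip in net:
                      counts[net] += 1
                      break

      for net, n in counts.most_common():
          print(f"{net}: {n} addresses")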

      Lawyers have even used it to draft depositions; some are stupid and don't double-check to make sure they are citing actual precedents.

      There are no confidence meters for any answers, and because these models can't cite or otherwise tell you how they arrived at a conclusion, their use is largely limited to providing a pretty good starting point.

      Do yourself a favor, try out these solutions.

      • It is far more than fancy autofill

        you're right, it should be: fancy autofill that occasionally chats absolute shit, making you doubt anything it outputs

        there's already been one AI boom where most of today's tech (NLP, image creation, etc.) was prototyped. They said similarly wild things back then about the world-changing effects it would have. Same for the blockchain; we're still waiting for that revolution.

    • I'm reading a Slashdot discussion about AI. One of the posts is the following - please critique the reasoning displayed within and provide a response which I will post as a reply. The response should be in the style of someone on Slashdot (slightly offensive with heavy use of analogies - perhaps pizza-analogies):

      <text of your post>

      AI Skepticism vs. Reality

      Default (GPT-3.5)

      User
      I'm reading a Slashdot discussion about AI. One of the posts is the following - please critique the reasoning displayed wi

  • This is fearmongering.

    The AI we have developed is the equivalent of inventing the abacus: it is an amazing accomplishment that may someday lead to changing our world. But it is still just an abacus, nothing more.

    • I think camera monitoring companies using AI to identify weapons on people entering a school would disagree with you. This same technology will likely be deployed in airports very soon for similar purposes but with mmwave cameras instead of normal cctv.
      • Yeah, it's completely un-American to send your kid to school without his AR-15 or at least his Glock. Who do these AI companies think they are?
    • by cowdung ( 702933 )

      The people making these "panicked" calls are the very people developing this tech.

      I think they are fear mongering so that the government puts so many restrictions that the competition can't catch up.

      It's very cynical.

  • by Tony Isaac ( 1301187 ) on Thursday October 26, 2023 @06:26PM (#63956881) Homepage

    There are two kinds of safety standards. One kind is imagined preemptively by academics or lawyers. This is where bureaucratic red tape comes from. People are terrible at assessing risks before they happen. The result is a crazy set of rules, many of which address problems that never actually happen but that we imagine would be terrible.

    The other kind is the result of people getting hurt. Though tragic, this kind of safety standard is based on empirical data and addresses risks that are real. Things like roads and the safety features of cars are designed using this kind of analysis.

    The sad truth is, you can't anticipate the real risks until they happen. On that basis, we should *not* pause, we should instead let AI run its course, and be alert for problems.

    • Agreed; better off creating a task force whose job is to monitor AI development across different industries. A pharmaceutical company using it incorrectly could have serious health repercussions, but using it early in the process could significantly speed up new drugs and treatments. Humans are terrible at calculating even known risks; think of all the people afraid of the flying part of flying. In my experience more people are afraid of the airport and getting lost or not getting to the right place on time or l
  • the jail has free room and board, at a much higher cost than welfare

  • by SendBot ( 29932 ) on Thursday October 26, 2023 @06:34PM (#63956897) Homepage Journal

    In Max's best imagination of how AI could disaffect humanity, how does it compare to the mundane ways humans consciously choose to disaffect humanity to satisfy an optimization algorithm? What if an AI made freight trains be 4 miles long so that roads are always blocked, to the extreme that timely medical care is fatally denied? Maybe the AI "decided" that occasionally derailing and contaminating a city is more optimal than allocating cheap resources to balance load in response to braking requirements.

    People did, can, and will beat machines to such bad outcomes easily.

    The answer is easy: when one or more humans notice the defective behavior, choose better.
    The implementation of that easy answer is the tricky part.

    • by gtall ( 79522 )

      Max also thinks the world is literally made of mathematics; not figuratively, literally. I hesitate to take his opinions as any more than him driveling on as he usually does. AI may be a threat, but I do not value his opinion on the matter.

  • Will we get more laws like NJ's ban on pumping your own gas, to save jobs?
    Maybe laws that ban 100% self-checkout, or that mandate at least X cashiers per Y self-checkout stations?
    And so on.

  • by sabt-pestnu ( 967671 ) on Thursday October 26, 2023 @06:35PM (#63956903)

    James P. Hogan wrote a book (The Two Faces of Tomorrow) that opens with an AI being asked for the most expedient way to build a tunnel on the moon. The AI's solution was to use a railgun cargo-shipment system as a kinetic delivery device, to the chagrin of the operators and the hazard of anyone near the site.

    Cory Doctorow wrote a short story where automobile self-driving systems developed emergent behavior that included, effectively, flocking. To the hazard of anyone in the cars, and possibly anyone nearby.

    We aren't going to know the dangers until they happen. But we already have people (even lawyers) relying on large language models to answer questions, and ruing the results. We can't stop this flavor of AI - your pause for caution is his squandering a financial advantage - but we can still think about it, and plot out hazards.

    And as my examples show, we have been doing just that.

  • Buy stamps (Score:3, Funny)

    by dgatwood ( 11270 ) on Thursday October 26, 2023 @06:36PM (#63956907) Homepage Journal

    Every time I read one of these stories about AI, I think back to every conversation I've ever heard involving automated telephone systems, where you say, "I want to speak to an agent" and it says, "I think you said you want to buy stamps. Is that correct?" and you say "No," and then curse at it and it says, "I think you wanted to go to the main menu. Is that correct?" and I breathe a sigh of relief, because I figure the massive job losses that people fear AI will cause probably won't happen in our great-grandchildren's lifetime at this rate. :-D

    • Re:Buy stamps (Score:4, Insightful)

      by misexistentialist ( 1537887 ) on Thursday October 26, 2023 @07:38PM (#63957043)
      Yet those automated telephone systems eliminated 1000s of jobs. It doesn't have to be better as long as it's cheaper.
      • by dgatwood ( 11270 )

        Yet those automated telephone systems eliminated 1000s of jobs. It doesn't have to be better as long as it's cheaper.

        I doubt the supposed AI systems have actually eliminated any jobs. If anything, they likely resulted in having to hire more retention employees trying to make up for the reputational damage caused by the phone systems. Now the really early systems where you push 1 to do whatever and push 2 to do something else — *those* cost people jobs, but that was just simple automation replacing people, not AI per se.

  • Will that be humans or AI? 'Cause I'm guessing AI will end up the "top" in this relationship. :-)

  • The useless Guardian article doesn't even link to the actual paper. I managed to find it here:

    https://managing-ai-risks.com/ [managing-ai-risks.com]

    • Thanks for digging that up. It's rather pointless to discuss the risks without actually naming those risks.

      It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?

      Economic inequality is a problem, but in our current economic system it's going to get worse over time whether there is AI or not. The cynic in me wonders whether slowing down AI is just a way to stretch the status quo a bit longer, slow-boiling us frogs.

      T

      • by Meneth ( 872868 )

        It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?

        Yes, it is. As the paper says, "Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective."

        Combine that with a system sufficiently skilled in AI research, and you'll get something that can recursively increase its own intelligence to far-superhuman levels.

        Combine that with any utility function we can currently specify, none of which is existentially safe in the limit of optimization pressure, and you'll get the extinction of all Earth-born lif

        • None of that has any chance of happening in the near future.

          Machine learning takes a huge amount of computation. In particular, while larger-capacity networks become more powerful, the gains require exponentially larger networks. For example, Microsoft has already admitted that while GPT-4 performs well, it is too computationally expensive to deploy at a large scale. Any AI with superhuman levels of intelligence would require so much compute power that it would be easy to detect and shut down: you could literally p

    • by Meneth ( 872868 )
      I found it via a two-day-old Time article [time.com].
  • The risk is not AI (Score:4, Interesting)

    by MpVpRb ( 1423381 ) on Thursday October 26, 2023 @07:19PM (#63957005)

    The risk is people who use AI as a weapon

  • by Opportunist ( 166417 ) on Thursday October 26, 2023 @07:30PM (#63957031)

    If you look at the way AI "learns", you'll notice that it gets worse and worse as time goes on. The original models were trained on human products. And let's face it, we have standards. Not high ones, mind you, but we, in general, know what we're talking about. If a human talks about a car, we all have a pretty good idea what is required for a car to be recognized as a car. Same if we talk about a house, a human, a cow, even abstract things like an idea, a dream, hope, desire.

    We do understand that these words mean something specific. We may not all attribute the same meaning to them, or they may not have the same value to us, but they represent something that we can all, at least mostly, understand.

    And AI does not understand anything. It can correlate and deduce, but it does not understand. At least the first generation of AI actually had a pretty good run... but this is where the problem starts.

    The following generations will be trained on diluted input material, because AI already generates content itself now. And we all know that AI is far from perfect when generating: even if nobody messes with the input material, AI often draws horribly wrong deductions and conclusions. But the amount of content created makes it impossible to vet and audit the generated content. Since it is also quite hard to tell human from AI-generated content, and since AI generates content faster than humans can, following generations of AI will be trained on more and more AI-generated content.

    Garbage in, garbage out.
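
    That compounding is easy to see in miniature. The toy simulation below, purely illustrative, fits a one-dimensional "model" (just a mean and standard deviation) to data, then trains each new generation only on samples drawn from the previous generation's model. Estimation error accumulates, and the fitted distribution drifts away from the real one instead of staying pinned to it.

    import random
    import statistics

    random.seed(42)
    N = 100               # samples per generation (small, so drift shows quickly)
    mu, sigma = 0.0, 1.0  # generation 0 learns from the "real world": N(0, 1)

    for generation in range(51):
        # Each generation is fitted only to output sampled from the previous one.
        samples = [random.gauss(mu, sigma) for _ in range(N)]
        mu, sigma = statistics.fmean(samples), statistics.stdev(samples)
        if generation % 10 == 0:
            print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")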

    The danger here is that AI will learn from quite heavily damaged, if not outright false, source material. Since AI has even less capability to tell reality from bullshit than any conspiracy loony out there, what you get, given enough time, is an AI model with a so completely fucked-up image of reality that the average flat-earth reptiloid hunter sounds sane in comparison.

    And now the problem starts: as we already know, there are way, way too many people who know SO little that they can't even detect bullshit told to them when it is blatantly obvious. And now all that increasingly crappy content generated by AI is being dumped onto exactly these people. If you think that we're currently living in "postfactual times", you ain't seen nothing yet: wait until these bullshit generators meet the gullible masses.

    • by Mal-2 ( 675116 )

      So the enshittification begins even before it's built? I guess that represents the advancing pace of tech to a T.

    • If you look at the way AI "learns", you'll notice that it gets worse and worse as time goes on.

      The ability to learn only improves with time. Things are getting better faster, not the other way around.

      AI is able to reflect on its own knowledge to improve itself.
      https://openreview.net/pdf?id=... [openreview.net]

      RAG and similar schemes help to ground models to some extent, yet there is a long road ahead.

      Present-day LLMs are severely limited structurally. Models have no senses and no real-world experiences of their own, lacking even the ability to form their own memories or to experiment through trial and error. Modes of thought are

  • by Dan East ( 318230 ) on Thursday October 26, 2023 @07:45PM (#63957057) Journal

    So which governments are going to halt development within their countries? China? Iran? North Korea? India? Russia? Oh wait, not those countries. Just the other Western countries so that they totally fall behind in this technological curve.

  • Fine. But what do you regulate? All the brain-dead, hard-coded chat bots? Or only the LLMs? And how do you know what is behind a company's web site? Ask them and they'll tell you, "No, it's just a simple Python script. Exempt from regulations.* But it's all trade secrets anyway, so you don't get to look."

    The AI we will have to worry about is not the stuff that's advertised. It's the stuff we'll never know about.

    *Like how broadband providers classify themselves as "not common carriers". Because they d

  • Conservatives are deathly afraid of any intelligence that can pierce their lies like a hot knife through butter. It is dangerous to know things that would topple the established authority for being in bed with globalists and the 1% rather than running a democracy like it should be run: of the people, by the people, for the people. There is a very small group of people who have a lot to lose should the truth be known. AI has the potential to lay waste to madness or incite it, depending on which side of the ai
  • Max Tegmark is a guy I remember reading about 20+ years ago, when he wrote an article in Scientific American about how, if you pick any direction in space and go far enough, eventually you'll have to find an arrangement of atoms identical to the one right here.

    As a kid I lapped it up.

    As an adult, I'm less impressed with the ascription of Meaning with a capital M to something by definition so far outside our light cone as to be irrelevant. By a guy calling himself a physicist.

    A "6 month moratorium" is an equally meani

  • The current economic status quo, however, is going to implode. Whether or not the aftermath will lead to utopia or dystopia is going to be a coin flip.

  • Funny (Score:2, Interesting)

    by Wizardess ( 888790 )

    Now, it is pretty well known that technical people, at least in California, tend to range from remarkably non-religious to actively anti-religious. A subset of these people seem to be hell (?) bent on creating an AGI, essentially an emergent God, that they can find themselves worshiping. I find this highly amusing, interesting, and alarming all at once. I think this should be thought through far more carefully, a decade ago if you have a time machine. Note that if you bias the data to have it come out the way you think un

  • What with all the other races to the bottom, e.g. environmental degradation, climate change, political shortsightedness, money-grubbing, food/water scarcity, cultural upheaval, wars, and general psychological aimlessness, amongst others primarily in the West, it surely is an exciting race to spectate, to see what will bring us to the bottom first.

    From where I sit, my money is not on AI though.

  • Max Tegmark's most recent paper is about:

    https://www.researchgate.net/s... [researchgate.net]

    So I guess his raising the panic level makes him more relevant.

    The main "danger" LLMs present is the dumbing down of public discourse with all this end of the world talk.

  • AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk

    Anything which promises to make conversations with human customer-service agents a thing of the past is worth the risk in my book.

  • Every AI result should include how much electricity it consumed. That cloud isn't cheap.
  • or the letters people sign. It's always a nebulous list of dangers, but they never say "we need to stop X, Y, and Z problems." They always dance around any actual risks. Once in a while somebody'll talk about killer robots and other sci-fi B.S., but never anything that would inform public policy.

    Now, I think AIs (e.g. LLMs and other advanced automation systems) *are* a problem. They're going to destroy jobs at a rate we can't possibly keep up with in an "if you don't work, you don't eat" society. But nobody
  • The better strategy, and hopefully it's being implemented, would be a Manhattan Project to build an AI platform that isn't being created by nefarious actors. If it were done right, we'd be buying up the best minds around the world and putting them to work in a park-like lab such as Bletchley or Xerox PARC, camouflaged as something boring, like an insurance company or a luxury tech company.

    Pay them well and enable them to do something more ethics based, maybe more aligned with the vision of cybernetics tha
  • It's not true AI unless it comes from one of the consortium that signed this letter; otherwise it's just sparkling expert systems.
  • Last I checked, humanity was already doomed. At least AI will at worst make said doomage more interesting, and at best help us to avoid aforesaid doomage. Of course, we'd have to evolve before that happens, and therein lies the rub when AI tech is evolving faster than us.
