Comment Was SARS-CoV-2 an example of "harms"? (Score 1) 38

What do you think of the idea that SARS-CoV-2 was perhaps engineered in the USA as a DARPA-funded self-spreading vaccine intended for bats to prevent zoonotic outbreaks in China but then accidentally leaked out when being tested by a partner in Wuhan who had a colony of the bats the vaccine was intended for? More details on the possible "who what when how and why" of all that:
https://dailysceptic.org/2024/...

If true, it provides an example of how dangerous this sort of gain-of-function bio-engineering of viruses can be (even if perhaps well-intended by the people involved). Also on that theme from 2014:
"Threatened pandemics and laboratory escapes: Self-fulfilling prophecies"
https://thebulletin.org/2014/0...
"Looking at the problem pragmatically, the question is not if such escapes will result in a major civilian outbreak, but rather what the pathogen will be and how such an escape may be contained, if indeed it can be contained at all."

My worry with SARS-CoV-2 from the start (working in the biotech industry at the time, including by chance helping track the evolution of SARS-CoV-2) was that so much effort would go into researching the virus and understanding why it was so transmissible and dangerous (mainly to older people) that such knowledge could be misused by humans to make worse viruses. Sadly, AI now accelerates that risk (as in the video I linked to).

Here is Eric Schmidt recently saying essentially the same thing about the risk of AI being used to create pathogens for nefarious purposes, and how he and others are very worried about it:
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://www.youtube.com/watch?...

Comment AI and job replacement (Score 1) 74

I don't know if "enthusiastic about the tech" completely describes my feelings about a short-term employment threat and a longer-term existential threat, but, yeah, neat stuff. Kind of like respecting the cleverness and potential danger of a nuclear bomb or a black widow spider?

A related story I submitted: "The Workers Who Lost Their Jobs To AI "
https://news.slashdot.org/stor...

Sure, maybe there is some braggadocio in Hinton somewhere, which we all have. But he just does not come across much that way to me. He seems genuinely somewhat penitent in how he talks. Hinton at age 77 sounds to me more like someone who had an enjoyable career in academia building neat things no one thought he could, who just wants to retire from the limelight (as he says in the video) but feels compelled to spread a message of concern. I could be wrong about his motivation, perhaps.

On whether AI exists, I've seen about four decades of things that were once "AI" no longer being considered AI once computers could do them (as others have noted before me). That has been everything from answering text questions about moon rocks, to playing chess, to composing music, to reading printed characters in books, to recognizing the shape of 3D objects, to driving a car, to now generating videos, and more.

An example of the last, a video which soon will probably no longer be thought of as involving "AI":
""it's over, we're cooked!" -- says girl that literally does not exist..."
https://www.reddit.com/r/singu...

On robots vs. chimps, robots have come a long way since, say, I saw one of the first hopping robots in Marc Raibert's lab at CMU in 1986 (and then saw it again later when visiting the MIT Museum with my kid). An example of what Marc Raibert and associates (now Boston Dynamics) have since achieved after forty years of development:
"Boston Dynamics New Atlas Robot Feels TOO Real and It's Terrifying!"
https://www.youtube.com/watch?...

Some examples from this search:
https://duckduckgo.com/?q=robo...

"I Witnessed the MOST ADVANCED Robotic Hand at CES 2025"
https://www.youtube.com/watch?...

"Newest Robotic Hand is Sensitive as Fingertips"
https://www.youtube.com/watch?...

"ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand - IROS 2025 Paper Video"
https://www.youtube.com/watch?...

What's going on in China:
"China's First Robot With Human Brain SHOCKED The World at FAIR Plus Exhibition"
https://www.youtube.com/watch?...

Lots more stuff out there. So if by "long time" on achieving fine motor control you mean by last year, well, maybe. :-) I agree with you though that the self-replicating part is still quite a ways off. Inspired by James P. Hogan's "The Two Faces of Tomorrow" (1978) and NASA's Advanced Automation for Space Missions (1980), I tried (grandiosely) to help with that self-replicating part -- with little success so far:
https://pdfernhout.net/princet...
https://pdfernhout.net/sunrise...
https://kurtz-fernhout.com/osc...

On who will buy stuff, it is perhaps a capitalist version of the "tragedy of the commons". Every company thinks it will get the first-mover advantage by firing most of its workers and replacing them with AI and robots. They don't think past the next quarter or, at best, the next year. Who will pay for products, or who will pay unemployed workers to survive for decades, is someone else's problem.

See the 1950s sci-fi story "The Midas Plague" for some related humor on dealing with the resulting economic imbalance. :-)
https://en.wikipedia.org/wiki/...
""The Midas Plague" (originally published in Galaxy in 1954). In a world of cheap energy, robots are overproducing the commodities enjoyed by humankind. The lower-class "poor" must spend their lives in frantic consumption, trying to keep up with the robots' extravagant production, while the upper-class "rich" can live lives of simplicity. Property crime is nonexistent, and the government Ration Board enforces the use of ration stamps to ensure that everyone consumes their quotas. ..."

Related on how in the past the Commons were surprisingly well-managed anyway:
https://en.wikipedia.org/wiki/...
"In Governing the Commons, Ostrom summarized eight design principles that were present in the sustainable common pool resource institutions she studied ... Ostrom and her many co-researchers have developed a comprehensive "Social-Ecological Systems (SES) framework", within which much of the still-evolving theory of common-pool resources and collective self-governance is now located."

Marc Andreessen might disagree with some of those principles and have his own?
https://en.wikipedia.org/wiki/...
"The "Techno-Optimist Manifesto" is a 2023 self-published essay by venture capitalist Marc Andreessen. The essay argues that many significant problems of humanity have been solved with the development of technology, particularly technology without any constraints, and that we should do everything possible to accelerate technology development and advancement. Technology, according to Andreessen, is what drives wealth and happiness.[1] The essay is considered a manifesto for effective accelerationism."

I actually like most of what Marc has to say -- except he probably fundamentally misses "The Case Against Competition" and why AI produced through capitalist competition will likely doom us all:
https://www.alfiekohn.org/arti...
"Children succeed in spite of competition, not because of it. Most of us were raised to believe that we do our best work when we're in a race -- that without competition we would all become fat, lazy, and mediocre. It's a belief that our society takes on faith. It's also false."

I agree AI can be overhyped. But then I read somewhere that so were the early industrial power looms -- which were used more as a threat to keep wages down and working conditions poor, since otherwise the factory owners would bring in the (expensive-to-install) looms.

Good luck and have fun with your project! Pessimistically, perhaps it has already "succeeded" if just knowing about it has made company workers nervous enough that they are afraid to ask for raises or more benefits? Optimistically though, it may instead mean the company will be more successful and can afford to pay more to retain skilled workers who work well with AI? I hope for you it is more of the latter than the former.

Something I posted a while back though on how AI and robotics can provide an illusion of increasing employment by helping one company grow while its competitors shrink: https://slashdot.org/comments....

Or from 2021:
https://www.forbes.com/sites/j...
"According to a new academic research study, automation technology has been the primary driver in U.S. income inequality over the past 40 years. The report, published by the National Bureau of Economic Research, claims that 50% to 70% of changes in U.S. wages, since 1980, can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation."

But in an (unregulated, mostly-non-unionized) capitalist system, what choice do most owners or employees (e.g. you) really have but to embrace AI and robotics -- to the extent it is not hype -- and race ahead?

Well, owners and employees could also expand their participation in subsistence, gift, and planned transactions as fallbacks -- but that is a big conceptual leap and still does not change the exchange-economy imperative. A video and an essay I made on that:
"Five Interwoven Economies: Subsistence, Gift, Exchange, Planned, and Theft"
https://www.youtube.com/watch?...
"Beyond a Jobless Recovery: A heterodox perspective on 21st century economics"
https://pdfernhout.net/beyond-...

Anyway, I'm probably just starting to repeat myself here.

Comment Re:Could we "pull the plug" on networked computers (Score 1) 74

Thanks for the conversation. On your point "And why want it in the first place?": that is an insightful point, and I wish more people would think about it. Frankly, I don't think we need AI of any great sort right now, even if it is hard to argue with the value of some current AI systems like machine vision for parts inspection. Most of the "benefits" AI advocates trot out (e.g. solving world hunger, or global climate change, or cancer, or whatever) are generally issues that have to do with politics and economics (e.g. there is enough food to go around but poor people can't pay for its broad distribution; renewable energy can power all our needs and would be cheaper if fossil fuels had to pay true costs up front, including defense costs; most cancer is from diet and lifestyle and toxins, which are problems because of mis-incentives like subsidizing ultraprocessed foods; etc.). I am hard-pressed to think of any significant benefits from AI that could not be replaced by just having better social policies (including ones that fund more human-involved R&D).

=== Some other random thoughts on all this

I just finished watching this interview of Geoffrey Hinton which touches on some of the points discussed here:
"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://www.youtube.com/watch?...

It is a fantastic interview that anyone interested in AI should watch.

Some nuances missed there though:

* My sig is a huge piece of his message on AI safety: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." (As a self-professed socialist Hinton at least asks for governments to be responsible to public opinion on AI safety and social safety nets -- but that still does not quite capture the idea in my sig which concerns a more fundamental issue than just prudence or charity)

* That you have to assume finite demand in a finite world by finite beings over a finite time for all goods and services for there to be job loss (which I think is true, but was not stated, as mentioned by me here: https://pdfernhout.net/beyond-... ).

* That quality demands on products might go up with AI and absorb much more labor (i.e. doctors who might before reliably cure 99% of patients might be expected to cure 99.9%, where that extra 0.9% might take ten or a hundred times as much work)

* That his niece, who he mentioned used AI to go from answering one medical complaint in 35 minutes to only 5 minutes, could in theory now be paid seven times more but probably isn't -- so who got the benefits (Marshall Brain's point on wealth concentration) -- unless quality was increased.

* That while people like him and the interviewer may thrive on a "work" purpose (and will suffer in that sense if ASI can do everything better), for most people the purpose of raising children and being a good friend and neighbor and having hobbies and spending time making health choices might be purpose enough.

* (Hinton touches on this, but to amplify) That right now there is room for many good-enough workers in any business because there is only one best worker and that one person can't be everywhere doing everything. But (ignoring the value in diversity) if you can just copy the best worker, and not pay the copies, then there is no room for any but the best worker. And worse, there is no room for even the best human worker if you can just employ the copies without employing the original once you have copied them. As Hinton said, digital intelligence means you can make (inexpensive) copies of systems that have already learned what they need to know -- and digital intelligence can share information a billion times faster than humans.

My undergrad advisor in cognitive psychology (George A. Miller) passed around one of Hinton's early papers circa 1984. And George (who liked puns) joked "What are you hintin' at?" when I used "hinton" as part of a password for a shared computer account. Hinton and I must have overlapped when I was visiting CMU circa 1985-1986, but I don't recall offhand talking with him then. I think I would have enjoyed talking to him though (more so, in a way, than with Herbert Simon, who as a Nobel Prize winner was then hard to get a few short meetings with -- one reason winning the Nobel Prize tends to destroy productivity). Hinton seems like a very nice guy -- even if he worries his work might (unintentionally) spell doom for us all. Although he does say how raising two kids as a single parent changed him and made him essentially more compassionate and so on -- so maybe he is a nicer guy now than back then? In any case, I can be glad my AI career (such as it was) took a different path than his, with me spending more time thinking about the social implications than the technical implementations (in part out of concerns about robots replacing humans that arose from talking with people in Hans Moravec's labs -- where I could see that, say, self-replicating robotic cockroaches deployed for military purposes could potentially wipe out humanity and then perhaps collapse themselves, instead of our successors being Hans' idealized "mind children" exploring the universe in our stead, as in his writings mentioned below).

While Hinton does not go into it in detail in that interview, there is a reason his intuition on neural networks ultimately proved productive -- a Moore's Law increase in computing capacity that made statistical approaches to AI more feasible:
https://www.datasciencecentral...
" Recently I came across an explanation by John Launchbury, the Director of DARPA's Information Innovation Office who has a broader and longer term view. He divides the history and the future of AI into three ages:
1. The Age of Handcrafted Knowledge
2. The Age of Statistical Learning
3. The Age of Contextual Adaptation."

Also related to that, from Hans Moravec in 1999 (at whose CMU lab I had been a visitor over a decade earlier):
https://faculty.umb.edu/gary_z...
"By 2050 robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence"
"In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.
        The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory. In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 1,000 in high-end desktop machines. Apple's new iBook laptop computer, with a retail price at the time of this writing of $1,600, achieves more than 500 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability. ...
      One thousand MIPS is only now appearing in high-end desktop PCs. In a few years it will be found in laptops and similar smaller, cheaper computers fit for robots. To prepare for that day, we recently began an intensive [DARPA-funded] three-year project to develop a prototype for commercial products based on such a computer. We plan to automate learning processes to optimize hundreds of evidence-weighing parameters and to write programs to find clear paths, locations, floors, walls, doors and other objects in the three-dimensional maps. We will also test programs that orchestrate the basic capabilities into larger tasks, such as delivery, floor cleaning and security patrol. ...
        Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today's theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits and so on.
        Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today's comfortable retirees or the wealthy leisure classes.
        The path I've outlined roughly recapitulates the evolution of human intelligence -- but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!"
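As a side note on the arithmetic behind Moravec's projection: a back-of-the-envelope sketch (assuming an 18-month doubling time, which is my assumption here rather than Moravec's own more detailed curves) shows why "well before 2050" was, if anything, conservative by his own numbers:

```python
import math

# Moravec's figures: ~1,000 MIPS in high-end desktop PCs circa 1999,
# and ~100 million MIPS posited for "humanlike" fourth-generation robots.
start_year = 1999
start_mips = 1_000
target_mips = 100_000_000

# Assumed 18-month doubling time (a common Moore's Law rule of thumb,
# not Moravec's exact extrapolation).
doubling_years = 1.5

doublings = math.log2(target_mips / start_mips)  # ~16.6 doublings needed
year_reached = start_year + doublings * doubling_years
print(f"{doublings:.1f} doublings -> around {year_reached:.0f}")  # ~16.6 doublings -> around 2024
```

Of course, raw MIPS says nothing about whether the software side keeps pace -- that was always the contested part of the prediction.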

Comment Follow the money (Score 0) 167

Interesting FP branch, but I think it mostly missed the boat. I think it's mostly about the money and that's how we got fscked.

There was a kind of "golden age" of journalism, but I would argue that was an aberration linked to a weird financial model. It started with radio, where the scarce frequencies created a temporary monopoly and the government licensed that monopoly with mandates for "free" news. That created the illusion that news could be separated from who was paying for it. Two interesting contrasts: (1) In newspapers before radio, the ads were fully visible, so readers could assess the money flows. (2) TV started with the radio model of journalism, but cable and the Internet broke the funding model.

I largely blame "60 Minutes" for the first successful "for-profit" model of journalism. CNN showed where that leads and FAUX figured out how to disguise the real advertisers behind fake ones...

I used to fantasize about solution-oriented journalism as a new funding approach, but now I think "We can't get there from here", where "here" is any substantially better state of journalism.

Comment Re:Bullshit task for the bullshit generator (Score 0) 29

Mod grandparent funny, though the parent Subject seems better for the joke? A general misfiring of the Funny on Slashdot these years?

On the story, I think the "use" should be in the more active sense, but 'I don't have to work no more' [sic], so that part of the story flies over my head. Still, I sometimes find AI a useful tool and use it accordingly. About two days ago an ancient website crashed and I couldn't get some information I wanted. So I used an AI to quickly create a little tool to generate the information locally. For what it's worth, the new local version runs faster than the website version, though I see the website has recovered again...

I actually tried to create the tool myself, but my first attempt failed, so I invoked the AI. Its response clarified the security risk of my initial solution approach, so I can't complain about the AI's recommended approach.

Comment Re:What? [does it profit the fool?] (Score 0) 284

Sure it's legal. Just read about it in the newspaper. You still subscribe to a newspaper, right?

No dead tree? Okay, so look it up on your smartphone. Oh wait. You don't have the right smartphone. Yet.

I would like to see some historical research correlating mentions of national leaders with various job-related metrics and divided up by geography.

Easy example: Little Kim gets LOTS of extremely favorable public mentions in North Korea. From near zero to HUGE in a few days (when he became established as the successor)--and at some point his mentions will approach zero again. But outside of North Korea? Not so many and not so favorable. Ever.

The YOB case is (relatively) interesting because he had quite a lot of name recognition even before 2015. Media and even book references that still surprise me. Like ghosts from the past? (To be compared with books written before and after perpetual September? Circa 1995?)

Comment Re:So [Captain obvious is calling] (Score 1) 73

My reaction to the story was "Tell us something we didn't know." News is supposed to have some element of novelty in it. You know, novelty as in new.

However, I think the phishing scams disguised as fake upgrades are more annoying, and probably more dangerous, since the sucker is primed to expect something to get installed. As regards this story I thought there might be an element of novelty in it. Perhaps a new scammer's pitch to enter your credit card number to validate the unsubscribe request? Something along those lines.

Solutions time? Why do I persist in hoping the direction of criminal change in the Web can be shifted?

I keep imagining a website that helps potential suckers aggregate the targeting data so the scammers can be found and stopped more quickly. Hopefully definitively, too, as in throw them into that lovely prison in El Salvador. Get some good out of it?

So now to flog that dead horse!

The basic idea would be an iterative website where you would paste the scam and then help parse the meaning to guide the response. Of course these days it would be enhanced with AI, but the key idea is that each iteration would clarify what is going on and what should be done about it. Per this specific story, that so-called unsubscribe link would be studied to see how malicious it is and the human being in the loop would confirm the threat or provide feedback about what the website got wrong. And of course the website would be amalgamating the results to provide stats that guide the prioritization of the responses. A dangerous new threat that is producing lots of reports needs to be dealt with ASAP, though I doubt the "new threat" of this story would merit much priority.

More details available if someone is interested. NOT a new idea. Or let's hear your better solution approach. I'm sure you have a big wad of better ideas stuffed in a pocket somewhere.

(But actually my primary focus right now was provoked by that awful book Science Fictions by Stuart Ritchie... Linkage is complicated, but now I want to see some exploratory research on how much and in what ways each nation's top leader is mentioned in the media over time. Easy example: Little Kim of North Korea. LOTS of favorable coverage inside and not much mention outside, with what there is being not so favorable. Any leads?)

Comment Re:Could we "pull the plug" on networked computers (Score 1) 74

Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.

I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://duckduckgo.com/?q=exam...

Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://www.techopedia.com/tim...

Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://www.bbc.co.uk/news/tec...

Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://www.buzzfeed.com/mikes...

Sure, these are not quite the same as "AI-powered robots shooting everyone." The fact that "AI" of some sort is involved is incidental compared to plain algorithms (computer-supported or not) that have been in use for decades, such as those used to redline sections of cities to prevent issuing mortgages.

Of course there are examples of robots killing people with guns, but they are still unusual:
https://theconversation.com/an...
https://www.npr.org/2021/06/01...
https://www.reddit.com/r/Futur...
https://slashdot.org/story/07/...

These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://en.wikipedia.org/wiki/...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."

But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (the first items).

Is there money to be made by fear mongering? Yes, I have to agree you are right on that.

Is *all* the worry about AI profit-driven fear mongering -- especially the worry about concentration of wealth and power through what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc.)?

I think there are legitimate (and increasing) concerns similar to and worse than the ones, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- and generally not issues from malice or social bias implemented in part intentionally by humans. Although one ending of a "Giants" book (Entoverse, I think; it has been a long time) does involve AI in league with the heroes doing unexpected stuff by providing misleading synthetic information, to humorous effect.

Of course, our lives in the USA have been totally dependent for decades on 1970s era Soviet "Dead Hand" technology that the US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life. :-)
https://en.wikipedia.org/wiki/...

It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture of much of human survival being dependent on a multi-party global system working as expected to avoid accidents...

Two other USSR citizens we can thank for our current life in the USA: :-)

https://en.wikipedia.org/wiki/...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."

https://en.wikipedia.org/wiki/...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."

There is even a catchy pop tune related to the last item: :-)
https://en.wikipedia.org/wiki/...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."

If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI will we as a global society be better off?

Here is a professor (Alain Kornhauser) I worked with on AI and robots and self-driving cars in the second half of the 1980s commenting recently on how self-driving cars are already safer than human-operated cars by a factor of 10X in many situations based on Tesla data:
https://www.youtube.com/watch?...

But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.

In general, AI is a complex unpredictable thing (especially now) and "simple" seems like a prerequisite for reliability (for all of military, social, and financial systems):
https://www.infoq.com/presenta...
"Rich Hickey emphasizes simplicity's virtues over easiness', showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

Given that we as a society are pursuing a path of increasing complexity and related risk (including of global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler better-understood locally-focused resilient infrastructures (to little success, sigh).
https://pdfernhout.net/princet...
https://pdfernhout.net/sunrise...
https://kurtz-fernhout.com/osc...
https://pdfernhout.net/recogni...

Example of related fears from my reading too much sci-fi: :-)
https://kurtz-fernhout.com/osc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"

So, AI out of control is just one of those concerns...

So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).

Comment How many websites are the AI spiders killing? (Score 1) 57

Kind of a new Slashdot effect? I think I'm actually seeing some evidence of higher-than-usual mortality among old websites, and I've been wondering if the cause might be AI spiders seeking more training data. Latest victim might be Tripod? But that one was already a ghost zombie website...
