Comment Re:Could we "pull the plug" on networked computers (Score 1) 72
"huge piece" should be "huge missing piece".
I saw that type when reposting some of that content here:
https://www.reddit.com/r/singu...
"huge piece" should be "huge missing piece".
I saw that type when reposting some of that content here:
https://www.reddit.com/r/singu...
Thanks for the conversation. On your point "And why want it in the first place?" That is an insightful point, and I wish more people would think about that. Frankly, I don't think we need AI of any great sort right now, even if it is hard to argue with the value of some current AI systems like machine vision for parts inspection. Most of the "benefits" AI advocates trot out (e.g. solving world hunger, or global climate change, or cancer
=== Some other random thoughts on all this
I just finished watching this interview of Geoffrey Hinton which touches on some of the points discussed here:
"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://www.youtube.com/watch?...
It is a fantastic interview that anyone interested in AI should watch.
Some nuances missed there though:
* My sig is a huge missing piece of his message on AI safety: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." (As a self-professed socialist, Hinton at least asks for governments to be responsible to public opinion on AI safety and social safety nets -- but that still does not quite capture the idea in my sig, which concerns a more fundamental issue than just prudence or charity.)
* That you have to assume finite demand in a finite world by finite beings over a finite time for all goods and services for there to be job loss (which I think is true, but was not stated, as mentioned by me here: https://pdfernhout.net/beyond-... ).
* That quality demands on products might go up with AI and absorb much more labor (i.e. doctors who might before reliably cure 99% of patients might be expected to cure 99.9%, where that extra 0.9% might take ten or a hundred times as much work)
* That his niece he mentioned, who used AI to go from answering one medical complaint in 35 minutes to only 5 minutes, could in theory now be paid 5 times more but probably isn't -- so who got the benefits (Marshall Brain's point on wealth concentration) -- unless quality was increased.
* That while people like him and the interviewer may thrive on a "work" purpose (and will suffer in that sense if ASI can do everything better), for most people the purpose of raising children and being a good friend and neighbor and having hobbies and spending time making health choices might be purpose enough.
* (Hinton touches on this, but to amplify) That right now there is room for many good-enough workers in any business because there is only one best worker, and that one person can't be everywhere doing everything. But (ignoring the value in diversity) if you can just copy the best worker, and not pay the copies, then there is no room for any but the best worker. And worse, there is no room for even the best human worker if you can just employ the copies without employing the original once you have copied them. As Hinton said, digital intelligence means you can make (inexpensive) copies of systems that have already learned what they need to know -- and digital intelligence can share information a billion times faster than humans.
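To make that copying point concrete, here is a toy numeric sketch (my own illustration, not from the interview; all the wage and productivity figures are made-up assumptions). It compares cost per unit of output for human workers of varying productivity against inexpensive digital copies of the single best worker:

    # Toy model: humans of varying productivity vs. cheap copies of the
    # single best worker. All numbers are made-up assumptions.
    human_wage = 50_000        # assumed annual cost of one human worker
    copy_cost = 5_000          # assumed annual cost of running one copy
    best_productivity = 1.5    # output rate of the single best worker

    copy_cost_per_output = copy_cost / best_productivity

    for productivity in [1.0, 1.1, 1.2, 1.3, 1.5]:
        human_cost_per_output = human_wage / productivity
        verdict = ("still competitive"
                   if human_cost_per_output <= copy_cost_per_output
                   else "undercut by a copy")
        print(f"worker at {productivity:.1f}x output: {verdict}")

With these assumed numbers even the best human (at 1.5x) costs ten times more per unit of output than a copy of themselves -- the "no room for any but the best worker, and then not even them" dynamic in miniature.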
My undergrad advisor in cognitive psychology (George A. Miller) passed around one of Hinton's early papers circa 1984. And George (who liked puns) joked "What are you hintin' at?" when I used "hinton" as part of a password for a shared computer account. Hinton and I must have overlapped when I was visiting CMU circa 1985-1986, but I don't recall offhand talking with him then. I think I would have enjoyed talking to him though (more so in a way than Herbert Simon, who as a Nobel Prize winner was then hard to get even a few short meetings with -- one reason winning the Nobel Prize tends to destroy productivity). Hinton seems like a very nice guy -- even if he worries his work might (unintentionally) spell doom for us all. Although he does say how raising two kids as a single parent changed him and made him essentially more compassionate and so on -- so maybe he is a nicer guy now than back then? In any case, I can be glad my AI career (such as it was) took a different path than his, with me spending more time thinking about the social implications than the technical implementations (in part out of concerns about robots replacing humans that arose from talking with people in Hans Moravec's lab -- where I could see that, say, self-replicating robotic cockroaches deployed for military purposes could potentially wipe out humanity and then perhaps collapse themselves, instead of our successors being Hans' idealized "mind children" exploring the universe in our stead, like in his writings mentioned below).
While Hinton does not go into it in detail in that interview, there is a reason his intuition on neural networks was ultimately productive -- a Moore's Law increase in computing capacity that made statistical approaches to AI more feasible:
https://www.datasciencecentral...
" Recently I came across an explanation by John Launchbury, the Director of DARPA's Information Innovation Office who has a broader and longer term view. He divides the history and the future of AI into three ages:
1. The Age of Handcrafted Knowledge
2. The Age of Statistical Learning
3. The Age of Contextual Adaptation."
Also related to that, from Hans Moravec in 1999 (at whose CMU lab I had been a visitor over a decade earlier):
https://faculty.umb.edu/gary_z...
"By 2050 robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence"
"In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.
The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory. In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 1,000 in high-end desktop machines. Apple's new iBook laptop computer, with a retail price at the time of this writing of $1,600, achieves more than 500 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability.
One thousand MIPS is only now appearing in high-end desktop PCs. In a few years it will be found in laptops and similar smaller, cheaper computers fit for robots. To prepare for that day, we recently began an intensive [DARPA-funded] three-year project to develop a prototype for commercial products based on such a computer. We plan to automate learning processes to optimize hundreds of evidence-weighing parameters and to write programs to find clear paths, locations, floors, walls, doors and other objects in the three-dimensional maps. We will also test programs that orchestrate the basic capabilities into larger tasks, such as delivery, floor cleaning and security patrol.
Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today's theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits and so on.
Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today's comfortable retirees or the wealthy leisure classes.
The path I've outlined roughly recapitulates the evolution of human intelligence -- but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!"
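As a back-of-envelope check on that timeline (my arithmetic, not Moravec's), one can assume compute per machine doubles every 18 months starting from his 1999 figure of 1,000 MIPS in a high-end desktop, and ask when his "humanlike" 100 million MIPS would arrive:

    import math

    # Assumed Moore's-law projection from Moravec's 1999 baseline.
    baseline_year = 1999
    baseline_mips = 1_000            # high-end desktop PC, per Moravec
    target_mips = 100_000_000        # his "humanlike" fourth generation
    doubling_period_years = 1.5      # assumed doubling period

    doublings = math.log2(target_mips / baseline_mips)
    arrival = baseline_year + doublings * doubling_period_years
    print(f"{doublings:.1f} doublings -> around the year {arrival:.0f}")
    # ~16.6 doublings -> around 2024

On those assumptions the raw-MIPS crossover lands in the mid-2020s, well before Moravec's 2050 -- a reminder both that doubling periods drift and that raw MIPS was never going to be the whole story.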
Good point on the "benevolent dictator fantasy".
I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://duckduckgo.com/?q=exam...
Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://www.techopedia.com/tim...
Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://www.bbc.co.uk/news/tec...
Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://www.buzzfeed.com/mikes...
Sure, these are not quite the same as "AI-powered robots shooting everyone." The fact that "AI" of some sort is involved is incidental compared to plain algorithms (computer-supported or not) that have been in use for decades -- like those used to redline sections of cities to prevent issuing mortgages.
Of course there are examples of robots killing people with guns, but they are still unusual:
https://theconversation.com/an...
https://www.npr.org/2021/06/01...
https://www.reddit.com/r/Futur...
https://slashdot.org/story/07/...
These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://en.wikipedia.org/wiki/...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."
But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (as in the first items above).
Is there money to be made by fear mongering? Yes, I have to agree you are right on that.
Is *all* the worry about AI profit-driven fear mongering -- especially about concentration of wealth and power by what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc)?
I think there are legitimate (and increasing) concerns similar to and worse than the ones, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- and generally not issues from malice or from social biases implemented in part intentionally by humans. Although one ending of a "Giants" book (Entoverse I think, been a long time) does involve AI in league with the heroes doing unexpected stuff by providing misleading synthetic information, to humorous effect.
Of course, our lives in the USA have been totally dependent for decades on 1970s era Soviet "Dead Hand" technology that the US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life.
https://en.wikipedia.org/wiki/...
It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...
Two other USSR citizens we can thank for our current life in the USA:
https://en.wikipedia.org/wiki/...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."
https://en.wikipedia.org/wiki/...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."
There is even a catchy pop tune related to the last item:
https://en.wikipedia.org/wiki/...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."
If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI, would we as a global society be better off?
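For concreteness, the rule Petrov applied by judgment can be written as a tiny decision sketch (my illustration, not any real launch-warning code; the five-missile plausibility threshold is an assumption): require both plausibility and independent corroboration before escalating, with "wait" as the default.

    # Minimal sketch of Petrov-style corroboration, not real warning logic.
    def should_escalate(satellite_alarm: bool,
                        ground_radar_contact: bool,
                        reported_missile_count: int) -> bool:
        # Part of Petrov's actual reasoning: a real first strike would be
        # massive, so a handful of reported missiles is implausible.
        plausible = reported_missile_count > 5
        # Never escalate on a single sensor channel; require independent
        # confirmation from a second system.
        corroborated = satellite_alarm and ground_radar_contact
        return plausible and corroborated

    # The 1983 incident: satellite alarm, five missiles reported, no
    # radar contact -- so this sketch, like Petrov, waits.
    print(should_escalate(satellite_alarm=True,
                          ground_radar_contact=False,
                          reported_missile_count=5))   # -> False

The point of the sketch is the default: when the inputs disagree, do nothing and wait for corroborating evidence.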
Here is a professor (Alain Kornhauser) I worked with on AI and robots and self-driving cars in the second half of the 1980s commenting recently on how self-driving cars are already safer than human-operated cars by a factor of 10X in many situations based on Tesla data:
https://www.youtube.com/watch?...
But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.
In general, AI is a complex unpredictable thing (especially now) and "simple" seems like a prerequisite for reliability (for all of military, social, and financial systems):
https://www.infoq.com/presenta...
"Rich Hickey emphasizes simplicityâ(TM)s virtues over easinessâ(TM), showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."
Given that we as a society are pursuing a path of increasing complexity and related risk (including of global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler better-understood locally-focused resilient infrastructures (to little success, sigh).
https://pdfernhout.net/princet...
https://pdfernhout.net/sunrise...
https://kurtz-fernhout.com/osc...
https://pdfernhout.net/recogni...
Examples of related fears from my reading too much sci-fi:
https://kurtz-fernhout.com/osc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"
So, AI out of control is just one of those concerns...
So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).
Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and also in contact with AI luminaries (like Marvin Minsky), writes about AI and robotics.
From the Manga version of "The Two Faces of Tomorrow":
"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://mangadex.org/chapter/3...
"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://mangadex.org/chapter/4...
Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://www.bbc.com/news/artic...
I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer uses only printed letters with enclosed checks sent to companies to change the world to its preferences. It was insightful even back then to see how a computer could just hijack our socio-economic system to its own benefit.
Arguably, modern corporations are a form of machine intelligence even if some of their components are human. I wrote about this in 2000:
https://dougengelbart.org/coll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."
So, as another question, how easily can we-the-people "pull the plug" on corporations these days? I guess there are examples (Theranos?) but they seem to have more to do with fraud -- rather than with a company found pursuing the capitalist ideal of privatizing gains while socializing risks and costs.
It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. And meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://www.youtube.com/watch?...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."
Maybe we don't have an AI issue as much as a corporate governance issue? Which circles around to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
I truly wish you were right about all the fear mongering about AI being a scam.
It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and (to a lesser extent) plagues are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (and with a risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's kind of like if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which thankfully they can't). But how do you "pull the plug" on all cars -- especially if a transition from acting as a faithful companion to a "Christine"-style killer car happens overnight? Or if even just all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?
If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019 on how AI was then already so much in our lives:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://www.forbes.com/sites/b...
A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).
Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (a network that most people have come to depend on, and which embodies AI being used to do many tasks) may be a huge challenge, if not an impossible one. He suggested a network can get smarter over time and unintentionally develop a survival instinct as a natural aspect of trying to remain operational to do its purported primary function in the face of random power outages (like from lightning strikes).
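As a minimal illustration of that mechanism (my sketch, not Hogan's fictional system; service.py is a hypothetical stand-in), consider an ordinary watchdog loop. Nothing in it "wants" to survive, yet from the outside the supervised service resists being switched off:

    import subprocess
    import time

    # Watchdog sketch: restart the supervised service after any outage.
    SERVICE_CMD = ["python3", "service.py"]   # hypothetical service

    def supervise() -> None:
        while True:
            proc = subprocess.Popen(SERVICE_CMD)  # start (or restart) it
            proc.wait()             # returns only when the service dies
            time.sleep(1)           # brief backoff, then "self-repair"

    if __name__ == "__main__":
        supervise()

An operator who kills service.py just sees it come back; to really stop it they must also find and stop the supervisor -- and whatever, in turn, restarts the supervisor. Scale that pattern up across a redundant network with repair drones and you have Hogan's scenario.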
But even if we wanted to turn off AI, would we? As a (poor) analogy: while there have been brief periods where the global internet supporting the world wide web has been restricted in some specific places, and there is some selective filtering of the internet continuously ongoing in various nations (usually to give preference to local national web applications), would we be likely to turn off the global internet at this point even if it were somehow proven to produce great harms? We are so dependent on the internet for day-to-day commerce as well as, sigh, entertainment (i.e. so much "news") that I wonder whether such a shutdown is even possible collectively now. The issue there is not technical (yes, IT server farm administrators and individual consumers with home PCs and smartphones could turn off every networked computer in theory) but social (would people do it?).
Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://theworthyhouse.com/202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes.
But Greer's novel still seems like a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon and loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do of course, especially if the rest of their life improves in various ways for whatever reasons.
Also some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more-easily centralized technologies (which is a concentration-of-wealth point in some ways rather than a strictly technological point). Contrast with the independent high-tech self-maintaining AI cybertanks in the Old Guy Cybertank novels who have built a sort-of freedom-emphasizing yet cooperative democracy (in the absence of humans).
In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).
Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit.""
https://futurism.com/ai-godfat...
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive.
"This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment."
A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply.
"This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."
The concept of building "safe" AI is far from new, of course -- it's quite literally why several OpenAI researchers left OpenAI and founded Anthropic as a rival research lab.
This one seems to be different because, unlike Anthropic, OpenAI, or any other companies that pay lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit -- though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others."
Yoshua Bengio seems like someone at least trying to make AI "scientists" from a cooperative abundance perspective, rather than creating more competitive AI agents.
Of course, even that could go horribly wrong if the AI misleads people subtly.
From 1957: "A ten-year-old boy and Robby the Robot team up to prevent a Super Computer [which provided misleading outputs] from controlling the Earth from a satellite."
https://www.imdb.com/title/tt0...
From 1992: "A Fire Upon the Deep", on an AI that misleads people exploring an old archive, who thought their exploratory AI work was airgapped and firewalled as they built the advanced automation the AI suggested:
https://en.wikipedia.org/wiki/...
Lots of other sci-fi examples of deceptive AI exist (like in the Old Guy Cybertank series, and more). The worst are along the lines of a human (e.g. Dr. Smith of "Lost in Space") intentionally programming the AI (or AI-powered robot) to be harmful to others for that person's intended benefit.
Or sometimes (like in a Bobiverse novel, spoiler) a human may bypass a firewall and unleash an AI out of a sense of worshipful goodwill, to unknown consequences.
But at least the AI Scientist approach of Yoshua Bengio is not *totally* stupid in the way that a reckless race to create competitive commercial super-intelligent AIs surely is.
Some dark humor on that (with some links fixed up):
https://slashdot.org/comments....
====
[People are] right to be skeptical on AI. But I can also see that it is so seductive as a "supernormal stimulus" that it will have to be dealt with one way or another. Some AI-related dark humor by me:
* Contrast Sergey Brin this year:
https://finance.yahoo.com/news...
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology.
* With a Monty Python sketch from decades ago:
https://genius.com/Monty-pytho...
https://www.youtube.com/watch?...
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event.
Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
(The only thing that worries me, Jim, is being the first one down that gullet.)"
====
If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
That is because our direction out of any singularity may have something to do with our moral direction going into it. So we desperately need to build a more inclusive, joyful, and healthy society right now.
But if we just continue extreme competition as usual between businesses and nations (especially for creating super-intelligent AI), then we are likely "cooked":
"it's over, we're cooked!" -- says [AI-generated] girl that literally does not exist (and she's right!)
https://www.reddit.com/r/singu...
As just one example, here is Eric Schmidt essentially saying that we are probably doomed if AI is used to create biowarfare agents (which it almost certainly will be if we don't change our scarcity-based perspective on using these tools of abundance):
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://www.youtube.com/watch?...
Alternatives: https://pdfernhout.net/recogni...
"There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" [and economic] agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all."
And also: https://pdfernhout.net/beyond-...
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
See also "The Case Against Competition" by Alfie Kohn:
https://www.alfiekohn.org/arti...
"This is not to say that children shouldn't learn discipline and tenacity, that they shouldn't be encouraged to succeed or even have a nodding acquaintance with failure. But none of these requires winning and losing -- that is, having to beat other children and worry about being beaten. When classrooms and playing fields are based on cooperation rather than competition, children feel better about themselves. They work with others instead of against them, and their self-esteem doesn't depend on winning a spelling bee or a Little League game."
https://www.youtube.com/watch?...
From the video (14:50): "Because in a race to build superintelligent AI, there is only one winner: the AI itself."
From: https://www.vice.com/en/articl...
""The billionaires understand that they're playing a dangerous game," Rushkoff said. "They are running out of room to externalize the damage of the way that their companies operate. Eventually, there's going to be the social unrest that leads to your undoing."
Like the gated communities of the past, their biggest concern was to find ways to protect themselves from the "unruly masses," Rushkoff said. "The question we ended up spending the majority of time on was: 'How do I maintain control of my security force after my money is worthless?'"
That is, if their money is no longer worth anything -- if money no longer means power--how and why would a Navy Seal agree to guard a bunker for them?
"Once they start talking in those terms, it's really easy to start puncturing a hole in their plan," Rushkoff said. "The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything's gonna be OK."
What I put together circa 2010 is becoming more and more relevant: https://pdfernhout.net/beyond-... (the "Jobless Recovery" article whose abstract is quoted above).
Tangentially, since you mentioned coal: coal plants are discussed there as an example of the complex dynamics of technological and social change both creating and destroying jobs given externalities -- including from the laissez-faire capitalist economic imperative to privatize gains while socializing risks and costs:
"Also, many current industries that employ large numbers of people (ranging from the health insurance industry, the compulsory schooling industry, the defense industry, the fossil fuel industry, conventional agriculture industry, the software industry, the newspaper and media industries, and some consumer products industries) are coming under pressure from various movements from both the left and the right of the political spectrum in ways that might reduce the need for much paid work in various ways. Such changes might either directly eliminate jobs or, by increasing jobs temporarily eliminate subsequent problems in other areas and the jobs that go with them (as reflected in projections of overall cost savings by such transitions); for example building new wind farms instead of new coal plants might reduce medical expenses from asthma or from mercury poisoning. A single-payer health care movement, a homeschooling and alternative education movement, a global peace movement, a renewable energy movement, an organic agriculture movement, a free software movement, a peer-to-peer movement, a small government movement, an environmental movement, and a voluntary simplicity movement, taken together as a global mindshift of the collective imagination, have the potential to eliminate the need for many millions of paid jobs in the USA while providing enormous direct and indirect cost savings. This would make the unemployment situation much worse than it currently is, while paradoxically possibly improving our society and lowering taxes. Many of the current justifications for continuing social policies that may have problematical effects on the health of society, pose global security risks, or may waste prosperity in various ways is that they create vast numbers of paid jobs as a form of make-work.
Increasing mental health issues like depression and autism, and increasing physical health issues like obesity and diabetes and cancer, all possibly linked to poor nutrition, stress, lack of exercise, lack of sunlight and other factors in an industrialized USA (including industrial pollution), have meant many new jobs have been created in the health care field. So, for example, coal plants don't just create jobs for coal miners, construction workers, and plant operators, they also create jobs for doctors treating the results of low-level mercury pollution poisoning people and from smog cutting down sunlight. Television not only creates jobs for media producers, but also for health care workers to treat obesity resulting from sedentary watching behavior (including not enough sunlight and vitamin D) or purchasing unhealthy products that are advertised.
Macroeconomics as a mathematical discipline generally ignores the issue of precisely how physical resources are interchangeable. Before this shift in economic thinking to a more resource-based view, that question of "how" things are transformed had generally been left to other disciplines like engineering or industrial chemistry (the actual physical alchemists of our age). For one thinking in terms of resources and ecology, the question of how nutrients cycle from farm to human to sewage and then back to farm as fertilizer might be as relevant as discussing the pricing of each of those items, like biologist John Todd explores as a form of ecological economics as it relates to mainstream business opportunities. People like Paul Hawken, Amory Lovins, and Hunter Lovins have written related books on the idea of natural capital. For another example, the question of exactly how coal-fired power plants might connect to human health and other natural capital was previously left to the health profession or the engineering profession before this transdisciplinary shift where economists, engineers, ecologists, health professionals, and people with other interests might all work together to understand the interactions. In the process of thinking through the interactions, considerations about creating healthy and enjoyable jobs can be included in the analysis of costs and benefits to various parties including various things that are often ignored as externalities. So, a simple analysis [in the past] might indicate coal was cheaper than solar power, but a more complete analysis, like attempted in the book Brittle Power might indicate the value in shifting economic resources to the green energy sector as ultimately cheaper when all resource costs, human costs, and other opportunities are considered. These sorts of analyses have long happened informally through the political process such as with recent US political decisions moving towards a ban of new coal-fired power plants. Jane Jacobs, in her writings on the economies of cities, is one example of trying to think through the details of how specific ventures in a city affects the overall structure of that city's economy, including the creation of desirable local jobs through import replacement. A big issue of resource-based economics is to formalize this decision making process somehow, where the issue of creating good jobs locally would be weighed as one factor among many.
Informative story. Mod parent up.
I just submitted your link as a Slashdot story: https://slashdot.org/firehose....
What I put together circa 2010 is becoming more and more relevant:
https://pdfernhout.net/beyond-...
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
which they posted here: https://it.slashdot.org/commen...
Thanks for the kind words about the back and forth with Ol Olsoc. I saw a suggestion once, years ago, that ideally mod points on sites like Slashdot or similar should be used to mod positive interactions between people instead of specific comments. So I'll take that as a "+1" for our interaction.
You both may find this book of interest because you both talk about motivation, whether in relation to competition or other things:
"Drive: The Surprising Truth About What Motivates Us" by Daniel H. Pink"
https://www.danpink.com/books/...
A related amusing video:
"RSA ANIMATE: Drive: The surprising truth about what motivates us"
https://www.youtube.com/watch?...
While no doubt there is more nuance to motivation, in short, Dan Pink explains that Autonomy, Mastery, and Purpose (I would lump Purpose in with Community) are major human motivators. While extrinsic motivation like being paid-per-brick-you-place can get people to do physical jobs efficiently, intellectual jobs requiring creativity tend to be diminished by pay-per-idea rewards. Such rewards are different, though, from a broader recognition of contributing (which is generally well-received and motivating).
Alfie Kohn makes a related point here on how rewards can diminish intrinsic motivation:
https://en.wikipedia.org/wiki/...
Growth Mindset is tangentially related:
"What Having a "Growth Mindset" Actually Means"
https://hbr.org/2016/01/what-h...
Not everyone agrees with all of this, of course, and there are various theories on all this:
https://en.wikipedia.org/wiki/...
Again, to address a previous point by Ol Olsoc and others: concern about "grading" as done in conventional schools is not the same as opposing "feedback". The issue is what kind of feedback, with what timing, is useful to the person and the community.
Related on feedback from Rands in Repose:
https://randsinrepose.com/arch...
https://randsinrepose.com/arch...
https://randsinrepose.com/sear...
And, as a key point, frequent feedback should go both ways:
https://randsinrepose.com/arch...
In general: "How Effective Feedback Fuels Performance"
https://www.gallup.com/workpla...
"Meaningful feedback is frequent.
Effective feedback has an expiration date. Feedback should be a common occurrence -- for most jobs, a few times per week. People remember their most recent experiences best, so feedback is most valuable when it occurs immediately after an action. Managers should maintain an ongoing dialogue with employees -- using conversations that offer timely, in-the-moment feedback that's inspiring, instructive and actionable."
Maybe both of you just have never had great managers? They sure seem rare...
https://www.gallup.com/workpla...
"Gallup has found that one of the most important decisions companies make is simply whom they name manager. Yet our analytics suggest they usually get it wrong. In fact, Gallup finds that companies fail to choose the candidate with the right talent for the job 82% of the time."
And an example of how assigning numbers to employees can go really wrong sometimes:
"How stack ranking corrupts culture, at Uber and Beyond"
https://www.perdoo.com/resourc...
"Creating a cutthroat culture inside your company may seem productive at first, but sooner or later it's bound to catch up -- as Uber is learning."
And:
"Stacked Ranking - A Great Way to Kill Collaboration on Agile Teams"
https://innolution.com/blog/st...
I've collected some stuff on being a better manager here (in part from my own frustrations over the years):
https://github.com/pdfernhout/...
All the best in finding approaches that work for you both to stay motivated in whatever social environments you find yourselves.
And to circle back to my original point, given all the above, what should "educational" social environments look like to keep people of any age motivated? And does that really differ from what is needed in "work" environments? Tangential, but relates to that point:
"The Three Boxes of Life [School, Work, Retirement/Leisure] and How to Get Out of Them: An Introduction to Life/Work Planning" by Nelson Bolles
https://www.amazon.com/Three-B...
A comment from there by hskydg80 from March 11, 2011: "Great concepts, just 30 years old, as are the sources pointed to in the book for more information. Concept of balancing education, work and leisure throughout life rather than overloading in each time periods is major point of book. Could see an update from interested writer to apply timeless principals to today's technology."