
Comment Bluetti showed a sodium-ion battery station in 2022 (Score 1) 129

https://solarbuildermag.com/pr...
"BLUETTI, a manufacturer of solar + storage products, including LiFePO4 battery stations, is debuting a sodium-ion battery technology at CES 2022. Recently BLUETTI has announced the âoeworldâ(TM)s first sodium-ion battery stationâoe, NA300, and its compatible battery module B480. Sodium-ion batteries have become an alternative to their lithium-ion counterparts in many industries due to their high abundance and low costs. BLUETTI's first-generation sodium-ion battery excels in thermal stability, fast-charging capacity, low-temperature performance, and integration efficiency, despite slightly lower energy density than its LiFePO4 ones. ..."

But I am not sure what happened with it as I no longer see it for sale on Bluetti's website.
https://web.archive.org/web/20...
"NA300 & B480
World's First Sodium-ion Battery Power Station
Debut at CES 2022 (Jan 5th, Las Vegas)"

Which makes me wonder whether the technology was still unstable in some way at the time.

In any case, good luck to Natron Energy with their product. The world definitely can use better batteries.

Comment Something I posted on Gary Kildall and CP/M here in 2014 (Score 5, Insightful) 80

https://tech.slashdot.org/comm...

I quote someone else saying: "The PC world might have looked very different today had Kildall's Digital Research prevailed as the operating system of choice for personal computers. DRI offered manufacturers the same low-cost licensing model which Bill Gates is today credited with inventing by sloppy journalists - only with far superior technology. DRI's roadmap showed a smooth migration to reliable multi-tasking, and in GEM, a portable graphical environment which would undoubtedly have brought the GUI to the low-cost PC desktop years before Microsoft's Windows finally emerged as a standard. But then Kildall was motivated by technical excellence, not by the need to dominate his fellow man."

And my comment on that included (removing all the supporting links):
      "We had choices as a society. I saw some of them first hand in the 1970s and 1980s when I started in computing. I bought Forth cartridges for the Commodore VIC and C64. I worked very briefly on a computer with CP/M (although using Forth on it though). The OS choice pushed by the person born with a million dollar trust fund who "dumpster dived" for OS listings won (who did little of the development work himself) -- with an empire built on QDOS which has shaky legal standing as a clone of CP/M which is probably why IBM did not buy it itself. And we were the worse for it as a society IMHO. ...
        But that problematical path would not have been possible without political and legal decisions to base the development of computing around the idea of "artificial scarcity" via copyrights and patents which set the stage for that. We still have choices, and we can still pick different ways forward. [With] the free and open source software movements, we are in a sense returning to older ways of sharing knowledge that were more popular before artificial scarcity was so broadly thought to be a good idea for promoting progress. One should always ask, "progress in what direction"? ...
    Bill Gates could have spent his lifetime writing free software. That being born a multi-millionaire was not enough for him is a sign of an illness that causes "financial obesity", not something to be emulated. But, in the end, it is not Bill Gates who has destroyed our society as much as all the people who want to be the next Bill Gates and who support regressive social policies they hope to benefit from someday. ...
      Those who have the impulse to share and cooperate more than hoard and compete are still stuck trying to navigate the economic mess we have made of today's society through artificial scarcity, the growing rich/poor divide, the diversion of so much productivity into weapons and consumer fads, and so on. The late 1960s and early 1970s when Kildall, Moore and Kay/Ingalls were having their breakthroughs were a more hopeful time in that sense. ...
    Still, the web and HTML5/JavaScript/CSS3 are a new hope for sharing via open standards, and they have been a big success in that sense. I'm moving more of my own work in that direction for that reason (even for all their own issues). Like has been said about JavaScript -- it is better than we deserve considering its history and the pressures that we all let shape it."

So, while you and others who are posting here are no doubt right on technical limits and marketing issues, I would say the "downfall" story is more complex socially than one man and his decisions with one design.

I'll again echo a key point about Gary by someone else quoted at the start: "But then Kildall was motivated by technical excellence, not by the need to dominate his fellow man." We need to build a society and an economy where people who make that choice get more support and respect.

Comment Re:Ban AI yesterday! (vs. social enlightenment) (Score 1) 37

Thanks for all the insightful posts over the years, WaffleMonster! Especially this one. I linked to it in a comment in a discussion thread about another AI story:
"AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes"
https://science.slashdot.org/c...
"James P. Hogan's "The Two Faces of Tomorrow" is another good novel about the emergence of AI (and a "hard sci-fi" one on I which is rarer), and it shaped some of my thinking on such things. Even though it was mainly about AI in conflict with humans -- and now I am more concerned (in the short term) about humans using AI to dominate other humans. Interesting post by "WaffleMonster" mentioning that theme: ..."

In general though I think restrictions on AI development -- no matter how sensible some may be -- won't work as well as moving towards general socio-technical enlightenment as I mention in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Because, as you suggest, the impulse of greed is driving many abuses of AI (and other things); a ban is not going to stop greed, and greed will find ways around a ban. But one way to deal with greed may be to transcend it somehow in a healthy way (as is true for any of the Seven Deadly Sins).

What Albert Einstein said about nuclear weapons may also apply to AI:
"The release of atomic power has changed everything except our way of thinking ... the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker. (1945)"

Of course, since we now have AI delivered through watches, things are even more complicated these days. But a change of (societal) heart can still make sense.

Comment Re:Human purpose and "Challenge to Abundance" (Score 1) 315

You're welcome. And thanks for your kind words (and also those from "Chrontius" in reply to you). It makes me glad to hear them.

Agreed, the Culture novels are another interesting perspective on humanoid/AI coexistence.

James P. Hogan's "The Two Faces of Tomorrow" is another good novel about the emergence of AI (and a "hard sci-fi" one at that, which is rarer), and it shaped some of my thinking on such things. It was mainly about AI in conflict with humans, though -- and now I am more concerned (in the short term) about humans using AI to dominate other humans.

Interesting post by "WaffleMonster" mentioning that theme:
https://news.slashdot.org/comm...
"AI is today being leveraged on an industrial scale to judge, influence and psychologically addict billions for simple commercial gain. As technology improves thing will only become worse especially as corporations pursuit global propaganda campaigns to scare the shit out of people in order to make them compliant to legislative agendas that favor the very same corporations leveraging AI against them today."

I raised that issue with Ray Kurzweil via emails circa 2000-2010, when he (unwisely in my opinion) suggested that developing AI through the vehicle of capitalistic, venture-funded, hyper-competitive corporations emphasizing artificial scarcity would produce all sorts of wonderful results for humanity. I suggested he may have had trouble seeing the problem with that because he had been financially rewarded in the past for his AI-related successes in corporate settings.

From one of those emails by me to Kurzweil that someone else put up on their website:
"Subject: Review of Ray Kurzweil's The Singularity Is Near"
https://web.archive.org/web/20...
"... Kurzweil's perspective on life and politics presumably derives from being
a self-made captain of industry in the capitalist USA -- having made a
fortune producing sophisticated computer equipment and related software no
doubt through a lot of cleverness and hard work. In the USA, historically
that position in society generally implies adopting a
Republican/Libertarian militaristic and market-driven perspective, if for
no other reason than to get along easily with peers and to do well in the
marketplace. But also, for the few percent who succeed at the American
dream (unlike the masses of failures) he can look back at his experience,
and perhaps ignoring luck or help from others, claim his success was due
to his own choices, and if others just made similar good choices, they too
would be successful. It is like a millionaire lottery winner exhorting
everyone to play the lottery. As an expert in statistics though, Kurzweil
should, if self-reflective, be able to see some statistical problems with
this viewpoint. Who would be the workers to be bossed around if everyone
was a successful as him? And how would his products command a price
premium if everyone was making such things? Clearly such a society of
universal success would need to be fundamentally different than the one
which produced his own personal success.
      Also, Kurzweil made his money in control of patents and copyrights and
must presumably strongly believe in the value of their role in controlling
resources to create artificial scarcity to justify his own financial
success. Thus, for example, he laments the problems of the commercial
music industry in the USA in enforcing scarcity of the product of
musicians they control under contract, while he ignores the rise of
uncontracted individuals more easily producing their own garage band music
and the blossoming of a world of personal and private media production.
      One would expect anyone's personal experience to color his or her
projections for the future and what the best public policy would be in
relation to those projections. That is a given. But it is the failure to
acknowledge this that the most harm can be done.
      To grossly simplify a complex subject, the elite political and economic
culture Kurzweil finds himself in as a success in the USA now centers
around maintaining an empire through military preparedness and preventive
first strikes, coupled with a strong police state to protect accumulated
wealth of the financially obese. This culture supports market driven
approaches to supporting the innovations needed to support this
militarily-driven police-state-trending economy, where entrepreneurs are
kept on very short leashes, where consumers are dumbed down via compulsory
schooling, and where dissent is easily managed by removing profitable
employment opportunities from dissenters, leading to self-censorship.
Kurzweil is a person now embedded in the culture of the upper crust
economically of the USA's military and economic leadership. So, one might
expect Kurzweil to write from that perspective, and he does. His solutions
to problems the singularity pose reflect all these trends -- from
promoting first strike use of nanobots, to design and implementation
facilitated through greed, to widespread one-way surveillance of the
populace by a controlling elite.
      But the biggest problem with the book _The Singularity Is Near: When
Humans Transcend Biology_ is Kurzweil seems unaware that he is doing so.
He takes all those things as given, like a fish ignoring water, ignoring
the substance of authors like Zinn, Chomsky, Domhoff, Gatto, Holt, and so
on. And that shows a lack of self-reflection on the part of the book's
author. And it is is a lack of self-reflection which seems dangerously
reckless for a person of Kurzweil's power (financially, politically,
intellectually, and persuasively). Of course, the same thing could be said
of many other leaders in the USA, so that he is not alone there. But one
expects more from someone like Ray Kurzweil for some reason, given his
incredible intelligence. With great power comes great responsibility, and
one of those responsibilities is to be reasonably self-aware of ones own
history and biases and limitations. He has not yet joined the small but
growing camp of the elite who realize that accompanying the phase change
the information age is bringing on must be a phase change in power
relationships, if anyone is to survive and prosper. And ultimately, that
means not a move to new ways of being human, but instead a return to old
ways of being human, as I shall illustrate below drawing on work by
Marshall Sahlins. ..."

Anyway, there is always a reason for hope -- and I am glad I could encourage some more of that -- even as you are right that there are also always reasons for despair. Courage involves choosing hope in the face of despair and fear -- including for all the reasons Howard Zinn mentioned about the unpredictability of the future.

I personally hope that using AI to support Dialogue Mapping with IBIS to visualize discussions could help small groups make better decisions together and get at least some positive benefits from AI (as I suggested in 2019):
https://cognitive-science.info...
https://issip.org/issip-cognit...
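
For anyone unfamiliar with IBIS (Issue-Based Information System), the underlying data model is just a tree of questions (issues), candidate answers (ideas), and pro/con arguments attached to those answers. Here is a minimal, hypothetical sketch in Python of what such a structure might look like -- the node kinds and field names are my own illustration, not the API of any particular Dialogue Mapping tool:

----
from dataclasses import dataclass, field
from typing import List

# IBIS node kinds: an issue (question), an idea (candidate answer),
# and pro/con arguments attached to an idea.
@dataclass
class Node:
    kind: str                      # "issue", "idea", "pro", or "con"
    text: str
    children: List["Node"] = field(default_factory=list)

def add(parent: Node, kind: str, text: str) -> Node:
    child = Node(kind, text)
    parent.children.append(child)
    return child

def show(node: Node, depth: int = 0) -> None:
    # Print the discussion map as an indented outline.
    print("  " * depth + "[" + node.kind + "] " + node.text)
    for child in node.children:
        show(child, depth + 1)

# Example: mapping a small group discussion.
root = Node("issue", "How should our team use AI tools?")
idea = add(root, "idea", "Use AI to summarize meeting notes")
add(idea, "pro", "Saves everyone's time")
add(idea, "con", "Summaries may miss important nuance")
show(root)
----

The hope would be that an AI could help populate and maintain such a map from a free-form conversation, while the humans in the group still make the actual decisions.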

Comment Re:Human purpose and "Challenge to Abundance" (Score 1) 315

These are generally all reasonable concerns/questions.

Maybe the world needs (or perhaps already has?) an AI-focused equivalent of Reddit's "CollapseSupport", which exists to support people who believe society is about to collapse from resource issues, war, or other social dysfunction?
https://www.reddit.com/r/Colla...

Although looking at it again, I see some AI threads:
https://www.reddit.com/r/Colla...

Like for example: "Feeling my future is hopeless and pointless due to rise of AI"
https://www.reddit.com/r/Colla...

I don't think there are any easy answers to the questions you raise. Some are general existential questions about the meaning of life that people have asked themselves for thousands of years in relation to religion and philosophy. Douglas Adams even explores that satirically by having an AI come up with the answer "42".

Still, isn't being a good human friend or a good human parent something worth aspiring to, even in a world of superhuman AI and robots? (Or, as discussed on Slashdot a couple of weeks ago in a thread that took such questions in a different direction, consider whether "reality" is a simulation -- and if so, by whom and why?)

The way you frame the question may also reflect your cultural upbringing. For example, as someone raised in an evangelical Protestant tradition, I was essentially taught that meaning was imposed from above by a single almighty being ("God") who gave us a "Bible" outlining our responsibilities and so on. That created issues for me about finding meaning when I moved beyond a lot of the dogma. My teenage years might have been easier if I had understood how much meaning and purpose can come from within -- or rather, from an interaction between our emotions and our physical and social circumstances.

While not identical, as you point out, AI calls into question many of the beliefs people may have about finding meaning through material contributions to society. Current late-stage capitalism already raises plenty of questions about motivation (e.g. see the book "Bullshit Jobs"), and AI will accelerate that. Related to that, see Dan Pink's work, including this humorous (in parts) talk:
"RSA ANIMATE: Drive: The surprising truth about what motivates us"
https://www.youtube.com/watch?...

One comment there: "This makes perfect sense in Maslow's Hierarchy of Needs. If you make money not an issue anymore is key. If people are worried about money, How they're going to pay the bills and survive, they're not going to be able to focus on cognitive tasks and perform well. "Autonomy," "Mastery," and "Purpose" fall under the "self-actualization" capstone. Money is not as important at that level of motivation."

Star Trek also explores that theme directly in a few episodes (like when a tycoon from the 21st century is defrosted and finds that all the money he had -- plus trying to make more -- doesn't mean anything anymore). Could you still find purpose or meaning in life if you found yourself (and friends/family) in a Star Trek world with a matter replicator and cheap fusion power?

Related humorous sci-fi with a superhuman-with-blindspots AI called "Skippy":
"Columbus Day (Expeditionary Force Book 1)"
https://www.amazon.com/columbu...
"A man's voice, with a snarky attitude, rang out behind me. "Excellent! Bipedal, 1300cc brain, opposable thumbs. A hairless monkey. You can carry me out of here."
I spun around in a panic. No one was there. "Who said that?"
"Me. Here, I'm the shiny cylinder on the shelf. I unlocked that door."
"You are? You mean you're talking to me through a speaker in that thing?"
"No, I am that thing. I am what you monkeys call an artificial intelligence."
I cocked my head and examined it skeptically. "You look like a chrome-plated beer can." That was a completely accurate description. The cylinder even tapered slightly at the top, and was ringed by a ridge. "You're really an AI?"
"Yup. You should refer to me as The Lord God Almighty."
"That position is already filled. I think I'll call you Skippy."
"Don't call me that, it sounds disrespectful, monkey."
"You prefer shithead? Because that's the other option, Skippy-O.""

The book "Voyage from Yesteryear" by James P. Hogan depicts a post-scarcity society where humans have learned to interact well with robots and AIs (of a limited sort).

Some quotes to ponder:

"The woods would be pretty quiet if no bird sang there but the best."

https://www.thenation.com/arti...
"I am totally confident not that the world will get better, but that we should not give up the game before all the cards have been played. The metaphor is deliberate; life is a gamble. Not to play is to foreclose any chance of winning. To play, to act, is to create at least a possibility of changing the world. There is a tendency to think that what we see in the present moment will continue. We forget how often we have been astonished by the sudden crumbling of institutions, by extraordinary changes in people's thoughts, by unexpected eruptions of rebellion against tyrannies, by the quick collapse of systems of power that seemed invincible. What leaps out from the history of the past hundred years is its utter unpredictability. ... (Howard Zinn)"

Two quotes from the book "On Caring" I mention here:
https://pdfernhout.net/reading...

"No one else can give me the meaning of my life; it is something I alone can make. The meaning is not something predetermined which simply unfolds; I help both to create it and to discover it, and this is a continuing process, not a once-and-for-all. (Milton Mayeroff, from On Caring)"

"Through caring for certain others, by serving them through caring, a [person] live the meaning of [his or her] own life. In the sense in which a [person] can ever be said to be at home in the world, [he or she] is at home not through dominating, or explaining, or appreciating, but through caring and being cared for. -- (Milton Mayeroff, from On Caring)"

https://sacred-texts.com/aor/e...
"But it must not be assumed that intelligent thinking can play no part in the formation of the goal and of ethical judgments. When someone realizes that for the achievement of an end certain means would be useful, the means itself becomes thereby an end. Intelligence makes clear to us the interrelation of means and ends. But mere thinking cannot give us a sense of the ultimate and fundamental ends. To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly. (Albert Einstein)"

Health resources I've collected over the years (made possible by the web):
https://github.com/pdfernhout/...

On the rats and robots thing, here is an optimistic view of such a world (where the "Stryx" AIs are the "robots") even as there are dystopian versions in other sci-fi:
"Date Night on Union Station (EarthCent Ambassador Book 1)"
https://www.amazon.com/Night-U...
"Alien artificial intelligence has brought humanity onto a galactic tunnel network
Finding a match for the top human diplomat on Union Station is another story..."

I prefer the "Old Guy Cybertank" series though as a more realistic version of AI (even if the settings are more fantastical):
https://www.amazon.com/An-Old-...
"In the distant future mankind creates sentient cybertanks patterned on the human brain to help fight their alien enemies. Then, inexplicably, the humans vanished. They just went away. All that is left of the human empire are the cybertanks who, in their own way, keep the human civilization alive. With an intelligence based on the human psyche, the cybertanks continue to defend human space, but also perform scientific research, create art, form committees and ponder the universe. These are the stories of one of the first cybertanks, known to his friends as "Old Guy." He has outlived most of his peers, and has had a wealth of experiences over his long life, but he is starting to slowly become obsolete. Join him and his comrades Double-Wide, Whiffle-Bat, Smoking Hole, Mondocat, and Bob, as they live and love and fight alien enemies such as the Amok, the Yllg, and the Fructoids."

While I am not saying whether he is right in that specific case, there is a speech by Captain Picard in this episode (and similar ones in others) about not letting computers take over everything, which leads him to manually pilot the Enterprise out of a trap:
https://en.wikipedia.org/wiki/...

It echoes a point in the previously mentioned "Skills of Xanadu" story about a world of people in conflict with a world of machines.

To some extent, as with the AI-generated song I mentioned about my sig, there is also the risk of AI becoming another "Supernormal Stimuli" or "Pleasure Trap" (see the books by those names), which may lead to an "Acceleration of Addictiveness", as in the essay by that name. Those all pose problems because our instincts, adapted for material scarcity (including a scarcity of things like fat, sugar, and salt -- plus scarce energy inclining us to be naturally lazy), can lead us astray in a world of abundance.

Anyway, you have your finger on the pulse of really big issues confronting our culture. All the best as you wrestle with them.

One last point to ponder is "better for whom"? Or, in cost-benefit analysis terms, who pays the costs and who gets the benefits? How do "we" ensure a good, healthy balance persists over time, given that systems can decay for all sorts of reasons, and that AI does not have millions of years of evolution behind it to select for some sort of stability in an ecological niche? And if "enjoyment" is part of the human experience, won't people still want to build wooden bridges over koi ponds in their backyards just because they like working with wood and enjoy taking care of fish? Is it really better for such a person to ask or let robots do all that for them? That is a bit of the same question as whether to hire a (human) landscaper and gardener to do such things for you or to do them yourself (a question many wealthy people may wrestle with even without AI and robots, given that other people can be tasked to do things in various cultures).

All systems have limitations (including just from the speed of light) -- which tends to support Manuel De Landa's point here:
http://www.t0.or.at/delanda/me...
"To make things worse, the solution to this is not simply to begin adding meshwork components to the mix. Indeed, one must resist the temptation to make hierarchies into villains and meshworks into heroes, not only because, as I said, they are constantly turning into one another, but because in real life we find only mixtures and hybrids, and the properties of these cannot be established through theory alone but demand concrete experimentation. Certain standardizations, say, of electric outlet designs or of data-structures traveling through the Internet, may actually turn out to promote heterogenization at another level, in terms of the appliances that may be designed around the standard outlet, or of the services that a common data-structure may make possible. On the other hand, the mere presence of increased heterogeneity is no guarantee that a better state for society has been achieved. After all, the territory occupied by former Yugoslavia is more heterogeneous now than it was ten years ago, but the lack of uniformity at one level simply hides an increase of homogeneity at the level of the warring ethnic communities. But even if we managed to promote not only heterogeneity, but diversity articulated into a meshwork, that still would not be a perfect solution. After all, meshworks grow by drift and they may drift to places where we do not want to go. The goal-directedness of hierarchies is the kind of property that we may desire to keep at least for certain institutions. Hence, demonizing centralization and glorifying decentralization as the solution to all our problems would be wrong. An open and experimental attitude towards the question of different hybrids and mixtures is what the complexity of reality itself seems to call for. To paraphrase Deleuze and Guattari, never believe that a meshwork will suffice to save us."

So, inspired by that, I don't think there will be a single "perfect" AI making all decisions anywhere any time soon. If such a thing made sense, it is likely the Earth would already have a huge centralized intelligence. Instead, in nature we tend to see "intelligence" (as decision-making) spread across the planet, working to various ends -- ends sometimes aligned with those of the organisms around it and sometimes not.

(Laptop power fading, so sending this even if it could be better...)

Comment Human purpose and "Challenge to Abundance" (Score 2) 315

"What will be our purpose if robots can do everything better than we can?"

You raise interesting and insightful points and questions.

Right now there is almost always a person better than you at almost everything. And probably often a machine system too, for many human activities (e.g. excavators, automated looms, 3D printers, stamping machines, combine harvesters, railroad track-laying equipment as in the song about John Henry, etc.). Yet "purpose" still exists for most people.

Moss still grows even when trees tower around it. Rats still have emotions and desires and raise families in New York City surrounded by skyscrapers, buses, and people way bigger and in many ways much smarter than them individually. Which goes to show that the impulse to live and grow and thrive can come mainly from within in a healthy organism in certain environments.

A lot of people find learning to be fun -- even things like learning to weave cloth using a hand loom when automatic looms can generally do a better and cheaper job.

Raising children well is a big source of purpose for many people.

On your point on AI taking over, two thoughts.

One is that this is essentially what happened between China and the USA in many areas, with China taking over producing most goods (to the cheering of US stockholders and CEOs and others over the past few decades). The political concept is finding security in someone else becoming dependent on you. The game of "Go", which involves encirclement, is an important aspect of Eastern strategic thinking.

The other idea is something I thought about while hanging out at Hans Moravec's Mobile Robot Lab at the CMU Robotics Institute in the mid-1980s. While Hans came up with an intriguing idea of "Mind Children" going off to explore space, it seemed alternatively plausible that humans would create robots just capable enough to cause the death of all people (e.g. an aggressive weaponized robotic cockroach) without the robots being able to perpetuate themselves or grow in any way after all the humans are gone. I've devoted my life since to trying to create a more positive future for humanity than such a dystopia.

The biggest threat from AI right now is just that technology in general is an amplifier, and AI is currently being used to amplify the worst competitive impulses in humanity related to greed and war. So while there may be other long-term issues from AI, in the short term, it is humans using AI for problematical human ends that is the most dangerous. Related writing by me on the irony of all that:
"Recognizing irony is key to transcending militarism"
https://pdfernhout.net/recogni...

But, yes, overall, such unrecognized irony as mentioned in my sig could also potentially explain the Fermi paradox.

Here is a song about my sig, written by prompting an LLM for lyrics which a friend then set to music using Suno AI:
"Challenge to Abundance"
https://suno.com/song/d3d8c296...

To link that song experiment with your points about purpose and about AI doing things better: I am still not sure how I feel about making that song by turning a request for creativity over to an AI -- which can feel paradoxically disempowering even while I feel empowered by quickly doing things I have not learned to do well...

As I initially wrote to my friend when I sent the lyrics:
========
As a tech example, after listening to those videos, and no doubt subconsciously inspired by your book idea with prompts and responses, I asked an LLM AI to write a song about my sig. :-) (Done at https://www.chatpdf.com/ as no sign-in is required, but you can't do much with it.)

Me: Please make a song about "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

----
(Verse 1) In the 21st century, we face a paradox so clear, Technologies of abundance, yet scarcity we fear, Irony in our hands, what will we choose to see, A world of endless possibilities or stuck in scarcity?

(Chorus) The biggest challenge we face, it's plain to see, Embracing abundance or stuck in scarcity, Let's break free from old ways, embrace what could be, The irony of our times, let's set our minds free.

(Verse 2) Nanotech and automation, changing how we live, Yet some still cling to scarcity, afraid to forgive, Exponential growth in tech, a world of plenty in sight, But old mindsets hold us back, in the darkness of night.

(Chorus) The biggest challenge we face, it's plain to see, Embracing abundance or stuck in scarcity, Let's break free from old ways, embrace what could be, The irony of our times, let's set our minds free.

(Bridge) Imagine a world where all can thrive, Where abundance reigns and all can survive, Let go of the past, embrace the new, The choice is ours, what will we do?

(Chorus) The biggest challenge we face, it's plain to see, Embracing abundance or stuck in scarcity, Let's break free from old ways, embrace what could be, The irony of our times, let's set our minds free.

(Outro) The 21st century's challenge, clear as day, Abundance or scarcity, which will we sway, Let's rise above, embrace the light, The irony of our times, let's make it right.
----

Me: Thank you!

----
You're welcome! If you have any more questions or need further assistance in the future, feel free to ask. Have a great day!
----
========

I can still hope people can write a better song than that about the topic.

Bottom line: humans are adapted to our tribal hunter/gatherer roots -- involving exercise, socializing face-to-face with people we know for years, sunlight, healthy whole foods, meaningful direct hand-eye work, avoiding negative ruminative thinking, good sleep, natural surroundings, and so on. While there were bad things about such a lifestyle in the past (e.g. little emergency care for accidents), in general the further we get away from that context, the unhealthier humans tend to become. See Stephen Ilardi's writings on that: https://tlc.ku.edu/

Also related, from the 1950s, by Theodore Sturgeon on machines and humans and technology (which helped inspire Ted Nelson, hypertext, and the world wide web):
"The Skills of Xanadu"
https://archive.org/details/pr...

And as a cautionary tale, see "With Folded Hands" about the perils of overly helpful AI:
https://en.wikipedia.org/wiki/...

Comment Military robots are ironic (Score 1) 131

By me from 2010: https://pdfernhout.net/recogni...
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
        Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
        Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
        These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
        There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
        The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
        We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins' intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working")."

Comment P.S. Eat GBOMBS to reduce cancer risk (Score 1) 57

"G-BOMBS: The anti-cancer foods that should be in your diet right now"
https://www.drfuhrman.com/blog...
"Looking for the biggest bang for your caloric buck? Remember the acronym G-BOMBS, which stands for Greens, Beans, Onions, Mushrooms, Berries and Seeds. These foods fuel your body with protective micronutrients and phytochemicals that support your immune defenses and have a wide range of health-promoting effects. And hereâ(TM)s a bonus: Theyâ(TM)re delicious!"

For anyone worried about any type of cancer, this is essential reading and action.

Comment Mammography Screening: Truth, Lies and Controversy (Score 2) 57

https://www.amazon.com/Mammogr...
"'This book gives plenty of examples of ad hominem attacks, intimidation, slander, threats of litigation, deception, dishonesty, lies and other violations of good scientific practice. For some years I kept a folder labeled Dishonesty in breast cancer screening on top of my filing cabinet, storing articles and letters to the editor that contained statements I knew were dishonest. Eventually I gave up on the idea of writing a paper about this collection, as the number of examples quickly exceeded what could be contained in a single article.' From the Introduction The most effective way to decrease women's risk of becoming a breast cancer patient is to avoid attending screening. Mammography screening is one of the greatest controversies in healthcare, and the extent to which some scientists have sacrificed sound scientific principles in order to arrive at politically acceptable results in their research is extraordinary. In contrast, neutral observers increasingly find that the benefit has been much oversold and that the harms are much greater than previously believed. This groundbreaking book takes an evidence-based, critical look at the scientific disputes and the information provided to women by governments and cancer charities. It also explains why mammography screening is unlikely to be effective today. All health professionals and members of the public will find these revelations disturbingly illuminating. It will radically transform the way healthcare policy makers view mammography screening in the future. 'If Peter Gotzsche did not exist, there would be a need to invent him ...It may still take time for the limitations and harms of screening to be properly acknowledged and for women to be enabled to make adequately informed decisions. When this happens, it will be almost entirely due to the intellectual rigour and determination of Peter Gotzsche.' From the Foreword by Iona Heath, President, RCGP 'If you care about breast cancer, and we all should, you must read this book. Breast cancer is complex and we cannot afford to rely on the popular media, or on information from marketing campaigns from those who are invested in screening. We need to question and to understand. The story that Peter tells matters very much.' From the Foreword by Fran Visco, President, National Breast Cancer Coalition."

And also by the same researcher (Peter C. Gotzsche):
"Mammography screening is harmful and should be abandoned"
https://www.ncbi.nlm.nih.gov/p...
"Mammography screening has been promoted to the public with three simple promises that all appear to be wrong: It saves lives and breasts by catching the cancers early. Screening does not seem to make the women live longer; it increases mastectomies; and cancers are not caught early, they are caught very late. They are also caught in too great numbers. There is so much overdiagnosis that the best thing a women can do to lower her risk of becoming a breast cancer patient is to avoid going to screening, which will lower her risk by one-third. We have written an information leaflet that exists in 16 languages on www.cochrane.dk, which we hope will make it easier for a woman to make an informed decision about whether or not to go to screening. I believe that if screening had been a drug, it would have been withdrawn from the market long ago. Many drugs are withdrawn although they benefit many patients, when serious harms are reported in rather few patients. The situation with mammography screening is the opposite: Very few, if any, will benefit, whereas many will be harmed. I therefore believe it is appropriate that a nationally appointed body in Switzerland has now recommended that mammography screening should be stopped because it is harmful."

It looks like, for human radiologists reading screening mammograms, roughly 9 out of 10 positive results are false positives -- false positives which can turn someone's life upside down and cause a lot of stress and unnecessary medical procedures. If AI could eliminate the false positives, I'd be curious how much that would change the cost/benefit ratio in Gotzsche's conclusions.
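
To make that rough "9 out of 10" figure concrete, here is a back-of-the-envelope positive predictive value calculation. The prevalence, sensitivity, and specificity numbers are illustrative assumptions I picked for the sketch, not figures from Gotzsche's work or any specific study:

----
# Back-of-the-envelope positive predictive value (PPV) for a screening test.
# All three input numbers below are illustrative assumptions.
prevalence = 0.005     # assume ~5 per 1000 screened actually have cancer
sensitivity = 0.87     # assumed fraction of cancers the screen detects
specificity = 0.95     # assumed fraction of healthy people correctly cleared

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV: {ppv:.1%}")                  # ~8% of positive results are real
print(f"False positives: {1 - ppv:.1%}")  # ~92%, i.e. roughly 9 in 10
----

With numbers in that ballpark, most positive screening results are false alarms simply because the disease is rare in the screened population. Note this only addresses the false-positive side; the overdiagnosis concern Gotzsche describes above is a separate issue.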

Comment Looking forward to recognizing irony and AI rights (Score 2) 12

As I suggest here: https://pdfernhout.net/recogni...
"The big problem is that all these new war machines [and companies] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [and economic] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."

Also, what of the rights of sentient AIs?
https://thereader.mitpress.mit...
"The questions at hand are straightforward: At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates? When, if ever, would it make sense to say, âoeItâ(TM)s the robotâ(TM)s faultâ? Conversely, when might a robot, an intelligent artifact, or other socially interactive mechanism be due some level of social standing or respect? When, in other words, would it no longer be considered a waste of time to ask the question: âoeCan and should robots have rights?â

Comment Post-scarcity abundance perspective shift needed (Score 1) 139

You are likely right that in the end regulation won't make much of a difference. Indeed, there is too much incentive for individuals to cheat -- or for power centers to accumulate more power by being the only ones to use something.

The proposal in the article also suggests outlawing open source software and data related to AI. Such laws may end any possible checks and balances on government if governments -- or large corporations symbiotic with governments -- ultimately are the only ones allowed to shape AI, and not individuals or small groups.

So what might make a difference? A broad, worldwide perspective shift towards "A Newer Way of Thinking", as Donald Pet, Buckminster Fuller, Albert Einstein, Lewis Mumford, Ursula K. Le Guin, and others have suggested. Our path coming out of any AI singularity may have a lot to do with our moral path going into one.

Donald Pet's work:
https://peace.academy/
"Donald Pet Releases Masterpiece: "Albert Einstein's Vision: A Clear Path to Global Harmony Through A Newer Way Of Thinking (ANWOT)"
https://www.webwire.com/ViewPr...
"Peace Academy By Donald Pet Video Trailer"
https://www.youtube.com/watch?...

Also: "Whether it is to be Utopia or Oblivion will be a touch-and-go relay race right up to the final moment. Humanity is in a final exam as to whether or not it might qualify for continuance in the Universe. (Utopia Or Oblivion: The Prospects for Humanity, R. Buckminster Fuller)"

As my sig suggests, the biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity. If we use AI from a scarcity-minded perspective emphasizing competition and self-centeredness instead of cooperation and compassion, we probably will doom ourselves. If we collectively use AI from an abundance perspective, we may still doom ourselves through excessive wealth concentration or rogue AI, but at least there is some hope that we might do better than that. See Marshall Brain's "Manna" story for two paths forward, or James P. Hogan's novels "The Two Faces of Tomorrow" and "Voyage from Yesteryear".

More by me on scarcity vs. abundance thinking, related to militarism but applicable to commerce as well, since it is all intertwined in our society:
https://pdfernhout.net/recogni...
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
      The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
