
Comment Re:Oh holy shit (Score 2, Interesting) 89

Everyone I know who makes an AGI equivalent to mine, except for my household, has one or more dogs, works crazy hours, and has been told that their dogs are lonely and depressed.

Not one or two people.

EVERYONE. Dozens upon dozens of my clients, colleagues, peers, friends from grade school, etc., have a dog or two, and then they have to have someone come spend time with said dog when they're spending 10+ hours away from them.

Wag/Rover/etc. is part of their crazy consumer spending. I am always shocked to hear they're spending $1000 a month on their pets.

Americans are insane about their pets. Instead of buying a dog, I invest in corporate veterinary hospitals, because it's crazy profitable.

Comment "Most of Us Are Using AI Backwards -- Here's Why" (Score 1) 196

https://www.youtube.com/watch?...
"Takeaways [by Nate B. Jones on his video]
  1. Compression Trap: We default to using AI to shrink information--summaries, bullet points, stakeholder briefs--missing opportunities for deeper insight.
  2. Optimize Brain Time: The real question isn't "How fast can I read?" but "When should I slow down and let ideas ferment?" AI can be tuned to extend, not shorten, our cognitive dwell-time on critical topics.
  3. Conversational Partnership: Advanced voice mode's give-and-take cadence keeps ideas flowing, acting like a patient therapist and sharp colleague rolled into one.
  4. Multi-Model Workflow: I pair models deliberately--4o voice for live riffing, O3 for distilling a thesis, Opus 4 for conceptual sculpting--to match each cognitive phase.
  5. Naming the Work: Speaking thoughts aloud while an AI listens helps "name" the terrain of a project, turning vague hunches into navigable coordinates.
  6. AI as Expander: Used thoughtfully, AI doesn't replace brainpower; it amplifies it, transforming routine tooling into a force-multiplier for deep thinking."

Comment Re:The Science is not there yet. (agreed) (Score 5, Informative) 72

As another example, sometimes a genetically-influenced mental trait can be good in one environment and problematical in another. For example:
https://www.psychologytoday.co...
        "One source of such variation in adaptive stability is surely genetic difference among infants, but genes alone do not make a child an orchid or a dandelion. As work by other researchers has shown, the genetic characteristics of children create their predispositions, but do not necessarily determine their outcomes. For example, a consortium studying Romanian children raised in horribly negligent, sometimes cruel orphanages under the dictatorship of Nicolae CeauÅYescu, before his fall in 1989, discovered that a shorter version of a gene related to the neurotransmitter serotonin produced orchid-like outcomes. Children with this shorter allele (an alternative form of a gene) who remained in the orphanages developed intellectual impairments and extreme maladjustment, while those with the same allele who were adopted into foster families recovered remarkably, in terms of both development and mental health.
        Similarly, a team of Dutch researchers studying experimental patterns of children's financial donations--in response to an emotionally evocative UNICEF video--found that participants with an orchid-like dopamine neurotransmitter gene gave either the most charitable contributions or the least, depending upon whether they were rated securely or insecurely attached to their parents--that is, depending on factors that were not genetic."

So parents can potentially select, say, for children who may be less likely to become depressed or miserly in bad circumstances, but they will also be selecting out children who might be excellent or generous in good circumstances.

More examples: https://duckduckgo.com/?q=the+...

Other ideas include "tulip" children:
https://nurtureandthriveblog.c...

Is it ironic or intentional that the company has "orchid" in the name?

Comment Could join forces with New Public? Standards... (Score 3, Informative) 20

https://newpublic.org/
"Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."

Their Digital Spaces Directory listing hundreds of alternative platforms (including Slashdot):
https://newpublic.org/study/33...
"As the social media landscape changes and a new wave of digital spaces emerges, this Directory is meant to be a resource for our field -- a jumping-off-point for further exploration and research for anyone who's interested in studying, building, stewarding, or simply using digital social platforms. We hope this will inspire creative exploration, spark new collaborations, and highlight important progress."

Ultimately though, standards (open protocols, of which there are many good examples better than Bitcoin, like, say, email's RFC 5322; see the small sketch at the end of this comment) are probably more important than implementations for distributed social media. I gave a five-minute lightning talk about that for LibrePlanet 2022:
"Free/Libre Standards for Social Media and other Communications"
https://pdfernhout.net/media/l...

The text of the talk in IBIS outline format is available here:
https://pdfernhout.net/librepl...

From there:

What are key insights for moving forward?

        * Standards unify; incompatible services fragment
        * The power of plain text
        * Simple Made Easy ( Rich Hickey https://www.infoq.com/presenta... )
        * A democratic government is a special case of a free/libre software community

What are current free alternatives?

        * Matrix.org
        * GNU social
        * Mastodon
        * Mattermost (can import from Slack)
        * Wordpress + plugins
        * Drupal + plugins
        * Nextcloud
        * Email with better clients and servers including using JMAP, Nylas, mailpile etc
        * IRC with better clients
        * Smallest Federated Wiki (Ward Cunningham)
        * Citadel
        * Kolab
        * Diaspora
        * A plain website of text files using Git
        * Twirlip (my own experiments, very rough)
        * Many others

What are problems with free alternatives?

        * Usually more about implementations than standards
        * Hard to start using
        * Fragmentation of user bases with walled gardens
        * Often not federated
        * May not scale (like to trillions of messages)
        * Design missing the big messaging picture (e.g. whether email can be used to edit wikis)

What is my guess at what the future holds for innovation in messaging?
        * Free/Libre standards that unify messaging, with free implementations (a social semantic desktop?)
        * Obligatory XKCD on "How Standards Proliferate": https://xkcd.com/927/
        * It is the social consensus issues that are hard at this point, not the technical ones
        * We need less, not more: less standards, less code, less features, less division & stupidity
        * We need better: better standards, better code, better features, better peacemaking & sensemaking
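
As a small aside on the "power of plain text" point above: a standards-compliant RFC 5322 message can still be put together with nothing but the Python standard library, which is the kind of durability an open standard buys you. A minimal sketch (the addresses and subject below are made up for illustration):

        from email.message import EmailMessage

        # Build a minimal RFC 5322 message using only the standard library.
        msg = EmailMessage()
        msg["From"] = "alice@example.org"   # made-up address, for illustration
        msg["To"] = "bob@example.org"       # made-up address, for illustration
        msg["Subject"] = "Standards unify; incompatible services fragment"
        msg.set_content("Plain text survives decades of tooling changes.\n")

        # The whole thing is readable plain text: headers, a blank line, a body.
        print(msg.as_string())

Any mail client or server from the last few decades can parse that output, which is more than can be said for most walled-garden APIs.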

Comment OpenAI CEO Altman says generations vary in use (Score 2) 248

https://tech.yahoo.com/ai/arti...
""Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," Altman said at Sequoia Capital's AI Ascent event earlier this month."

Not a surprise then that a lot of Slashdotters (who tend to be on the older side) emphasize search engine use.

Insightful video on other options for using AI:
"Most of Us Are Using AI Backwards -- Here's Why"
https://www.youtube.com/watch?...
"Takeaways
  1. Compression Trap: We default to using AI to shrink information--summaries, bullet points, stakeholder briefs--missing opportunities for deeper insight.
  2. Optimize Brain Time: The real question isn't "How fast can I read?" but "When should I slow down and let ideas ferment?" AI can be tuned to extend, not shorten, our cognitive dwell-time on critical topics.
  3. Conversational Partnership: Advanced voice mode's give-and-take cadence keeps ideas flowing, acting like a patient therapist and sharp colleague rolled into one.
  4. Multi-Model Workflow: I pair models deliberately--4o voice for live riffing, O3 for distilling a thesis, Opus 4 for conceptual sculpting--to match each cognitive phase.
  5. Naming the Work: Speaking thoughts aloud while an AI listens helps "name" the terrain of a project, turning vague hunches into navigable coordinates.
  6. AI as Expander: Used thoughtfully, AI doesn't replace brainpower; it amplifies it, transforming routine tooling into a force-multiplier for deep thinking."

Other interesting AI Videos:

"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://www.youtube.com/watch?...

"Is AI Apocalypse Inevitable? - Tristan Harris"
https://www.youtube.com/watch?...

See also an essay by Maggie Appleton: "The Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content"
https://maggieappleton.com/ai-...

Talk & video version: "The Expanding Dark Forest and Generative AI: An exploration of the problems and possible futures of flooding the web with generative AI content"
https://maggieappleton.com/for...

On what Star Trek in the 1960s had to say about AI and becoming "Captain Dunsel", and also on the risk of AI reflecting its obsessive & flawed creators:
"The Ultimate Computer // Star Trek: The Original Series Reaction // Season 2"
https://www.youtube.com/watch?...

An insightful Substack post (which I replied to) on that theme of flawed creators making a flawed creation, mentioning the story of the Krell from Forbidden Planet:
https://substack.com/@bernsh/n...
"In Forbidden Planet the Krell built a machine of unimaginable power, designed to materialize thought itself -- but were ultimately destroyed because it also materialized their unconscious, primitive, destructive impulses, which they themselves did not fully understand or control. ..."

They also mention other stories there (perhaps generated from an LLM), including The Garden of Eden, Pandora's Box, The Tower of Babel, The Icarus Myth, and Prometheus. In my response I mentioned some other sci-fi stories that touch on related themes, as well as my sig on the irony of tools of abundance misused by scarcity-minded people.

Inspired by that first video on using AI to help refine ideas, a few days ago I used llama3.1 to discuss an essay I wrote related to my sig ( "Recognizing irony is key to transcending militarism" https://pdfernhout.net/recogni... ). The most surprisingly useful part was when I asked the LLM to list authors who had written related things (most of whom I knew of), and then, as a follow-up, what those authors might have thought about the essay I wrote. The LLM included for each author what parts of the essay they would have praised and also what was missing from the essay from that author's perspective.
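
For anyone curious about the mechanics, it was nothing fancy; roughly something like the sketch below, assuming the ollama Python client and a locally pulled llama3.1 model (the file name and prompts here are simplified stand-ins for what I actually used):

        import ollama  # assumes a local Ollama server with llama3.1 pulled

        # Hypothetical local copy of the essay text.
        essay = open("recognizing-irony.txt").read()

        # First pass: ask for authors who have written related things.
        first = ollama.chat(model="llama3.1", messages=[
            {"role": "user",
             "content": "List authors who have written things related to this essay:\n\n" + essay},
        ])
        print(first["message"]["content"])

        # Follow-up: for each author, what would they praise and what is missing?
        followup = ollama.chat(model="llama3.1", messages=[
            {"role": "user",
             "content": "List authors who have written things related to this essay:\n\n" + essay},
            {"role": "assistant", "content": first["message"]["content"]},
            {"role": "user",
             "content": "For each of those authors, what parts of the essay would they have praised, and what would they say is missing?"},
        ])
        print(followup["message"]["content"])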

Comment Re:Sums up the housing crisis (Score -1) 102

This is such cry-baby nonsense.

NONSENSE.

Since 2008, I have personally mentored dozens of young dudes (at no cost whatsoever, just because that's what successful people do).

I have helped poor dudes in bad neighborhoods buck up, get some side hustles, stack cash, and buy property.

You fucked yourself because you refuse to actually do something to buy property. I don't know ANYONE, starting with even zero money, who couldn't find a nice home in just 2-3 years of saving money properly -- except the lepers in California, and fuck them anyway.

Comment The Big Crunch by David Goodstein (1994) (Score 3, Interesting) 78

https://web.archive.org/web/20...
"The period 1950-1970 was a true golden age for American science. Young Ph.D's could choose among excellent jobs, and anyone with a decent scientific idea could be sure of getting funds to pursue it. The impressive successes of scientific projects during the Second World War had paved the way for the federal government to assume responsibility for the support of basic research. Moreover, much of the rest of the world was still crippled by the after-effects of the war. At the same time, the G.I. Bill of Rights sent a whole generation back to college transforming the United States from a nation of elite higher education to a nation of mass higher education. ...
        By now, in the 1990's, the situation has changed dramatically. With the Cold War over, National Security is rapidly losing its appeal as a means of generating support for scientific research. There are those who argue that research is essential for our economic future, but the managers of the economy know better. The great corporations have decided that central research laboratories were not such a good idea after all. Many of the national laboratories have lost their missions and have not found new ones. The economy has gradually transformed from manufacturing to service, and service industries like banking and insurance don't support much scientific research. To make matters worse, the country is almost 5 trillion dollars in debt, and scientific research is among the few items of discretionary spending left in the national budget. There is much wringing of hands about impending shortages of trained scientific talent to ensure the Nation's future competitiveness, especially since by now other countries have been restored to economic and scientific vigor, but in fact, jobs are scarce for recent graduates. Finally, it should be clear by now that with more than half the kids in America already going to college, academic expansion is finished forever.
        Actually, during the period since 1970, the expansion of American science has not stopped altogether. Federal funding of scientific research, in inflation-corrected dollars, doubled during that period, and by no coincidence at all, the number of academic researchers has also doubled. Such a controlled rate of growth (controlled only by the available funding, to be sure) is not, however, consistent with the lifestyle that academic researchers have evolved. The average American professor in a research university turns out about 15 Ph.D students in the course of a career. In a stable, steady-state world of science, only one of those 15 can go on to become another professor in a research university. In a steady-state world, it is mathematically obvious that the professor's only reproductive role is to produce one professor for the next generation. But the American Ph.D is basically training to become a research professor. It didn't take long for American students to catch on to what was happening. The number of the best American students who decided to go to graduate school started to decline around 1970, and it has been declining ever since. ...
        Let me finish by summarizing what I've been trying to tell you. We stand at an historic juncture in the history of science. The long era of exponential expansion ended decades ago, but we have not yet reconciled ourselves to that fact. The present social structure of science, by which I mean institutions, education, funding, publications and so on all evolved during the period of exponential expansion, before The Big Crunch. They are not suited to the unknown future we face. Today's scientific leaders, in the universities, government, industry and the scientific societies are mostly people who came of age during the golden era, 1950 - 1970. I am myself part of that generation. We think those were normal times and expect them to return. But we are wrong. Nothing like it will ever happen again. It is by no means certain that science will even survive, much less flourish, in the difficult times we face. Before it can survive, those of us who have gained so much from the era of scientific elites and scientific illiterates must learn to face reality, and admit that those days are gone forever. I think we have our work cut out for us."
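
The arithmetic behind that "15 Ph.D students" point is stark. A toy calculation (the 15-students figure and the rough doubling of funding are from the quote; treating it as one doubling per career is my own simplification):

        # Toy model of Goodstein's replacement arithmetic.
        students_per_professor = 15   # PhDs trained over one career (from the quote)
        growth_per_career = 2.0       # positions roughly doubled from 1970 to the 1990s

        # In a steady-state world, each retiring professor is replaced by exactly one.
        steady_state_odds = 1 / students_per_professor
        # Even with positions doubling once per career, only ~2 of the 15 find such jobs.
        doubling_odds = growth_per_career / students_per_professor

        print(f"steady state: {steady_state_odds:.0%} of new PhDs can become research professors")
        print(f"with one doubling per career: {doubling_odds:.0%}")

Either way, the other thirteen or so have to find something else to do, which is Goodstein's point about the mismatch between the training pipeline and the jobs.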

Comment Re: I know people who use Twitter (Score -1, Flamebait) 73

I would rather let Nazis speak and elect to block them myself than have an entire moderation team block everyone they disagree with.

Reddit is equally a shithole.

Heck. /. used to have a good libertarian minority and today it's nerds defending their trans kids here.

Comment "Is AI Apocalypse Inevitable? - Tristan Harris" (Score 1) 77

Another video echoing the point on the risks of AI combined with "bad" capitalism: https://www.youtube.com/watch?...
        "(8:54) So just to summarize: We're currently releasing the most powerful inscrutible uncontrollable technology that humanity has ever invented that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under the maximum incentive to cut corners on safety. And we're doing this because we think it will lead to utopia? Now there's a word for what we're doing right now -- which is this is insane. This situation is insane.
        Now, notice what you're feeling right now. Do you feel comfortable with this outcome? But do you think that if you're someone who's in China or in France or the Middle East, or you're part of building AI, and you're exposed to the same set of facts about the recklessness of this current race, do you think you would feel differently? There's a universal human experience to the thing that's being threatened by the way we're currently rolling out this profound technology into society. So, if this is crazy, why are we doing it? Because people believe it's inevitable. [Same argument for any arms race.] But just think for a second. Is the current way that we're rolling out AI actually inevitable? Like, if literally no one on Earth wanted this to happen, would the laws of physics force AI out into society? There's a critical difference between believing it's inevitable, which creates a self-fulfilling prophecy and leads people to being fatalistic and surrendering to this bad outcome -- versus believing it's really difficult to imagine how we would do something really different. But "it's difficult" opens up a whole new space of options and choice and possibility than simply believing "it's inevitable", which is a thought-terminating cliche. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can't do something else if we believe it's inevitable.
        Okay, so what would it take to choose another path? Well, I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path -- but under different incentives that offer more discernment, foresight, and where power is matched with responsibility. So, imagine if the whole world had this shared understanding about the insanity, how differently we might approach this problem..."

He also makes the point that we ignored the downsides of social media and so ended up with the current problematical situation related to it -- do we really want to do the same with way-more-risky AI? He calls for "global clarity" on AI issues. He provides examples from nuclear, biotech, and ozone on how collective understanding and then collective action made a difference in managing risks.

Tristan Harris is associated with "The Center For Humane Technology" (I joined their mailing list a while back):
https://www.humanetech.com/
"Articulating challenges.
Identifying interventions.
Empowering humanity."

Just saw this yesterday: former President Obama talking about how concerns about AI are not hype (mostly about economic disruption), and also about how cooperation between people is the biggest issue:
"ORIGINAL FULL CONVERSATION: An Evening with President Barack Obama"
https://www.youtube.com/watch?...
        "(31:43) The changes I just described are accelerating. If you ask me right now the thing that is not talked about enough but is coming to your neighborhood faster than you think, this AI revolution is not made up; it's not overhyped. ... I was talking to some people backstage who are uh associated with businesses uh here in the Hartford community. Uh, I guarantee you you're going to start seeing shifts in white collar work as a consequence of uh what these new AI models can do. And so that's going to be more disruption. And it's going to speed up. Which is why uh, one of the things I discovered as president is most of the problems we face are not simply technical problems. If we want to solve climate change, uh we probably do need some new battery technologies and we need to make progress in terms of getting to zero emission carbons. But, if we were organized right now we could reduce our emissions by 30% with existing technologies. It'd be a big deal. But getting people organized to do that is hard. Most of the problems we have, have to do with how do we cooperate and work together, uh not you know ... do we have a ten point plan or the absence of it."

I would respectfully build on what President Obama said by adding that a major reason it is so hard to get people to cooperate around such technology is that we need to shift our perspective, as suggested by my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

I said much the same in the open letter to Michelle Obama from 2011:
https://pdfernhout.net/open-le...

One thing I would add to such a letter now is a mention of Dialogue Mapping using IBIS (perhaps even AI-assisted) to help people cooperate on solving "wicked" problems through visualizing the questions, options, and supporting pros and cons in their conversations:
https://cognitive-science.info...
https://pdfernhout.net/media/l...
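
To make the IBIS part concrete: the underlying data model is tiny, just questions, ideas (options) that answer them, and pro/con arguments attached to ideas, all forming a tree. A rough sketch (the node kinds and field names are my own illustration, not any particular tool's format):

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            kind: str            # "question" | "idea" | "pro" | "con"
            text: str
            children: list = field(default_factory=list)

            def show(self, depth=0):
                # Print the dialogue map as an indented outline.
                print("  " * depth + f"[{self.kind}] {self.text}")
                for child in self.children:
                    child.show(depth + 1)

        # A tiny map of one "wicked problem" conversation.
        root = Node("question", "How should we roll out AI?", [
            Node("idea", "Race ahead for short-term advantage", [
                Node("con", "Maximum incentive to cut corners on safety"),
            ]),
            Node("idea", "Build shared understanding first", [
                Node("pro", "Power gets matched with responsibility"),
            ]),
        ])
        root.show()

The point of Dialogue Mapping is less the data structure than the shared display: people see their questions, options, and pros and cons captured as they talk, which makes it harder to go in circles.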

Here is one example of some people working in that general area to support human collaboration on "wicked problems" (there are others, but these are people I am conversing with at the moment): "The Sensemaking Scenius" (one way to help get the "global clarity" that Tristan Harris and, indirectly, President Obama call for):
https://www.scenius.space/
        "The internet gods blessed us with an abundance of information & connectivity -- and in the process, boiled our brains. We're lost in a swirl of irrelevancy, trading our attention, at too low a price. Technology has destroyed our collective sensemaking. It's time to rebuild our sanity. But how?
Introducing The Sensemaking Scenius, a community of practice for digital builders, researchers, artists & activists who share a vision of a regenerative intentional & meaningful internet."

Something related to that by me from 2011:
http://barcamp.org/w/page/4722...
        "This workshop was led by Paul Fernhout on the theme of tools for collective sensemaking and civic engagement."

I can hope for a convergence of these AI concerns, these sorts of collaborative tools, and civic engagement.

Bucky Fuller talked about being a "trim tab", a smaller rudder on a big rudder for a ship, where the trim tab slowly turns the bigger rudder which ultimately turns the ship. Perhaps civic groups can also be "trim tabs", as in: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. (Margaret Mead)"

To circle back to the original article on what Facebook is doing: frankly, if there are some people at Facebook who really care about the future of humanity more than the next quarter's profits, this is the kind of work they could be doing related to "Artificial Super Intelligence". They could add tools for Dialogue Mapping to Facebook's platform (using IBIS or something similar, perhaps supported by AI) to help people understand the risks and opportunities of AI and to support related social collaboration toward workable solutions -- rather than just rushing ahead to create ASI for some perceived short-term economic advantage. And this sort of collaboration-enhancing work is the kind of thing Facebook should be paying 100-million-dollar signing bonuses for, if such bonuses make any sense.

I quoted President Carter in that open letter, and the sentiment is as relevant about AI as it was then about energy:
        http://www.pbs.org/wgbh/americ...
        "We are at a turning point in our history. There are two paths to choose. One is a path I've warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure. All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy [or AI] problem."

Comment Re:FireWire iPod? (Score 1) 64

I still have access to industrial/aerospace test benches using FireWire. But of course they are quite legacy equipment; I would not plug a Mac running a modern version of macOS into them. Backwards compatibility can always be a bit tricky with Macs (the switch to 64-bit only and the changes of CPU architecture did not help, of course).
