
Comment Need a laughable perspective shift (Score 1) 137

As with my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
https://pdfernhout.net/recogni...
        "Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land? ...
        These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
        There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
      So, while in the past, we had "nothing to fear but fear itself", the thing to fear these days is ironically ... irony. :-)
      So, how can we transcend militarism?
      Simple persuasive rhetoric was tried, and failed, when Albert Einstein said, with the creation of atomic weapons everything had changed except our way of thinking.
      The economic argument against war was tried, and failed; see "War is a Racket" by Two-Time Congressional Medal of Honor Recipient Major General Smedley D. Butler...
      A basic moral argument against war was tried, and failed; see Freeman Dyson's book "Weapons and Hope" that says nuclear weapons are a moral evil, like slavery.
      A deeper religious argument against war was tried, and failed, see "James P. Carse, Religious War In Light of the Infinite Game, SALT talk"...
      We even tried public education through TV to create an enlightened citizenry (what high hopes back when TV was created) and that even got corrupted into promoting and celebrating violence. See the book by Diane E. Levin and Nancy Carlsson-Paige "The War Play Dilemma" for ways to deal with that if you have children...
        So, people have tried, and tried again, and failed to turn the tide, both people in the military and people outside the military. Still, each attempt has contributed, but together they have not yet been enough to turn the tide and help the USA transcend militarism and empire.
      What else can we try that does not just beget more violence? ...
      Maybe ironic humor is our last, best hope against the war machines?
      As was quoted by Joel Goodman of the Humor Project...: "There are three things which are real: God, human folly, and laughter. The first two are beyond our comprehension. So we must do what we can with the third. (John F. Kennedy)"
      The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
      We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working"). ..."

Comment The value - and cost - of being first to market (Score 1) 163

The Zip won out over a superior technology, the Imation SuperDisk, because it was first to market. Iomega's Zip disk was proprietary and more expensive per megabyte, and it was almost never bootable. Imation solved those problems with the SuperDisk, which could also read 1.44 MB floppies in the same drive. But by the time Imation released theirs, Iomega had a huge head start and few people paid attention.

Later on, though, Iomega's reliance on being first to (mass) market ended up killing off their product. They were never able to hit a cost per MB even remotely close to CD-R, let alone USB flash drives, nor could they get anywhere near the speed of USB flash. If they had taken the time to innovate further, we might be talking about new Zip-related technologies in the tens of GB or larger; instead they are in the dustbin.
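The cost-per-MB gap can be made concrete with a back-of-the-envelope comparison. The prices below are purely illustrative guesses (street prices varied a lot year to year), not sourced figures:

```python
# Hypothetical circa-2000 media prices, to illustrate why cost per MB
# sank the Zip drive; the exact dollar figures are assumptions.
media = {
    "Zip 100 disk": (10.00, 100),   # (price USD, capacity MB)
    "CD-R":         (1.00, 650),
}

for name, (price, mb) in media.items():
    print(f"{name}: ${price / mb:.4f}/MB")
```

Even with generous assumptions for the Zip, that's a gap of well over an order of magnitude per megabyte, before CD-R prices kept falling.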

Comment Re:A serious question (Score 1) 40

It's a good question, and one I'm working on answering: give AI hard, complex engineering problems, then have engineers look at the output to determine whether that output is meaningful or just expensive gibberish.

By doing this, I'm trying to feel around the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it can generally be solved by people in a similar length of time. I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.

If the really hard problems aren't solvable by AI at all (it's all just gibberish) then you can never improve on that figure. It's as good as it is going to get.

I've open sourced what AIs have come up with so far, if you want to take a look, because that is what will tell you whether good can come out of AI or not.

Comment Re:Employee conversation in work environment (Score 1, Interesting) 40

The conversations are not private, but PII laws still apply. Anything in the messages that violates PII privacy law is forbidden regardless of company policy; policy cannot overrule the law.

Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.

However, business these days is international, and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what: any conversation that physically takes place on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.

This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.

Comment Eric Schmidt on AI used to make bioweapons soon (Score 1) 13

From the transcript, about 43 minutes into a public conversation with Eric Schmidt from Apr 10, 2025: https://www.youtube.com/watch?...
====
          "Question: Thanks for the great conversation so far. Leonard Justin. I'm a PhD student at MIT. Um, I was wondering if you could just discuss a bit more some of the risks you see coming specifically with respect to biology and how we should go about mitigating those. What's the role of the AI developers? What's the role of government? Um, yeah, how can we move forward on that?
        ----
        Schmidt: So, so you're going to know a lot more about this area than I, but speaking as an amateur in your field, the two current risks from these models are cyber and biorisks.
        The cyber ones are easy to understand. The system can generate cyber attacks and in theory can generate zero-day cyber attacks that we can't see and it can unleash them and furthermore it can do it at scale.
        In biology, you get some evil, you know, the equivalent of Osama bin Laden. They would start with an open-source model. Now these open source models have been restricted using a testing process. Uh they're called cards and they test it out and they delete that information from the model.
        It turns out it's relatively easy to undo, to reverse essentially, those security modes around the model, and that's a danger. So now you've got a model that can generate bad pathogens.
        Then the second thing you have to do is you have to find things to build them. Our collective assessment at the moment is that that's a nation state risk, not an individual terrorist risk. Although we could be wrong, but there's plenty of examples uh and this the the report talks about some of the Chinese examples where in theory if they wanted to they could not only manufacture bad things but sorry design them but also manufacture them.
        The good news and the reason we're all alive today is that the bio stuff is hard to manufacture and distribute and to make deadly and and spread and so forth and so on. Um there's lots of evidence for example that you can take a bad bio right now and modify it just enough that the testing regimes and the sort of surveillance regimes it bypasses and that's another threat.
        So that's what I worry about.
        But I think at the moment our consensus is we're right below the threshold where this is an issue, and the consensus in my side of the industry is that one or two more turns of the crank and these issues will be -- and you know by then you'll be graduated and you can sort of help solve these problems.
        Um, a crank is turned every 18 months or so. This is about three years.
        ----
        Moderator: But theoretically, couldn't AI and biotechnology help you come up with a counter measure?
        ----
        Schmidt: Um, I had thought so, and that was the argument I made until ... I do a lot of national security work. And there's a term called offense dominant. And offense dominant is a situation in a military context where the attack cannot be countered at the same level as the attack. In other words, the damage is done.
        And most people, most biologists who've worked in this believe that while the model can be trained to counter this, the damage from the offense part is far greater than the ability to defend it, which is why we're so worried about it."
====

Ultimately, I feel a big part of the response to that threat needs to be a shift in perspective, such as people laughing at my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." :-)

Explored in more detail here:
"Recognizing irony is key to transcending militarism"
https://pdfernhout.net/recogni...
        "... Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
        These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
        There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ..."

Comment Not interesting yet. (Score 4, Informative) 49

It's possible that cetaceans have a true language. They certainly have something that seems to function as a "hello, I am (name)", where the name part differs between individuals but the surrounding clicks are identical. The response clicks also include that same phrase, which researchers think serves the purpose of a name.

But we've done structural analysis to death and, yes, all the results are interesting (it seems to have high information content, in the Shannon sense, seems to have some sort of structure, and seems to have intriguing early-language features), but so does the Voynich Manuscript and there's a 99.9% chance that the Voynich Manuscript is a fraud with absolutely no meaning whatsoever. Structure only tells you if something is worth a closer look and we have known for a long time that cetacean clicks were worth a closer look. Further structural work won't tell us anything we don't already know.
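The kind of first-order structural measure alluded to above, information content in the Shannon sense, can be sketched in a few lines. This is a minimal illustration over arbitrary symbol strings, not the actual analyses used on cetacean recordings, and `shannon_entropy` is a name chosen here for illustration:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Bits per symbol for a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive sequence carries little information per symbol...
print(shannon_entropy("AAAAAAAB"))   # low entropy
# ...while a varied one carries more, yet that alone says nothing
# about meaning: the Voynich Manuscript also scores well.
print(shannon_entropy("ABCDEFGH"))   # 3 bits/symbol
```

Which is exactly the point: high entropy and apparent structure flag a sequence as worth a closer look, but they can't distinguish language from an elaborate fake.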

What we need is to have a long-term recording of activities and clicks/whistles, where the sounds are recorded from many different directions (because they can be highly directional) and where the recording positively identifies the source of each sound, what that source was doing at the time (plus what they'd been doing immediately prior and what they do next), along with what they're focused on and where the sounds were directed (if they were). This sort of analysis is where any new information can be found.

But we also need to look at lessons learned in primate research, linguistics, sociology, and anthropology to understand what ISN'T going to work, in terms of approaches. In all those fields, we've learned that you learn best immersively, not from a distance. If an approach has failed in EVERY OTHER SOCIAL SCIENCE, then assuming it will work in cetacean research is stupid. It might be the correct way to go, but the assuming is the stupid part. If things fail repeatedly, regardless of where they are applied, there's a decent chance the stuff that keeps failing is defective.

Submission + - Slowbooks, AI coded cleanroom re-imagined Quickbooks (github.com)

Archangel Michael writes: The Story
VonHoltenCodes ran QuickBooks 2003 Pro for 14 years for side-business invoicing and bookkeeping. Then the hard drive died. Intuit's activation servers have been dead since ~2017, so the software can't be reinstalled. The license he paid for is worthless.

So he built his own replacement and transferred all his data from the old .QBW file using IIF export/import.
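IIF is QuickBooks' plain tab-delimited interchange format, which is what makes this kind of migration feasible at all. A minimal sketch of a parser for it, assuming the standard layout (header rows starting with `!TRNS`/`!SPL`, then `TRNS`/`SPL` data rows closed by `ENDTRNS`); this is illustrative, not Slowbooks' actual import code:

```python
def parse_iif(text):
    """Group an IIF export into transactions: each a TRNS row dict
    plus its SPL split rows, terminated by ENDTRNS."""
    headers = {}            # row type -> column names from !TRNS / !SPL lines
    transactions, current = [], None
    for line in text.splitlines():
        if not line.strip():
            continue
        fields = line.split("\t")
        tag = fields[0]
        if tag.startswith("!"):              # header definition row
            headers[tag[1:]] = fields[1:]
        elif tag == "TRNS":
            current = {"trns": dict(zip(headers["TRNS"], fields[1:])),
                       "splits": []}
        elif tag == "SPL" and current is not None:
            current["splits"].append(dict(zip(headers["SPL"], fields[1:])))
        elif tag == "ENDTRNS" and current is not None:
            transactions.append(current)
            current = None
    return transactions

sample = "\n".join([
    "!TRNS\tTRNSTYPE\tDATE\tACCNT\tAMOUNT",
    "!SPL\tTRNSTYPE\tDATE\tACCNT\tAMOUNT",
    "!ENDTRNS",
    "TRNS\tINVOICE\t3/1/2003\tAccounts Receivable\t100.00",
    "SPL\tINVOICE\t3/1/2003\tConsulting Income\t-100.00",
    "ENDTRNS",
])
txns = parse_iif(sample)
print(txns[0]["trns"]["ACCNT"])   # Accounts Receivable
```

Real IIF exports carry many more columns and row types, but this two-pass-free structure (header rows define the schema, data rows follow it) is the whole format.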

The codebase is annotated with "decompilation" comments referencing QBW32.EXE offsets, Btrieve table layouts, and MFC class names — a tribute to the software that served him well for 14 years before its maker decided it should stop working.

This is a clean-room reimplementation. No Intuit source code was available or used.

(Side note from the story submitter: this is the beginning of the end of Windows-only applications.)
