Comment Re:What are the odds (Score 1) 66

Or maybe... you'll get to lead the local AI project to protect your company's knowledge from being freely given to (extracted by) Microsoft, OpenAI, Anthropic, Google, or Meta.

What is the liability for any of these companies when your company's use of their products becomes "training data" that any other customer can view with just the correct prompt?

After all, "it is for entertainment purposes only."

Comment I asked an AI: Why do LLMs hallucinate? (Score 1) 13

Here was the answer:

Large Language Models (LLMs) hallucinate due to several fundamental design characteristics and limitations:

**Training Data Issues:**
- **Inaccurate information in training data**: LLMs learn from vast amounts of text that may contain errors, biases, or outdated information
- **Confidence without verification**: They generate responses based on patterns in training data rather than factual accuracy
- **Synonym substitution**: They can replace words with semantically similar ones, potentially changing meaning

**Architecture Limitations:**
- **Probabilistic nature**: LLMs predict the next word based on probability distributions, not truth or fact-checking
- **No real-world grounding**: Unlike humans, they don't have direct experience of reality to validate claims
- **Pattern matching vs. understanding**: They excel at recognizing patterns but don't truly "understand" concepts

**Generation Process:**
- **Autoregressive sampling**: Each token is generated based on previous tokens, potentially amplifying initial errors
- **Temperature settings**: Higher temperatures increase randomness and potential for fabricated content
- **Context window limitations**: They may lose track of earlier context or contradict themselves
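The temperature point above can be sketched numerically. This is a minimal illustration with made-up token names and logit values, not any real model's API: dividing the logits by a temperature before the softmax sharpens the distribution when T < 1 and flattens it toward uniform when T > 1, which is why high temperatures give implausible tokens a real chance of being sampled.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores (logits) into probabilities, scaling by
    temperature first: T < 1 sharpens, T > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores: the model strongly prefers "Paris".
tokens = ["Paris", "London", "Banana"]
logits = [4.0, 2.0, 0.5]

cold = softmax_with_temperature(logits, 0.5)  # nearly deterministic
hot = softmax_with_temperature(logits, 2.0)   # much more random

# At T=0.5 almost all probability mass sits on the top token;
# at T=2.0 the implausible "Banana" gets non-trivial probability.
print(dict(zip(tokens, (round(p, 3) for p in cold))))
print(dict(zip(tokens, (round(p, 3) for p in hot))))
```

Once a low-probability token like that is sampled, the autoregressive bullet above kicks in: every later token is conditioned on it, so one odd pick can snowball into a confident-sounding fabrication.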

**Information Gaps:**
- **Knowledge cutoffs**: Training data has a cutoff date, so they don't know about recent events
- **Uncertainty handling**: They often generate confident-sounding responses even when uncertain
- **Lack of external verification**: No ability to cross-reference facts or consult real-world sources

**Inherent Design Trade-offs:**
- **Coherence vs. accuracy**: Prioritizing fluent, grammatically correct responses over factual correctness
- **Generalization vs. specificity**: May generate plausible-sounding but incorrect details for rare topics

The hallucination problem is essentially a consequence of treating language generation as a statistical pattern-matching exercise rather than a fact-gathering and verification process.

Discuss among yourselves...

Comment Crypto Is For Crime! (Score 4, Interesting) 31

Now with "wrench attacks," as XKCD informed us all years ago.

Complete with wrench attack gangs whose real leaders are offshore directing their minions over phones using voice disguisers!

Just an added twist on top of all the crooked exchanges, holding companies, and ATMs.

Without strict worldwide regulation, the only good move is not to play. Unless you run the exchange and created the crypto.

Read how Crypto is the crime enabler: Number Go Up!

Comment Re:"for entertainment purposes only" (Score 1) 66

This is why almost every platform for Internet services DOES NOT USE Microsoft software: finding a security breach or bug can take months or years for Microsoft to fix. If they ever do...

At least, with Open Source software you can fix it or swap it out.

I omitted the critical aspect of infrastructure and internet-facing services with open ports providing essential services. This discussion is about Windows users, not Microsoft itself, whose access to its own source code should compel it to fix vulnerabilities internally rather than leaving users dependent on its arbitrary timelines and priorities.

Like Linux users, Windows users running internet-facing services face significant security challenges.

Few companies that want reliable internet-facing services run them on Windows. Even Microsoft Azure has more Linux than Windows deployments.

Does Oracle run its internet-facing servers on Windows? What about Facebook or Google?

If you require scale, security, and reliability for internet-facing services, along with the right to repair when issues arise, Microsoft is not the appropriate platform. They provide no warranty for any deployments and may never address bugs or critical vulnerabilities in a timely manner.

Comment Re:"for entertainment purposes only" (Score 1, Troll) 66

Sure, why not just pass all your sensitive personal and corporate data through a "thinking machine," "for entertainment purposes only." Sounds like a great plan that smart people would do. /sarcasm

Because Microsoft knows that you pay for and use all their other easily breached, low-security software, without any warranty or fitness for any use, and always have since the mid-1970s!

This is why almost every platform for Internet services DOES NOT USE Microsoft software: finding a security breach or bug can take months or years for Microsoft to fix. If they ever do...

At least, with Open Source software you can fix it or swap it out.

Comment Re:My kind of propaganda! (Score 1) 13

Obvious and stupid.

Just like "Social Media," which is really built on "personal realities" created for each user by algorithms, the same algorithms that current AI is largely based on.

Look at all the people it has injured. Now they are winning court cases. And countries smarter than the US are regulating it.

Comment This movie explains the situation well.. (Score 4, Interesting) 13

for your non-tech-industry associates and relatives.

The conclusion will hopefully start a lot of discussion and activism to prevent the dystopia path, the chaos path, or the extinction path.

Like this deeper, more complete one, or the Schoolhouse afterschool-special version.

Comment NO! - As AI's just make stuff up and lie about it. (Score 1) 22

AI is barely OK if you let it rewrite a few paragraphs.

When you ask historical questions, it makes stuff up, presents it as fact, and argues with you about the false information.

So the longer an AI-created article is, the more fact-checking has to be done. And that makes asking it about anything important in the first place really not worth it.

A lot of the current data in AI models remains biased.

And now we know that: Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time.

Given that almost 75% of participants in a recent study trusted inaccurate AI and would pass it on "as fact" to others, how dumb will this slop make us all as it spreads as "more authoritative" than all the social-media info and manipulation slop already everywhere on the net?

Recommended viewing: THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST

Comment Less Liability When AI Fucks Up - Can't sue the AI (Score 1) 89

So in our new world of irresponsibility and negligence by AI, who takes on the liability and pays out the injury awards? Probably some reverse centaur...

There is a book coming out on this topic soon!

We'll all be so happy when we are employed by AI, which cannot legally be held liable for all the mistakes it makes that we see every day.

Comment Re:7 KM away (Score 1) 71

the considerable impact of humans becoming permanently unemployable.

What "considerable impact?" Famous economist Adolf Hitler discovered that if you just shoot them, people's recurring maintenance costs fall to nearly zero.

This also (mostly) worked for the American south prior to 1860, which is where "Hitler and team" got many of their ideas from.

Though "Hitler and team" did come up with more efficient disposal methods for equipment that no longer worked.

AI is bound to be even more efficient, since anything not keeping it running (and likely expanding) is consuming resources it wants or needs.

Comment Re:why is anyone at Oracle still (Score 1) 46

The handwriting has been on the wall for years now.

Ah, not yet, and likely not for a very long time... as long as Oracle customers keep stepping into the trap.

The Oracle Licensing Model: A Perpetual Revenue Engine Funding Larry Ellison's Expanding Empire

Larry Ellison has built one of the most resilient revenue machines in enterprise technology. Oracle's licensing structure — characterized by complex, recurring fees, mandatory support contracts, and aggressive audit practices — creates what industry analysts have long described as a **"licensing treadmill."** Customers, once deeply integrated into Oracle's ecosystem, find migration prohibitively expensive, effectively ensuring a continuous stream of capital flowing toward Ellison's broader ambitions.

A Foundation of Predictable Revenue

Oracle's enterprise software contracts are engineered for retention and entrapment. Maintenance fees, cloud conversion pressures, and compliance audits collectively discourage departure, generating billions in annual recurring revenue with remarkable consistency.

Ellison's Expanding Portfolio

That capital has not remained idle. Ellison has deployed his wealth aggressively: acquiring a reported **98% stake in the Hawaiian island of Lanai**, pursuing significant influence within the media landscape, and maintaining notable proximity to political figures across the ideological spectrum. His part in the TikTok, Paramount, and now Warner Bros. Discovery acquisitions further signals ambitions extending well beyond enterprise software.

A Pattern Worth Noting

The throughline is straightforward: enterprise customers fund an empire. Whether through media acquisitions, political relationships, or island ownership, Ellison's ambitions are, in no small part, underwritten by organizations that simply **cannot afford to leave.**

That is, by any measure, a formidable position to occupy.

Comment Re:Anyway just google it and you will find (Score 1) 50

Again I cannot emphasize this enough, the literal spoon's worth of plastic in your brain.

I really want to know the configuration of this AI and the base model it was built on, so I can identify it and avoid trusting or using it inadvertently.

Go see The AI Doc: Or How I Became an Apocaloptimist, so you'll know when to kiss your ass goodbye or not...
