
Comment Google's AI does not impress. (Score 1) 104

When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.

Unfortunately, advice, overviews, etc., are very, very complex problems indeed, which means that you're hitting the weak spot of their system.

Comment Re:Billionares Using Our Resources to Replace Peop (Score 1) 47

I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.

Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided wine to an engineering team that cost the same as the computing resources consumed, I'd probably have better designs. But that, too, is unverified. As before, it's perfectly verifiable, it just hasn't been so far.

If an engineer looks at the design and dies laughing, then I'm probably liable for funeral costs but at least there would be absolutely no question as to how good AI is at challenging engineering concepts. On the other hand, if they pause and say that there's actually a neat idea in a few of the concepts, then it becomes a question of how much of that was ideas I put in and how much is stuff the AI actually put together. Again, though, we'd have a metric.

That, to me, is the crux. It's all fine and well arguing over whether AI is any good or not (and, tbh, I would say that my feeling is that you're absolutely right), but this should be definitively measured and quantified, not assumed. There may be far better benchmarks than the designs I have - I'm good, but I'm not one of the greats, so the odds of someone coming up with better measures seem high. But we're not seeing those; we're just seeing toy tests by journalists, and that's not a good measure of real-world usability.

If no such benchmark values actually appear, then I think it's fair to argue that it's because nobody believes any AI out there is going to do well at them.

(I can tell you now, Gemini won't. Gemini is next to useless -- but on the Other Side.)

Submission + - The Readability Threshold

Iamthecheese writes: Online Debate Has a Capacity Limit

This is formatted by an LLM. I've reviewed it, and I ask the reader to suppress his cringe at LLMisms like "the takeaway" because they do not detract from the usefulness of the piece.

Most people have seen this in online arguments: someone is wrong, and it could — in principle — be proven. But the proof would require so many definitions, caveats, steps, evidence, and background details that almost no one would read it all.

Further effort then hits diminishing returns, creating an information bottleneck that caps the conversation’s usefulness.

People often complain that online discourse rewards short, punchy claims over nuanced ones. That is true, but not the root issue. The deeper problem is that many topics demand a minimum level of detail to explain correctly, while online platforms and audiences impose a strict maximum on what they will absorb.

When required detail exceeds what the medium can carry, more effort no longer produces more understanding. The discussion can stay active and heated, but it stops doing the job it pretends to do.

A Simple Model

Define:
— d: detail required for a correct explanation
— l: detail actually delivered
— T: maximum detail the audience will process

Useful explanations must satisfy both l >= d and l <= T.
Thus, productive discourse requires d <= T.
Model audience tolerance as:
T = s × (i / d)

where:
— i: perceived importance of the topic
— s: scaling factor for platform and audience attention capacity

Substituting yields:
d² <= s × i

Usefulness and Diminishing Returns

Define usefulness U as the fraction of required understanding successfully transmitted:
U = min(1, T / d) = min(1, (s × i) / d²)

This creates both a hard upper bound on usefulness and strong diminishing returns on effort. Early contributions can meaningfully improve understanding, but once delivered detail hits the audience limit T, additional effort delivers no further gain.
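As a sketch, the tolerance and usefulness definitions can be turned into a few lines of Python; the parameter values below are purely illustrative assumptions, not measurements:

```python
def usefulness(d, i, s):
    """Fraction of required understanding that survives transmission.

    d: detail required for a correct explanation
    i: perceived importance of the topic
    s: scaling factor for platform/audience attention capacity
    """
    T = s * (i / d)          # audience tolerance
    return min(1.0, T / d)   # same as min(1, (s * i) / d**2)

# Holding importance and platform fixed, doubling the required
# detail quarters the ceiling on usefulness:
print(usefulness(d=2, i=4, s=1))  # 1.0  (argument fits within tolerance)
print(usefulness(d=4, i=4, s=1))  # 0.25 (the quadratic penalty bites)
```

Note that delivered detail l never appears in the bound: once delivery reaches the tolerance T, extra effort has no lever left to pull.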

The constraint is not about intelligence or intellectual honesty — it is fundamentally a bandwidth limit.

What Happens Past the Limit

When d > T, participants are left with three poor options:
— Compress the argument and lose critical nuance
— Deliver the full explanation and lose most of the audience
— Disengage

All reduce the conversation’s value without solving the underlying capacity problem.

This explains why long, careful online explanations frequently fail to change minds — not because they are unconvincing, but because the extra detail does not survive transmission.

Why This Matters

The model shows that topics combining high importance (raising i) with high irreducible complexity (large d) are the most likely to defeat online discourse. Attention grows only linearly with importance, while the penalty from complexity grows quadratically.

Discourse does not collapse; it simplifies. Distinctions vanish, multi-step logic turns into slogans, and unstated assumptions proliferate. The conversation remains engaging but its usefulness is capped far below what the subject demands.

In this light, misinformation often stems less from deliberate falsehoods than from channel capacity too narrow to carry accurate understanding intact.

The Takeaway

The core problem with online debate is not merely a preference for brevity. Beyond a certain point, longer and more accurate arguments stop delivering proportional gains in shared understanding.

Once required detail exceeds what the medium can sustain, conversation usefulness is fundamentally bounded. Extra effort cannot overcome the limit.

Compactly:
Useful discourse requires: d² <= s × i

Maximum usefulness: U = min(1, (s × i) / d²)

Everything else is commentary.

Linking to Social Response to Real Problems

This capacity model supplies a concrete diagnostic for societal responses to complex real-world issues such as climate change, pandemics, or economic inequality.

For any given problem, one can estimate d (its irreducible complexity), i (public attention via search trends or media volume), and s (platform bandwidth). The resulting discourse efficiency index (s × i) / d² (identical to U when it is below 1) becomes a useful quantitative measure of social response potential.

When the index falls significantly below 1, expect strong emotional mobilisation paired with weak, symbolic, or counterproductive policy action; public debate that feels intense yet fails to converge on accurate solutions; and simplification of the issue into slogans and tribal signals rather than actionable understanding.

The index therefore flags problems likely to elicit performative rather than substantive collective responses.
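As an illustration, the index can be computed directly; the topic names and parameter estimates here are invented for the example, not derived from any real data:

```python
def efficiency_index(d, i, s):
    """Discourse efficiency index (s * i) / d**2; equals U when below 1."""
    return (s * i) / d ** 2

# Invented estimates, purely for illustration:
topics = {
    "pop-culture dispute": dict(d=2, i=3, s=2),
    "climate policy": dict(d=10, i=8, s=2),
}
for name, params in topics.items():
    idx = efficiency_index(**params)
    verdict = ("substantive debate possible" if idx >= 1
               else "expect performative response")
    print(f"{name}: index = {idx:.2f} ({verdict})")
```

The high-importance, high-complexity topic scores far below 1 despite attracting more attention, which is exactly the pattern the paragraph above predicts.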

Comment Forget gambling (Score 1) 83

Let's take a big step back. Before anyone has a say in what's done there, let's stop the abuse of the interstate commerce clause. Local jurisdiction is local, and the excuse that down the road money will change hands in an unrelated transaction because of this one is the worst abuse of the constitution out there.
