Submission: The Readability Threshold
Iamthecheese writes: Online Debate Has a Capacity Limit
This is formatted by an LLM. I've reviewed it, and I ask the reader to suppress his cringe at LLMisms like "the takeaway" because they do not detract from the usefulness of the piece.
Most people have seen this in online arguments: someone is wrong, and it could — in principle — be proven. But the proof would require so many definitions, caveats, steps, evidence, and background details that almost no one would read it all.
Further effort then hits diminishing returns, creating an information bottleneck that caps the conversation’s usefulness.
People often complain that online discourse rewards short, punchy claims over nuanced ones. That is true, but not the root issue. The deeper problem is that many topics demand a minimum level of detail to explain correctly, while online platforms and audiences impose a strict maximum on what they will absorb.
When required detail exceeds what the medium can carry, more effort no longer produces more understanding. The discussion can stay active and heated, but it stops doing the job it pretends to do.
A Simple Model
Define:
— d: detail required for a correct explanation
— l: detail actually delivered
— T: maximum detail the audience will process
Useful explanations must satisfy both l >= d and l <= T.
Thus, productive discourse requires d <= T.
Model audience tolerance as:
T = s × (i / d)
where:
— i: perceived importance of the topic
— s: scaling factor for platform and audience attention capacity
Substituting T = s × (i / d) into d <= T yields:
d² <= s × i
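The capacity condition can be sketched in a few lines of code. This is a toy illustration, not an empirical model; every number below is invented.

```python
def tolerance(s: float, i: float, d: float) -> float:
    """Audience tolerance T = s * (i / d) for a topic of
    importance i, required detail d, and platform factor s."""
    return s * (i / d)

def within_capacity(s: float, i: float, d: float) -> bool:
    """True when required detail fits audience tolerance (d <= T),
    which is algebraically equivalent to d**2 <= s * i."""
    return d <= tolerance(s, i, d)

# A simple topic fits; a more complex topic of equal importance does not.
print(within_capacity(s=1.0, i=10.0, d=3.0))  # 3**2 = 9 <= 10 -> True
print(within_capacity(s=1.0, i=10.0, d=4.0))  # 4**2 = 16 > 10 -> False
```

Note how the same importance i supports only a square-root-sized budget of detail, which is the quadratic penalty the model turns on.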
Usefulness and Diminishing Returns
Define usefulness U as the fraction of required understanding successfully transmitted:
U = min(1, T / d) = min(1, (s × i) / d²)
This creates both a hard upper bound on usefulness and strong diminishing returns on effort. Early contributions can meaningfully improve understanding, but once delivered detail hits the audience limit T, additional effort delivers no further gain.
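The ceiling and the quadratic falloff are easy to see numerically. A minimal sketch, with illustrative values for s and i chosen arbitrarily:

```python
def usefulness(s: float, i: float, d: float) -> float:
    """U = min(1, (s * i) / d**2): fraction of required
    understanding that survives transmission."""
    return min(1.0, (s * i) / d**2)

# U stays pinned at 1 while d**2 <= s * i, then falls off quadratically.
for d in (2, 4, 8, 16):
    print(f"d = {d:2d}  U = {usefulness(s=1.0, i=20.0, d=d):.4f}")
```

For s × i = 20, doubling d from 8 to 16 cuts U by a factor of four, which is the "strong diminishing returns" claim in concrete form.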
The constraint is not about intelligence or intellectual honesty — it is fundamentally a bandwidth limit.
What Happens Past the Limit
When d > T, participants are left with three poor options:
— Compress the argument and lose critical nuance
— Deliver the full explanation and lose most of the audience
— Disengage
All reduce the conversation’s value without solving the underlying capacity problem.
This explains why long, careful online explanations frequently fail to change minds — not because they are unconvincing, but because the extra detail does not survive transmission.
Why This Matters
The model shows that topics combining high importance (raising i) with high irreducible complexity (large d) are most likely to defeat online discourse. Attention tolerance grows only linearly with importance, while required detail enters the bound quadratically, so complexity wins.
Discourse does not collapse; it simplifies. Distinctions vanish, multi-step logic turns into slogans, and unstated assumptions proliferate. The conversation remains engaging but its usefulness is capped far below what the subject demands.
In this light, misinformation often stems less from deliberate falsehoods than from channel capacity too narrow to carry accurate understanding intact.
The Takeaway
The core problem with online debate is not merely a preference for brevity. Beyond a certain point, longer and more accurate arguments stop delivering proportional gains in shared understanding.
Once required detail exceeds what the medium can sustain, conversation usefulness is fundamentally bounded. Extra effort cannot overcome the limit.
Compactly:
Useful discourse requires: d² <= s × i
Maximum usefulness: U = min(1, (s × i) / d²)
Everything else is commentary.
Linking to Social Response to Real Problems
This capacity model supplies a concrete diagnostic for societal responses to complex real-world issues such as climate change, pandemics, or economic inequality.
For any given problem, one can estimate d (its irreducible complexity), i (public attention via search trends or media volume), and s (platform bandwidth). The resulting discourse efficiency index (s × i) / d² (identical to U when below 1) becomes a useful quantitative measure of social response potential.
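As a sketch of how such a ranking might look, here is the index computed for a few hypothetical topics. The d, i, and s values are pure placeholders, not estimates from any real data source.

```python
def efficiency_index(d: float, i: float, s: float) -> float:
    """Discourse efficiency index (s * i) / d**2; equals U when below 1."""
    return (s * i) / d**2

# Invented parameters for illustration only.
topics = {
    "viral meme":     dict(d=1.0,  i=2.0, s=1.0),
    "local zoning":   dict(d=4.0,  i=3.0, s=1.0),
    "climate policy": dict(d=10.0, i=9.0, s=1.0),
}

# Rank topics from most to least discourse-tractable.
for name, p in sorted(topics.items(), key=lambda kv: -efficiency_index(**kv[1])):
    print(f"{name:15s} index = {efficiency_index(**p):.2f}")
```

On these made-up numbers the high-importance, high-complexity topic lands far below 1, matching the prediction that importance alone cannot rescue a topic whose detail budget is blown.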
When the index falls significantly below 1, expect strong emotional mobilisation paired with weak, symbolic, or counterproductive policy action; public debate that feels intense yet fails to converge on accurate solutions; and simplification of the issue into slogans and tribal signals rather than actionable understanding.
The index therefore flags problems likely to elicit performative rather than substantive collective responses.