Comment Re:But I want it now! (Score 1) 26
Express service --> no cargo parachute
They should sell cross-sections through the cable (about 1/2 to 1 inch thick) mounted and framed as a way for people to "own a piece of global communications history". People would pay upwards of $500-$1000 I bet. They could produce a limited run of them and recycle the rest. It wouldn't even use that much of the material -- the vast majority would still be recycled.
OK, not exactly. Vinge's story line around this was a bit more technically fanciful even by current standards. I'll give you that in spirit it sounds similar and I did think of the same thing as you right away when I read the post.
In Rainbows End, the idea is that they're digitizing a library by essentially running the books through a big cross-cut shredder whose output is blown into the air by fans or some sort of blower. The fragments (from many books at once) are blown past a series of high-speed cameras that photograph large groups of the little pieces multiple times in flight as they pass through each camera's field of view. Algorithms on the back end reassemble everything in a way that's kind of like a 2-D equivalent of multiple sequence alignment from molecular biology.
In the book, it's controversial because the digital assembly process creates a fair amount of uncertainty and destroys the originals.
A further reach is the animated series "Pantheon" where human brains themselves are destructively scanned to create digital duplicates of peoples' minds.
If you're comfortable moving the threshold-of-trust out to ~34.024825 years, I'm cool with that move.
Note: I used 365.25 days/year to somewhat crudely account for leap years when I did the calculation using pow(2,30).
Actually, a plain 30 seems too small a number of seconds to be practical. I don't want to stop trusting people within a half minute of their births... that seems too pessimistic.
Screw that. Never trust anyone under a billion seconds (approx. 31.709791 years).
Those breakpoints should all be written in seconds.
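For anyone who wants to check the arithmetic, here's a quick sketch. The 365.25 days/year figure is the one used in the pow(2,30) calculation above; the billion-second figure in the thread appears to have been computed with plain 365-day years.

```python
# Convert the "trust thresholds" from seconds to years.
SECONDS_PER_YEAR_JULIAN = 365.25 * 24 * 60 * 60   # 31,557,600 s (leap-year-ish)
SECONDS_PER_YEAR_COMMON = 365 * 24 * 60 * 60      # 31,536,000 s (plain years)

# 2**30 seconds with 365.25-day years -> roughly 34.02 years
print(2**30 / SECONDS_PER_YEAR_JULIAN)

# one billion seconds with 365-day years -> roughly 31.71 years
print(10**9 / SECONDS_PER_YEAR_COMMON)
```

Either way, the threshold lands comfortably past "30 years", which is presumably the point.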
Indeed, I'm reminded of the paper "No Silver Bullet" by Fred Brooks (the guy who wrote "The Mythical Man-Month"). In the paper, he lays out the distinction between essential complexity and accidental complexity.
Most AI coding tools at present appear to be able to address accidental complexity (though imperfectly). When you creep into trying to get them to address essential complexity, their lack of reasoning skills seems to become more apparent.
I don't know of anyone who has made a convincing argument that LLMs, or something like ChatGPT's new reasoning models, can address essential complexity without as much human review effort as just having a human do the task from the start.
That's just today. It's going to be interesting to see how things develop over the next 1-2 years.
OK, that's just fricking hilarious. I wish I had a mod point for you. Clearly we've experienced some of the same pain in the past.
a brand new flavor of kool-aid!
Even if he were right, maybe it'll be more like one of Stanislaw Lem's stories: we turn on the giant, near-omniscient machine, and it just goes silent and won't talk. So we build a slightly lesser machine to try to communicate with it.
Or maybe it'll just turn out to have been a "bad idea"(tm).
Thank you. That says it very well.
Maybe he means "suffering" in a much more abstract sense, as in something more like "cognitive dissonance", but that wasn't the impression I took from the post.
There's the idea of "productive struggle" in learning, but that's more along the lines of growth from being challenged, as you said.
On the other side of the coin, complete lack of suffering, or even privilege, tends to lead to expectation, whereas adversity tends to lead to adaptation. But trying to find the adaptable people by subjecting everyone to suffering would be sick and cruel.
Suffering does not, apparently, necessarily lead to empathy.
Sorry to self-reply here, but for clarification: I meant "essential complexity" where I wrote "intrinsic complexity" but you probably knew what I meant if you've read the paper.
Addressing accidental complexity in a definitive manner would be, I admit, a big deal.
Also, for the busy who never read the paper, the Wikipedia summary is decent: https://en.wikipedia.org/wiki/...
Dear everyone (almost): please calm down.
Are you calm? Good.
Now please (re)read "No Silver Bullet" by Fred Brooks. Done? Good.
Now explain to me how AI is addressing intrinsic complexity and not just accidental complexity. I'll wait.
These things are not good at (real) math and not good at (real) reasoning. If you build something out of statistical patterns in large text corpora, you get something that sometimes appears to reason but is actually the world's most extensive stochastic parrot.
... coming from the "I can't believe we have to say this, but..." department.
"You are in a maze of twisty long chain molecules, all alike."
...will come from attempts to use this product for pornographic purposes, in accord with the immutable law that all new technologies are used first for porn, then for other things later.
This will give rise to rule 35: if you can think of it, there is porn of it in your dreams.
Center meeting at 4pm in 2C-543.