Once a subject goes over the mock horizon, it gets pretty hard to distinguish entire discussion threads from line noise.
Slashdot is one of the few places that routinely publishes "summaries" that are 100% content-free. I always marvel at how they do it: you'd think a stray bit of info would find its way into the summary by chance once in a while, but that doesn't seem to be the case here.
It wasn't always like this. Slashdot seems to wield a universal bike-shed field, only instead of everything tasting like chicken, everything tastes like bike shed. The useless summary is the universal chicken sauce of click-to-view.
Isn't it just perfect to compare the leading top coder to the world's most recognizable figure from team sports?
He first began freaking people out in second grade, at age 8, when he took second place in a major Belarusian coding competition.
So how about Nadia Comaneci?
Comaneci came in 13th in her first Romanian National Championships in 1969, at the age of just 8.
Well, if we eliminate Nadia (either because we can't properly spell her surname on Slashdot, or because none of the 8-digit UIDs know who the fuck she is) then who are we left with, from an individual sport?
I don't think Tiger was ever accused of being perma-virgin material (ditto for Nadal). Pancho Gonzales seems a bit too troubled, but (despite being an elite athlete) he did share the tour's general disregard for healthy living:
Pancho had no idea how to live or take care of himself. He was a hamburger-and-hot-dog guy to start with and had no concept of diet in training... On the court Gorgo would swig Cokes through a match... Also Gorgo was a pretty heavy cigarette smoker. He had terrible sleeping habits made even worse by the reality of a tour.
So I'm going to have to go with Rod Laver, the most impressive specimen most people who use the internet have barely heard of.
Laver was very quick and had a strong left forearm.
(I tried to add the 'c' onto 'lick' but
I was thinking I might read this book. Then I looked up the authors (you left out National Post columnist Andrew Coyne). I still might read this book, though through a freshly Windexed critical lens.
I only had to read a few of his pieces on supply management (which I know something about) to discover that Coyne has a few things clear in his head.
Basically, he's a class act with the framing effect.
I won't bore people with the gospel according to Daniel Kahneman. Instead we'll ignore the eminent literature and just cut to the chase.
Here's how it works in practice. You start talking about "the consumer" (embedded in hot-button phrases such as "if politician X really cared about the consumer"—magic tricks always work best with a flourish of misdirection) and everyone automatically puts on their "good consumer" face, which, for carnivores, is bringing home the bacon at the best possible price. Seriously, no-one wants to be left off Santa Claus's "good consumer" list. So it's immediately clear that Canadian consumers want American prices, right?
How about we start the conversation differently?
Who here kicks their dog? Who here would use an electric cattle prod to cut another $0.02 off the price of sirloin steak? This time the reaction is a little different—no-one wants to make Santa's permanent record under "cruelty to animals".
So where's the conflict? The conflict here is that these are the same fucking people.
Call them a consumer, and they want a low price. Mention the dog beater down the street, and suddenly they give a shit about animal welfare, even if it hits them in the pocketbook (to a degree).
The Canadian system is pretty much the worst system for achieving the lowest possible price. The American system is pretty much the worst system for achieving animal welfare and certain other controls over the quality of the food supply. (Mention listeria or ebola and you'll quickly discover that all the same people want to make yet a third Santa Claus list—just so long as we're on whatever list Santa is presently examining, it's all good).
The American system isn't even a "free" market by how the average person imagines any kind of "free" thing anywhere actually works.
Before anyone dumps me on their mental list of the moment, I found the following equally interesting:
It's a complex world out there. Even Harper deserves a critic with two eyes.
so the question is, how can we get computers to know which branches are ok to prune, and which aren't?
Sigh. The reason that human experts are no longer competitive is that human experts prune where Deep Ply fears to trust static analysis. Pitted against a relentless algorithm which resists intuitive pruning, grand-master human pruning leaks a full pawn or two per game.
It's damn amazing how well grand-master level pruning actually works, but don't mistake this for flawless chess. Beautiful? Maybe. Flawless? Not even close.
When it was still somewhat competitive between man and machine, the human chess players would think they were pressing an overwhelming advantage, only to discover themselves mired in tiny, unanticipated tactical disadvantages move after move after move after move. "The damn thing keeps finding these fiddling resources!" If you weren't careful, you could easily lose from what had initially appeared to be a won position (and it probably would have been, against a human opponent blind to all those fiddling resources).
The trick for the competitive chess programmer was to achieve the right balance in the static evaluator so that tangible material gains didn't consistently outweigh the less tangible advantages of tempo. Matthew Lai in his paper does not seem to grasp this essential trajectory of computer chess. He seems to think it's remarkable that his Oldsmobile displays more rigidity on the impact sled than the lunar lander, when it's pretty clear to everyone else involved that no Oldsmobile ever made was going to win the space race. The ply-based chess engines had their static evaluators hand-tuned by experts over many decades within a space-program clock-cycle budget.
Until he actually defeats all these programs on existing commodity hardware at existing tournament time controls, he's comparing watermelons to kiln-dried coconut flakes.
It's the same problem with new technology. It isn't enough to merely be better in some personally favoured dimension of merit. Your immature new thing has to be better enough to actually pass the mature old thing on its own terms.
Got a better substrate than silicon? Yeah? What's your defect density cranking out 10,000 wafers per month? Oh, you haven't actually developed all that quality-control infrastructure yet, but you figure you can do it at half the price once you work out the final kink from your strained fullerene crystal lattice?
Awesome progress, pal, but I think I'll invest my own Bitcoin elsewhere.
For the record, I've long believed that the trade-off moving from depth to sophistication wouldn't prove particularly steep (for the right sophistication). But any gradient that's a net loss (no matter how small) provides pretty much no immediate competitive incentive for anyone to invest any real effort hoeing that row.
The great thing about neural networks is that they don't actually require much real effort. The machine itself does most of the work in 72 hours. And then what have you got? A RISC chip that never actually kills x86 (because those idiots were busy touting microcosmic instruction set efficiency long after the real game had shifted to streamlining the cache hierarchy, where there's no low-hanging ideological shortcut to help you overcome the first-mover fat-payroll advantage).
I have seen something else under the sun: The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but sunk cost and legacy happen to them all.
This is inordinately difficult, and yet it represents a gap of at most a few SQ points.
Thanks for embedding a bright red hand print on my forehead. You do know that the difference between ice and water is only a few degrees Celsius? We've barely established that cetaceans even have an oral culture with anything in common with pre-historical human oral culture.
For all we know, the phase change to a symbolic written culture just might be the largest singular catastrophe in the standard SQ sequence.
Why why why does this field attract so many extrapolation retards?
How is that practical if the spouse works on the other side of town?
Until we get all the way to xaria law (sharia law for Christians) staying with your current spouse employed on the wrong side of town also counts as a personal preference.
So many things can be fixed once we complete the sharing economy transition to Uber Madison.
but do not address possible unsuitable uses, such as for the purposes of employment assessment or insurance premiums
When the day comes that such a thing is invented by sociologists there will surely be a scope-creep coda to the tune of "more research needed" within the vast sphere of human malfeasance.
Just what we need: a technological literature brimming with amateur hand-wringing and armchair ethics. I'd just love to read what Shockley might have written about his invention in the last paragraph of the last page if given a greenish-yellow editorial light to paint the future.
While we're at it, how about some moral footnotes from Fritz Haber?
On 2 May 1915, following an argument with Haber, Clara Immerwahr committed suicide in their garden by shooting herself in the heart with his service revolver.
A sad end, but a fine act of ethical commentary by the first woman to be awarded a doctorate in chemistry in Germany. To think what we might have learned if only she'd been wearing a mood bracelet.
Usually when presented with this information
"Information" is an awfully big word to apply to your chosen narrative tactic.
Rule 34a: if there's a thing, there's straw of the thing.
This can be broadly demonstrated with just two words: straw manginas.
I once spent a day hacking on J. Never warmed up to the ASCII replacements of the original APL character set.
In university, long ago, they had a mandatory course for English majors that used SNOBOL. My willingness to help out with SNOBOL programming got me more attention from girls than anything else I did there.
On another note, I wouldn't want to be the person tasked with proving the Turing completeness of DSSSL. It might not be hard (one way or the other), but I just wouldn't want to have to do it.
The FreeBSD Project has a problem harboring unrepentant douche bags like Kip Macy, and also Randi Harper.
You do know that there is such a thing as false conviction, and the standard of "repentance or permanent ostracization"—remaining in glorious effect long after punishment by the state has run its course—effectively demands that the wrongfully convicted confess to crimes they never committed, in order to have any hope of returning to productive society ever again?
In general (absent subsequent evidence), we don't actually know who are the wrongfully convicted, or we wouldn't have convicted them in the first place.
Sometimes (for a value of "sometimes" with no fixed address) the rush to judgment really sucks ass. That ought to give you at least a moment's pause before posting this kind of sentiment as an anonymous coward. It's why we allow the state to assign punishment rather than throwing blemished produce at the town pillory (e.g. a perfectly edible cucumber that's not quite straight, or harbours somewhere a small scab).
Sure, he sounds like a royal douche. But is it really my job to see that he suffers forever after on nothing but a thin gruel of second-hand storytelling?
Has it never occurred to you that there's a downside to your unthoughtful bitterness?
"Promising" barely scrapes the surface of what's involved here.
Battle Story Passchendaele 1917
Another push toward Passchendaele brings promising results: the Canadians reach the outskirts of Passchendaele, and take strongpoints such as Vienna Cottage, Snipe Hall, Duck Lodge and Vapour Farm.
And, no, I did not make those "strong points" up.
It was due to the bravery of Major George Pearkes and his battalion (5th Canadian Mounted Rifles) that these strongholds were captured and secured. This was one of the bravest small-group actions and ensured the success of the attack on October 30. Major Pearkes was awarded a VC for his leadership.
I'm imagining a member of the British upper crust sitting in his warm, fireside chair peering eagerly into Galadriel's water mirror (circa 1913) to soak up this promising tidbit about the looming war, while someone in the next room hums "onward fusion soldiers".
No, a technology does not become promising merely because a singularly large obstacle looks a little smaller today than it did yesterday.
That's just pride fuckin' with you.
But then one day the neural net has a "senior moment" and drives the car off a cliff.
It's actually your geek pride that just plunged to astounding depths.
Computers don't beat humans at chess by playing human chess better than humans. They beat humans by having a deeper view of the combinations and permutations and by making very few mistakes.
A momentary "senior moment" in a self-driving car (I wish I could have rendered that in priapismic scare quotes, but Slashdot defeats me) would just as likely be followed by a Mario Andretti moment 100 ms later as it recomputes several of the box-within-box outer safety profiles ab initio with fresh camera and sensor data. It's so unlike a senior moment as to make my jaw drop (unless you count those senior moments in Quake 3 where you could momentarily see through a solid wall if your POV landed on just the right surface boundary).
You had the whole time you were writing that paragraph to reverse out a bad rhetorical gambit, and never bothered.
What's next in the self-driving car? Liver spots? Bladder failure?
The candle that burns twice as bright burns half as long.
You're safe then. If your candle was burning twice as bright, you might have factored colour temperature into the equation, or you might have said that the candle that burns twice as bright burns green, or something interesting like that (though it appears that the candles that burn half as bright also burn green.)
C is a trivially simple language
Back in the eighties when I was primarily a C programmer, I spent years mastering the art of writing portable C code. Our main application was required to compile under both the Microsoft and the Watcom compiler, and under the Watcom compiler we targeted both MSDOS and QNX. This was a royal PITA at times. The worst case I recall is that Microsoft had a bug in their type deduction logic for expressions that mixed signed and unsigned values. In actual fact, the Microsoft code generator used the correct rules, but the Microsoft diagnostic routine in the parser did not, causing it to issue "type conversion" warnings opposite to its own internal behaviour. Just imagine how that gave us a bad case of group-consciousness head spin until we tracked down the underlying cause.
It's terribly hard in C to defend yourself against certain kinds of accidental errors, which is one of my original reasons for moving to C++. My well-developed C programming subset (oh yes, I had a subset) was even more robust in C++. For example, in modern C++ there's much less justification for writing complex expressions using #define. Modern C++ programmers largely restrict the use of the C++ preprocessor to implementing a Turing-complete language at compile time.
Actually, I lied. That harmless looking C preprocessor from the dusty depths of time is but a C-hair short of being Turing complete at compile time. The smallest fiddle in the specification of token pasting might get you there.
Concerning underhandedness, the Karen Pease PIU winner would not survive having __isleap() recoded from a macro to a C++ inline function. Many of the other examples abuse the #define mechanism for encoding object lengths, rather than having the objects maintain their own lengths, such as any STL container does.
What you can foist on the unwary if you're off-scale malicious in C++ is off-scale high (it is, after all, a superset of C itself).
On the other end of the scale, if you use C++ abstractions to do good rather than evil, the never-ending refinement of the C++ language takes you to a better place, not a worse place.
I'm not overly enamoured of Great Man theory, and likewise I'm not greatly enamoured of sanitary-conception language design, in which all the sins of the past are taken behind the woodshed and put straight en masse.
Co-existence with our dirty origins is a simple fact of human biology. It isn't true that every complexity of human evolution is automatically a turn for the worse (as you seem to imply about accrued complexity in programming language design).
The truth of the matter is that C++ used wisely can be a clean and empowering programming language, for those of us able and willing to pay the price of admission.
Whether it's reasonable to pay that price given the many other choices available now is another question. In my case, I had already paid half the price in my first professional decade as a C programmer, after stripping away the illusion that C is a simple language.
I'm pretty much agnostic at this point about whether an ambitious young programmer should bother learning C++ or not, unless it happens that C++ is the only vehicle that will take you where you want to go (high abstraction level co-existing with raw hardware performance).
Too many people sit there in a state of contempt fundamentally saying "if C++ is the only viable solution, then I want a simpler problem to solve!"
Well, go to it. Fill your boots. But don't sit there and sneer at the brave souls who make the opposite choice.
It's not hard to admit errors that are [only] cosmetically wrong. -- J.K. Galbraith