I've reverse-engineered a lot of code from stripped object back to source. It got harder to do manually with some of the odd control flows RISC processors use and with progressively more aggressive optimizing compilers, but it's hardly impossible. And there are fine tools to support it now.
Once you get to uncommented source for something where you roughly understand the program's function, it's usually pretty easy to figure out what the author intended. Then you can comment it.
The fun part is finding errors. (I recall one where I was reverse-engineering a Unix driver and identified a place where the programmer had written (approximately) "if (a=b)" when he meant "if (a==b)". It was doubly fun to feed this back to a guy in the OS group - especially when I walked him through the code to the statement and he asked about a nearby assertion which had been conditionally not-compiled into the object that I was working from. He hadn't really internalized that I'd decompiled to source until I pointed out that I couldn't see the assertion. B-) )
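The bug pattern is easy to demonstrate with a toy C function (the names here are invented for illustration, not from the driver in question):

```c
#include <assert.h>

/* The classic assignment-instead-of-comparison bug: the condition
 * assigns b to a, so the "test" is just whether b is nonzero. The
 * extra parentheses silence the compiler warning that would normally
 * flag exactly this mistake. */
static int buggy_equal(int a, int b) {
    if ((a = b))      /* BUG: should be a == b */
        return 1;
    return 0;
}

static int fixed_equal(int a, int b) {
    if (a == b)       /* correct comparison */
        return 1;
    return 0;
}
```

With unequal, nonzero arguments the two versions disagree: `buggy_equal(1, 2)` reports "equal" simply because 2 is truthy.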
Wow. I think I may have said something that could possibly be taken out of context in my previous reply. Having been unaware of the origins and meaning of that particular southern saying, which I just recently became educated about... I would like to retract my statement.
You see, I thought you were referring to Sin you make at home... when, you know, nobody else is around. Just yourself.
Umm yeah..... I think I am going to look up sayings before I reply to posts now....
Mininova doesn't have its own tracker... i.e. anything you get from Mininova actually comes from somewhere else, most often TPB.
You may well struggle to get any TCP-based communication, as this comment from
/usr/src/linux/net/ipv4/tcp_timer.c shows:
* we do not increase the rtt estimate. rto is initialized
* from rtt, but increases here. Jacobson (SIGCOMM 88) suggests
* that doubling rto each time is the least we can get away with.
* In KA9Q, Karn uses this for the first few times, and then
* goes to quadratic. netBSD doubles, but only goes up to *64,
* and clamps at 1 to 64 sec afterwards. Note that 120 sec is
* defined in the protocol as the maximum possible RTT. I guess
* we'll have to use something other than TCP to talk to the
* University of Mars.
*
* PAWS allows us longer timeouts and large windows, so once
* implemented ftp to mars will work nicely. We will have to fix
* the 120 second clamps though!
*/
(I know this has been around for donkey's years, but I just checked on Slackware 13 and it's still in there.)
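The doubling-with-clamp behaviour the comment describes can be sketched in a few lines; the constants below mirror the quoted text (clamp at the protocol's 120-second maximum RTT), not the kernel's actual implementation:

```c
#include <assert.h>

/* Toy sketch of exponential RTO backoff with a clamp, as described in
 * the quoted tcp_timer.c comment. Real kernels work in jiffies and
 * track much more state; this just shows the arithmetic. */
#define RTO_MAX_SECS 120   /* protocol-defined maximum possible RTT */

static int next_rto(int rto_secs) {
    int next = rto_secs * 2;                          /* double on each retransmit */
    return next > RTO_MAX_SECS ? RTO_MAX_SECS : next; /* clamp at 120 s */
}
```

Starting from a 1-second RTO, successive timeouts run 2, 4, 8, ... until they hit the 120-second ceiling, which is exactly why a link to the University of Mars (one-way light time well over 120 s) would never get an ACK back inside the clamp.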
"Common sense" is nothing more than the mass of background assumptions you picked up by osmosis while growing up and never bothered to challenge. Lots of it is perfectly logical, for obvious reasons, but other parts are nothing more than oft-repeated lies.
They just need to think. That's what they study for (ideally). Thinking people with open minds can tackle anything, including the "scale of the internet".
When I was in high school, I used a slide rule. When I entered university, I got me a calculator. Did my maths or problem-solving abilities change or improve because of the calculator? No. Students today can jolly well learn about networking on small LANs, or learn to manage small datasets on aging university computers; so long as what they learn is good, they'll be able to transpose their knowledge to a vaster scale, or invent the next Big Thing. I don't see the problem.
The article isn't loading for me, but: can't they simply measure the amount of CPU used during the benchmark and factor that into the result? I don't think that kind of offloading is inherently evil (except in this case, where 3DMark's rules forbid using empirical data about it to optimize performance; then again, I'd imagine many other pieces of software get this treatment without any bad effect on quality or the gaming experience). But dynamically detecting the situation would definitely be complicated, and it might sometimes give the wrong answer.
One pretty useful heuristic for this kind of optimization, however, would be: "is CPU usage already high without offloading GPU work to the CPU? If so, don't do it." Hey, maybe the drivers could have a 'profiling' mode, which would perhaps slow performance but figure out the optimal parameters for running the program.
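That heuristic amounts to a one-line decision function. The sketch below is purely hypothetical driver logic; the threshold and names are invented for illustration, not taken from any real driver:

```c
#include <assert.h>

/* Hypothetical offload heuristic: only move GPU work onto the CPU when
 * the CPU has headroom. The 80% threshold is an invented example value;
 * a real driver's profiling mode might tune it per application. */
#define CPU_BUSY_THRESHOLD 0.80

static int should_offload_to_cpu(double cpu_busy_fraction) {
    return cpu_busy_fraction < CPU_BUSY_THRESHOLD;
}
```

A game already saturating the CPU (say, 90% busy) would skip the offload, while a GPU-bound benchmark with an idle CPU would take it.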
Pee Wee Herman... is that you?
A poke can lack a focus as well. If you don't deliberately try and poke anyone, you simply end up poking your own belly and giggling.
Whatever you call it, the causes are disturbingly familiar. The Great Depression, the S&L banking crisis, and many others were caused by artificial bubbles in the banking system, just like the ones you noted in your post. The only difference is that today's bubbles are often hailed as a new economic paradigm in which prosperity never seen before will reign; but it's just as I said: they're a bubble, nothing more.
Oblig Red Dwarf:
Lister: What d'ya think of Betty?
Cat: Betty Rubble? Well, I would go with Betty... but I'd be thinking of Wilma.
Lister: This is crazy. Why are we talking about going to bed with Wilma Flintstone?
Cat: You're right. We're nuts. This is an insane conversation.
Lister: She'll never leave Fred, and we know it.
This has everything to do with business planning: funding and the timely rollout of redundancy and backup systems (including staff). The tech, tools, good IT staff hires, procedures, and strategies are out there and are pretty well known. I don't have to know about their infrastructure or staffing to tell you that they didn't invest in the infrastructure and/or the training for staff to prevent exactly this sort of disaster.
I can think of three possible reasons why they didn't have such an infrastructure in place:
1) When M$ bought Danger, they scaled back to maximize profits
2) The risk/cost analysis (odds of failure, cost, and cost to prevent) of such a disaster was out of date, incorrect, or simply accepted
3) Funding was available but rollout of redundancy/backup was taking longer than expected
I doubt #3, since Danger has been around for a while and their customer base probably isn't growing exceedingly quickly. #2 is mildly possible, IMHO, if they simply accepted the risk; but a class action after a disaster like this has to be really expensive, unless they think they can dodge the legal obligation of backing up user data. #1 is fairly typical of takeovers, especially if profit is more important than safeguarding user data.