Improved warning time leads to better preparedness, which in turn leads to less costly aftermaths. Well, at least in sane societies.
The Euro model by itself was more accurate than the multi-model forecast run by NWS, which in turn was more accurate than raw GFS. IIRC, the Euro model predicted the Sandy landfall about 320 km off, the NWS multi-model analysis was about 1500 km off, and raw GFS said it wouldn't hit land at all, going WAAAAY east.
The NWS multi-model forecast predicting landfall only came out a few days before it hit, while the Euro model predicted it more than a week in advance. The US Navy multi-model forecast was also ahead of the NWS one, so all the big ships that could make it left port, and all ships that couldn't were set up at storm anchor (as were several US Coast Guard ships).
It's not just once. Several hurricanes and other severe weather systems have been most accurately predicted by the European model. In fact, if you read some of the links in the article, you'll see references to that.
Portal 2 runs on an engine that's effectively 12 years old by now, with just some updates. It's far more CPU dependent than more modern engines, for example.
Same thing with Left 4 Dead 2, the benchmark of which Valve rigged by using a 1½-year-newer update for the Linux version than what's available for the Windows version, an update that actually shifts more work to the GPU, for example.
OK, so the SGI O2's UMA has now been reinvented for a new generation, just with more words tacked on....
They already have that. The game segregates everything into star systems and stations. That's easy to parallelize, and under normal loads a server handles several star systems.
Every star system, and the stations in it, is limited to one thread, so its load cannot be spread over multiple cores. The activities that run independently of star systems are things such as Market, Contracts, Chat, etc. Nor can load be dynamically reallocated: moving a system to a reinforced node can only be done during a restart, with the reinforced system started on a separate node with a beefier processor (and even then it can only use a single core...).
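To make the limitation concrete, here's a minimal sketch of "one thread per star system" sharding in plain Python. This is purely illustrative, not CCP's actual code (EVE runs on Stackless Python; `SystemShard` and its methods are invented names): every event for a system goes through a single serial worker, so one overloaded system cannot borrow a second core no matter how many sit idle.

```python
import queue
import threading

class SystemShard:
    """One star system = one worker thread; all simulation is strictly serial."""

    def __init__(self, name):
        self.name = name
        self.events = queue.Queue()
        self.processed = []
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            event = self.events.get()
            if event is None:              # shutdown sentinel
                break
            # All combat, docking, market ticks etc. would happen here,
            # one event at a time -- this loop is the whole core budget.
            self.processed.append(event)

    def submit(self, event):
        self.events.put(event)

    def stop(self):
        self.events.put(None)
        self.worker.join()
```

Events submitted to a shard come back out in strict submission order, which is exactly why a 2000-player fight in one system saturates one core while the rest of the box idles.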
It's a huge change in the architecture. As I understand it from about a year ago, they cut out a lot of the player interactions in heavy battle conditions, introduced time dilation (which is a remarkable innovation, I might add), and pumped up the servers that are intended to handle battles. But you still have a large number of interactions between players. If you hide some players from each other, then you can end up with situations like battlefield commanders unable to target critical ships because they can't see them (focus fire, or everyone shooting the same target, is a key tactic). Ships can have many drones apiece (5 for normal ships, 10-20 for "carriers" and "supercarriers"). And of course, someone will want to see the pretty explosions and pew pews. That means your multiserver infrastructure will have considerably more interaction than the current Eve cluster experiences. And if you're going to bother with that, why not just have a really beefy server with multiple CPUs that has the necessary communication network built in?
It's a major change, for sure, but it doesn't have to come at a great cost, since it'd also increase reliability and fault tolerance if done right. Time dilation is a band-aid, nothing more. There was actually a period around Dominion when TiDi didn't exist, but there were larger battles with fewer delays than now (think BoB's MAX and MAX 2 campaigns).
As for the interactions, I'm well aware of how many there are, having played since 2004, and they haven't really reduced the combat interactions. What they did was reprioritize non-combat actions (swapping ships in station, for example) and relax the strictly sequential combat processing.
As for multiserver, EVE already runs on a cluster. The change I'm proposing would enable a star system to use multiple cores if certain requirements are met, and let low-priority systems migrate to other nodes without requiring a node restart. That would, however, require them to upgrade from Stackless Python to something that would actually make sense, as well as performing proper software and system engineering.
The point is to make a system that can be dynamically reconfigured for increased performance on the fly. Currently, nodes need to be put into reinforced mode manually, which requires a reinit, and thus is generally only done during downtime.
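The reconfiguration-on-the-fly idea boils down to this: freeze a system's state, rehydrate it on the target node, and flip the routing table, with no reinit anywhere. A minimal Python sketch (the `Node`/`migrate` names are my own, not EVE's architecture):

```python
class Node:
    """A hypothetical cluster node hosting some star systems' state."""

    def __init__(self, name):
        self.name = name
        self.systems = {}          # system name -> opaque state dict

    def host(self, system, state):
        self.systems[system] = state

def migrate(system, routing, source, target):
    """Move one system between nodes without restarting either node."""
    state = source.systems.pop(system)   # freeze and detach the state
    target.host(system, state)           # rehydrate on the beefier node
    routing[system] = target             # flip routing last; no reinit needed
```

The point is that only the one system being moved pauses briefly; every other system on both nodes keeps running, instead of today's "reboot the neighbors onto other nodes" approach.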
At great cost? Why would it have to be at great cost? It could even save them money in the long run. Also, if done Erlang-style, they'd also get better reliability/fault tolerance.
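By "Erlang-style" I mean the let-it-crash supervision model: workers are allowed to die, and a supervisor restarts them instead of the whole node going down. A toy sketch of the idea in Python (not Erlang/OTP itself, and obviously not CCP code):

```python
def supervise(make_worker, max_restarts=3):
    """Run make_worker(); on a crash, restart it up to max_restarts times.

    This is the core of the Erlang supervision idea: a crashed worker
    (e.g. one star system's process) is restarted in isolation rather
    than taking the rest of the node down with it.
    """
    restarts = 0
    while True:
        try:
            return make_worker()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise          # escalate: let a higher-level supervisor decide
```

In a real OTP-style tree the supervisor would also manage restart strategies (one-for-one, one-for-all), but even this much buys you fault isolation per system.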
Instead, CCP are wasting a lot of CPU cycles, pissing about and bragging about being "Agile", instead of doing proper software engineering.
No, it doesn't. EVE is hard-designed to be one thread per system. The other systems just get less hardware resources. If the node is put into reinforced mode (which is done manually by CCP admins), the other systems are "rebooted" on other nodes for the duration, which can cause people to lose connection.
AMD is decent if you fit into some very specific memory access patterns... If you don't, they slow down to a crawl.
Nvidia with CUDA is far more versatile, and has MUCH more solid drivers (which don't need X under Linux, unlike AMD's...).
"I find many of the computer scientists studying e.g. AI (my old field), often become frustrated by the real world's refusal to comply with their theory. They tend to be theory first, data second. That's the hallmark of bad science."
I have similar experiences, and a colleague ran into it during his studies a couple of years back. During an algorithms lecture, the professor wrote up an algorithm and explained it, then finished with "And this is the best algorithm you can find for this task", whereupon my colleague remarked that he knew of at least three architectures where it'd either be dog slow or not work at all, because the hardware didn't support some features the algorithm relied upon (lots of fdivs). He was called in by the professor for a talk about his "attitude problem regarding computer science".
They compared a version of the engine for Linux that had optimizations that were not available in the Windows version they compared it to (and still haven't been released).
Ergo, it was a rigged comparison.
And that's supposed to be news?
Common practice for high-end/specialist freelancers here in northern Europe, at least.
I commonly work with one agent (who's also my lawyer), and sometimes with another agent in a slightly different field. In fact, if you get a trustworthy agent, it's one of the best ways to sort out the "grinders" (clients who try to pile more and more work onto a project), scammers and other undesirables.
In fact, those two agents and those of us who use their services have formed a guild of sorts: we blacklist bad clients, blacklist devs who damage the reputation of freelancers by being scammers or just failures, and help each other out in case of sickness or just the need for a vacation. Yet we still compete with each other in bids for projects, so yes, it requires blacklisting the sociopaths who can't cooperate.
Might not work quite as well in the US though, US geeks seeming content with being exploited, and seeing banding together in mutual defense as anathema...
You don't need to use the mobile chips for that; there are even Xeons under 20 W, with ECC and virtualization support.
Given your stated views on OSS philosophy, you could also be one of those "easily led sheep", just in another spectrum, so be careful about what generalisations you toss around. People have different blind spots. Case in point: RMS. He's done very little that can be considered productive since the mid-'80s, yet many geeks blindly follow his preaching without further questioning.
Also, you're talking about the storm in retrospect: many people, even geeks here on Slashdot who should have enough physics knowledge and sense of scale, talked down the dangers of the storm with claims such as "It's just a category 1, the media is hyping it up as usual". Also consider that by the time the weather reports aggregating the Euro model with the normal North American models were spreading in the media, the ship was already out at sea, so before that, the crew had already been fed a lot of "It's just hype, it won't be so bad" etc.
Multiple factors went wrong, the crew being dazzled by the captain just being one of them.
The thing is, for that particular hurricane, even many USN ships, the ones not fast enough to outrun a hurricane that size, remained in port areas, anchored for the hurricane away from the docks. Hell, from what I read on the SA forums, even many USCG ships sheltered from the hurricane, anchoring up-river in the lee of hills where possible.
Other tall ship captains remained with their ships in port, and even warned the captain of The Bounty, but he set out anyway. The problem is, the captain ran a personality-cult crew selected on the basis of who was agreeable, and he WAS a thrill-seeker. Several experienced tall ship sailors refused to work with him. An interview was found in which he stated that "you chase hurricanes".
Another reason behind his departure may have been corporate pressure, wanting them down in St. Petersburg as early as possible for cost reasons.