It works out to $2424 per school, roughly. Might add one faculty member per school system, at least for larger systems.
Slashdot videos: Now with more Slashdot!
Persia? I could see it going either way...
I'm positive the triple existed in 1984 (the last year I lived in a particular city, so I remember it well). It was probably on the menu around 1981 or so - the year that Wendy's came to Our Fair City. The triple was considered obscene, and hence became the traditional end-of-season meal for high school wrestlers.
I've been running Hadoop on a 400-node Ethernet cluster for a couple of years now, and Spark for a few months. I'll give Spark points for speed - as long as your problem fits in RAM, it screams. They have their problems, certainly. Hadoop's dependence on Java and Spark's dependence on Scala... seriously, Java for HPC? WTF? If you're running on anything but x86 Linux you need your head examined. C and Fortran, folks.
You're absolutely right - Hadoop needs the right kind of job. It needs a problem where processing is per-record, with no dependencies on any other record. That eliminates a lot of interesting problems right there. It needs colossal logical block sizes, both to keep the network and drives saturated and to keep from bottlenecking on the HDFS namenode. This strongly suggests a small number of utterly huge files - maybe a hundred 100G files. These problems are, commercially, rare. I'm doing genomics-related things, and my 3 to 60 gig files (about 3TB total) are probably not big enough.
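That per-record, no-cross-record-dependency constraint is just the map/reduce shape. Here's a toy sketch in plain Python (no Hadoop API involved, and the function names are mine) - a genomics-flavored base count where each record is processed in isolation and the partial results merge in any order:

```python
from collections import Counter
from functools import reduce

# Each record is processed independently -- no record ever sees another.
def map_record(record: str) -> Counter:
    """Count base frequencies in one sequence line (toy example)."""
    return Counter(record.strip().upper())

# The reduce step is associative and commutative, so partial results
# from any subset of nodes can be merged in any order -- which is
# exactly what lets Hadoop shrug off stragglers and re-runs.
def combine(a: Counter, b: Counter) -> Counter:
    return a + b

records = ["ACGTACGT", "GGCC", "ATAT"]
total = reduce(combine, map(map_record, records), Counter())
```

The moment one record's answer depends on another record (alignment, sorting, anything with global state), this shape breaks, and that's the "lot of interesting problems" that get eliminated.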
Spark is pretty clever. As long as your problem fits in RAM.
As far as "conventional" cluster programming, I think a good MPI programmer is about as hard to hire as a Scala programmer. MPI looks easy until you get into the corner cases, as I'm sure you've experienced yourself. Trying to do scatter/gather in an environment where worker nodes can vanish without warning is basically a whole lot of not fun. Then there's InfiniBand. InfiniBand FDR is kind of... touchy. If you order a hundred cables, you'll get 98 good ones, and 2 will fail intermittently. It'd be nice if the vendor would label which two were bad, but somehow they don't do this. It was bad enough that Mellanox blamed an earnings miss on bad cables. Maybe they've overcome that? Probably. Maybe. I'll give Hadoop points for working around dead machines and crippled networks.
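To make the scatter/gather pain concrete, here's a toy sketch in plain Python (not mpi4py; all names are mine) of what "recovery" looks like when a worker dies mid-job. In real MPI a dead rank usually takes the whole communicator down with it, which is exactly why this is a whole lot of not fun:

```python
def scatter(data, n_workers):
    """Split data into roughly equal chunks, one per worker rank."""
    k, m = divmod(len(data), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < m else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def flaky_worker(chunk, fail=False):
    """Stand-in for a remote rank that may vanish mid-job."""
    if fail:
        raise ConnectionError("node vanished")
    return sum(x * x for x in chunk)

def gather_with_retry(chunks, failed_ranks):
    """Naive recovery: re-run any chunk whose worker disappeared.
    Stock MPI_Gather gives you no such luxury."""
    results = []
    for rank, chunk in enumerate(chunks):
        try:
            results.append(flaky_worker(chunk, fail=(rank in failed_ranks)))
        except ConnectionError:
            results.append(flaky_worker(chunk))  # retry on a healthy node
    return sum(results)

data = list(range(10))
total = gather_with_retry(scatter(data, 4), failed_ranks={2})
```

Hadoop bakes this retry-the-failed-chunk logic into the framework; with MPI you get to build it yourself, every time.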
You know, I've wanted to try sector and sphere, but somehow never gotten around to it.
In market-based economies, pricing of goods depends on fixed and marginal costs. Perfectly competitive goods (i.e., totally equivalent, completely interchangeable with each other) cannot be priced above the marginal cost of producing another unit (in the long run, at least). Generating pricing power requires differentiation.
Software that is a commodity cannot be priced above its marginal cost. The marginal cost of another OpenSSL download is about zilch. If there were an efficient market able to make micropayments, market balance could be restored. As it is now, it's a hobby activity for individuals and a cost of doing business for large companies.
I would argue that editors, OS kernels, and compilers are, at this point, commodities. Obviously commercial offerings are differentiated just enough to generate some pricing power, and that suggests that Open Source offerings at least theoretically could (dual open/commercial licenses, like Qt in the past), but I would argue this is a temporary market inefficiency.
Incidentally, the classic way to make money giving away software was to then sell the consulting services around it.
Not that different from doctors (MDs), actually. Their basic is two weeks. Makes sense when you figure their basic probably costs $2-3 per minute.
I'm still trying to work out why they're paying GlobalFoundries to take the plants. The pension argument doesn't make sense - IBM switched from "defined benefit" to "defined contribution" about ten years ago, so they can walk away on a whim now. The only factors I can think of are:
1) IBM received a decent subsidy ($600M) from the feds to run a "trusted semiconductor foundry" line, on US soil (google it - not a secret). The government does this in several markets and industries just to make sure they prop up at least one US supplier - they used to pay Micron to make RAM in the US (and may still). Seemed at the time like they wanted to support "one architecture in addition to x86", which would of course be POWER. So, would a shutdown have triggered a repayment clause?
2) Or... semiconductor manufacturing is a nasty business - literally. Maybe it's cheaper to pay someone to take it than it is to clean up all the, say, arsenic that various processes use a lot of. Still, I would think that just sealing the doors with concrete and walking away would be pretty cheap, too.
I've got a version 3 Das Keyboard. Loud, obnoxious, feels great. My co-workers have basically just decided to accept it.
I posted, so can someone else mod the parent up? tnx.
There are urban airshed models that do exactly this for air quality studies, and plume analysis models for hazmat, but I'm not aware of weather forecasting at the block-by-block level. Right off the cuff, I would suspect that albedo is at least as important - at human building scales, the Reynolds number is going to be pretty high. At that point, it looks more like computational fluid dynamics and less like weather - hence airshed modeling and plume analysis.
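A quick back-of-envelope on that Reynolds number claim (the numbers here are my own illustrative picks - a light breeze past a building-sized obstacle, with the standard kinematic viscosity of air at roomish temperature):

```python
def reynolds_number(velocity_m_s: float, length_m: float,
                    kinematic_viscosity_m2_s: float = 1.5e-5) -> float:
    """Re = v * L / nu. Default nu is air at roughly 20 C."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

# A 5 m/s breeze around a 10 m building:
re = reynolds_number(5.0, 10.0)
```

That lands in the millions - turbulent by a huge margin, so you're solving a bluff-body CFD problem, not running a weather parameterization.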
I'm a computer engineer, not a meteorologist, but I've worked with them off and on for about eight years now. One of the most common models for research use is the Weather Research and Forecasting Model (WRF, pronounced like the dude from ST:TNG). There are several versions in use, so caveats are in order, but in general WRF can produce really good results on a 1.6 km grid for 48 hours into the future. I was given the impression that coarser grids are the route to happiness for longer-period forecasts.
WRF will accept about as much or as little of an initializer as you want to give it. Between NEXRAD radar observations, ground met stations all over the place, two hundred or so balloon launches per day, satellite water vapor estimates, and a cooperative agreement with airlines to download in-flight met conditions (after landing, natch), there are gobs of data available.
The National Weather Service wants to run new models side-by-side with older models and then back check the daylights out of them, so we can expect the regular forecast products to improve dramatically over the next (very) few years.
I wish I had mod points right now.
CS adjuncts are, additionally, looking for bright people to hire.
...stays in Vegas.
Nokia has completely shifted gears before - they used to make forestry equipment at one point (early 70s?), which indirectly led to their making VHF radios with telephone interfaces for use out in the boondocks, which led to cellphones for them.
The VHF "portable phones" from the late 80s, by the way, can be hacked into becoming 2 meter (144 MHz) ham radios. Have fun...
On an ASUS Transformer, the keyboard is where most of the value is, along with the oh-so-strange fifteen (15) volt charger. Sell the keyboard and charger, grind the tablet to powder. It's the only way to be sure.