
Comment: Wendy's Triple existed by 1984 (Score 1) 348

by erikscott (#48859843) Attached to: Regular Exercise Not Enough To Make Up For Sitting All Day

I'm positive the triple existed in 1984 (the last year I lived in a particular city, so I remember it well). It was probably on the menu around 1981 or so - the year Wendy's came to Our Fair City. The triple was considered obscene, and hence became the traditional end-of-season meal for high school wrestlers. :-)

Comment: Hadoop needs a fairly specialized problem (Score 5, Interesting) 34

by erikscott (#48803199) Attached to: Meet Flink, the Apache Software Foundation's Newest Top-Level Project

I've been running Hadoop on a 400 node ethernet cluster for a couple years now, and Spark for a few months. I'll give Spark points for speed - as long as your problem fits in RAM, it screams. They have their problems, certainly. Hadoop's dependence on Java and Spark's dependence on Scala... seriously, Java for HPC? WTF? If you're running on anything but x86 Linux you need your head examined. C and Fortran, folks.

You're absolutely right - Hadoop needs the right kind of job. It needs a problem where processing is per-record and has no dependencies on any other record. That eliminates a lot of interesting problems right there. It also needs colossal logical block sizes, both to keep the network and drives saturated and to keep from bottlenecking on the HDFS namenode. This strongly suggests a small number of utterly huge files - maybe a hundred 100G files. These problems are, commercially, rare. I'm doing genomics-related things, and my 3 to 60 gig files (about 3TB total) are probably not big enough.
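The per-record constraint can be shown with a toy sketch (plain Python, not Hadoop itself): because each record is processed independently and the merge step is associative and commutative, the records can be split across nodes in any order and the combined result is the same. All function names here are illustrative only.

```python
# Toy sketch of why per-record independence matters: each record can be
# mapped on any node, in any order, and the partial results combined
# afterwards. Not Hadoop - just the shape of the contract it imposes.
from collections import Counter
from functools import reduce

def map_record(record):
    """Per-record work: depends on this record alone, nothing else."""
    return Counter(record.split())

def combine(a, b):
    """Associative, commutative merge - arrival order doesn't matter."""
    return a + b

records = ["a b a", "b c", "a c c"]

# Any partitioning of `records` across workers yields the same answer:
total = reduce(combine, map(map_record, records), Counter())
```

The moment `map_record` needs to peek at a neighboring record, this decomposition breaks, which is exactly why so many interesting problems don't fit.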

Spark is pretty clever. As long as your problem fits in RAM. :-) Since you're writing code in Scala, you're (a) the only person who can be on call and (b) irreplaceable, so on balance that may not be so bad. Just depends.
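The "fits in RAM" distinction boils down to this (a purely illustrative plain-Python sketch, loosely analogous to a cached in-memory dataset versus a one-pass stream over records):

```python
# Toy illustration of the fits-in-RAM tradeoff. Purely illustrative -
# not Spark or Hadoop code.
records = range(100_000)

# Streaming: one record in memory at a time; cheap on RAM, but every
# additional pass has to re-read the data from the source.
streamed_total = sum(x * x for x in records)

# Materialized: the whole intermediate dataset lives in memory, so a
# second pass over it is nearly free - right up until it no longer fits.
squared = [x * x for x in records]
cached_total = sum(squared)
```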

As far as "conventional" cluster programming goes, I think a good MPI programmer is about as hard to hire as a Scala programmer. MPI looks easy until you get into the corner cases, as I'm sure you've experienced yourself. Trying to do scatter/gather in an environment where worker nodes can vanish without warning is basically a whole lot of not fun. Then there's InfiniBand. InfiniBand FDR is kind of... touchy. If you order a hundred cables, you'll get 98 good ones, and 2 will fail intermittently. It'd be nice if the vendor would label which two were bad, but somehow they don't do this. It was bad enough that Mellanox blamed an earnings miss on bad cables. Maybe they've overcome that? Probably. Maybe. I'll give Hadoop points for working around dead machines and crippled networks.
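The "worker nodes can vanish" headache can be sketched in miniature (plain Python threads standing in for MPI ranks - a toy analogue, not MPI, and all names are illustrative): the gather step has to decide what to do about a missing result instead of blocking forever, and every such policy complicates the code.

```python
# Toy analogue of scatter/gather where one worker "vanishes": the
# gather step times out on the lost worker and marks its slot instead
# of hanging. Not MPI - just the failure-handling shape of the problem.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def worker(chunk, fail=False):
    if fail:              # simulate a node that stops answering
        time.sleep(2)
    return sum(chunk)

chunks = [[1, 2], [3, 4], [5, 6]]    # the "scatter" step

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(worker, c, fail=(i == 1))
               for i, c in enumerate(chunks)]
    results = []
    for f in futures:                 # the "gather" step
        try:
            results.append(f.result(timeout=0.5))  # don't wait forever
        except TimeoutError:
            results.append(None)                   # mark the lost worker
```

Bare MPI collectives give you none of this for free, which is a big part of why Hadoop's tolerance of dead machines is worth real points.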

You know, I've wanted to try Sector and Sphere, but somehow I've never gotten around to it.

Comment: Perfectly competitive goods and economic pricing (Score 1) 205

by erikscott (#48554651) Attached to: The Failed Economics of Our Software Commons

In market-based economies, pricing of goods depends on fixed and marginal costs. Perfectly competitive goods (i.e., totally equivalent goods, completely interchangeable with one another) cannot be priced above the marginal cost of producing another unit (in the long run, at least). Generating pricing power requires differentiation.

Software that is a commodity cannot be priced above its marginal cost. The marginal cost of another OpenSSL download is about zilch. If there were an efficient market able to make micropayments, market balance could be restored. As it is now, it's a hobby activity for individuals and a cost of doing business for large companies.

I would argue that editors, OS kernels, and compilers are, at this point, commodities. Obviously commercial offerings are differentiated just enough to generate some pricing power, and that suggests that Open Source offerings at least theoretically could (dual open/commercial licenses, like Qt in the past), but I would argue this is a temporary market inefficiency.

Incidentally, the classic way to make money giving away software was to then sell the consulting services around it.

Comment: Re:Bigger fuckup than John Akers (Score 3) 84

by erikscott (#48186157) Attached to: IBM Pays GlobalFoundries $1.5 Billion To Shed Its Chip Division

I'm still trying to work out why they're paying GlobalFoundries to take the plants. The pension argument doesn't make sense - IBM switched from "defined benefit" to "defined contribution" about ten years ago, so they can walk away on a whim now. The only factors I can think of are:

1) IBM received a decent subsidy ($600M) from the feds to run a "trusted semiconductor foundry" line, on US soil (google it - not a secret). The government does this in several markets and industries just to make sure they prop up at least one US supplier - they used to pay Micron to make RAM in the US (and may still). Seemed at the time like they wanted to support "one architecture in addition to x86", which would of course be POWER. So, would a shutdown have triggered a repayment clause?

2) Or... semiconductor manufacturing is a nasty business - literally. Maybe it's cheaper to pay someone to take it than it is to clean up all the, say, arsenic that various processes use a lot of. Still, I would think that just sealing the doors with concrete and walking away would be pretty cheap, too.

Comment: Re:And as the resolution increases ... (Score 1) 77

by erikscott (#48055839) Attached to: Supercomputing Upgrade Produces High-Resolution Storm Forecasts

There are urban airshed models that do exactly this for air quality studies, and plume analysis models for hazmat, but I'm not aware of weather forecasting at the block-by-block level. Right off the cuff, I would suspect that albedo is at least as important - at human building scales, the Reynolds number is going to be pretty high. At that point, it looks more like computational fluid dynamics and less like weather - hence airshed modeling and plume analysis.
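A back-of-envelope check supports the high-Reynolds-number point. The input values below are rough assumptions (a 5 m/s wind past a 10 m building, air at roughly room temperature), not measurements:

```python
# Back-of-envelope Reynolds number at building scale. Flow this far
# beyond the laminar-turbulent transition (~5e5 for a flat plate) is
# strongly turbulent - CFD territory, not synoptic weather.
def reynolds(velocity_m_s, length_m, kinematic_viscosity_m2_s):
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

nu_air = 1.5e-5                      # kinematic viscosity of air, ~20 C, m^2/s
re = reynolds(5.0, 10.0, nu_air)     # ~3.3 million
```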

Comment: WRF has gotten pretty good, actually (Score 4, Informative) 77

by erikscott (#48053489) Attached to: Supercomputing Upgrade Produces High-Resolution Storm Forecasts

I'm a computer engineer, not a meteorologist, but I've worked with them off and on for about eight years now. One of the most common models for research use is the Weather Research and Forecasting Model (WRF, pronounced like the dude from ST:TNG). There are several versions in use, so caveats are in order, but in general WRF can produce really good results on a 1.6 km grid out to 48 hours. I was given the impression that coarser grids are the route to happiness for longer-period forecasts.

WRF will accept about as much or as little of an initializer as you want to give it. Between NEXRAD radar observations, ground met stations all over the place, two hundred or so balloon launches per day, satellite water vapor estimates, and a cooperative agreement with airlines to download in-flight met conditions (after landing, natch), there's gobs of data available.

The National Weather Service wants to run new models side-by-side with older models and then back check the daylights out of them, so we can expect the regular forecast products to improve dramatically over the next (very) few years.

Comment: Re:Nokia still has products? (Score 1) 54

by erikscott (#47574727) Attached to: Nokia Buys a Chunk of Panasonic

Nokia has completely shifted gears before - they used to make forestry equipment at one point (early 70s?), which indirectly led to their making VHF radios with telephone interfaces for use out in the boondocks, which led to cellphones for them.

The VHF "portable phones" from the late 80s, by the way, can be hacked into becoming 2 meter (144 MHz) ham radios. Have fun...