
Comment Re:Gorbachev's off the cuff comment I heard live (Score 1) 198

What should not be forgotten is how bad the CIA estimates of the Soviet economy were. They were utter crap. I cannot stress this enough, and I encourage anyone interested in the topic to read the highly rated (at the time) US texts on the Soviet economy.

Virtually all highly rated US texts on the Soviet/Russian economy published from 1980 to 1999 were garbage.

So, what does this say about the CIA?

Comment Re:Don't Use UTC (Score 2) 143

Are you sure YOU know the difference between UTC and GMT?

It's not that UTC sounds cooler... it's what we actually use. UTC ticks at the atomic-clock rate and is what's distributed through NTP. GMT (really UT1) tracks the Earth's rotation, doesn't have a stable second, and has no high-precision real-time reference.

Most programmers just need to know the number of seconds since Midnight, Jan 1, 1970, GMT, as God intended.

time_t doesn't count the number of absolute SI seconds since the Epoch: it assumes every day is exactly 86400.0 seconds long and completely ignores leap seconds... even worse, before 1972 UTC used fractional (sub-second) adjustments, so the offset isn't even an integer.
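A minimal sketch of the consequence, assuming a POSIX-ish system (timegm() is a common glibc/BSD extension, not standard C):

    /* Minimal sketch: POSIX time_t arithmetic pretends every day is
     * exactly 86400 seconds, so leap seconds never show up in the count. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm tm = {0};
        tm.tm_year = 2015 - 1900;   /* 2015-01-01 00:00:00 UTC */
        tm.tm_mon  = 0;
        tm.tm_mday = 1;
        time_t t = timegm(&tm);     /* glibc/BSD extension */

        printf("time_t    = %ld\n", (long)t);            /* 1420070400 */
        printf("days      = %ld\n", (long)(t / 86400));  /* 16436: exactly the
                                                            calendar day count
                                                            since 1970-01-01 */
        printf("remainder = %ld\n", (long)(t % 86400));  /* 0: the 25 leap
                                                            seconds inserted
                                                            1972-2012 are nowhere
                                                            in the count */
        return 0;
    }

If time_t counted true SI seconds, that remainder would be 25, not 0.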

So, all told, why not refer to it as UTC since that's actually correct?

Comment Re: 20 cores DOES matter (Score 1) 167

If we're talking about bulk builds, for any language, there is going to be a huge amount of locality of reference that matches well against caches: shared text (RO), lots of shared files (RO), localized stack use (RW), relatively localized process data (RW), and independent file writeouts. Plus any decent scheduler will recognize the batch-like nature of the compile jobs and use relatively large switch ticks. For a bulk build the scheduler doesn't have to be very smart; it just needs to avoid moving processes between cpus excessively and to be somewhat HW-cache aware.

Data and stack will be different, but one nice thing about bulk builds is that there is a huge amount of sharing of the text (code) space. Consider the process states during a bulk build relatively early in its cycle (so the C++ compiles aren't eating 1GB each, like they do later in the cycle when the larger packages are being built):

Nothing is blocked on storage accesses. The processes are either in a pure run state or are waiting for a child process to exit.

I've never come close to maxing out the memory BW on an Intel system, at least not with bulk builds. I have maxed out the memory BW on Opteron systems, but even there one still gets an incremental improvement with more cores.

The real bottleneck for something like the above is not the scheduler or the pegged cpus. The real bottleneck is the operating system, which has to deal with hundreds of fork/exec/run/exit sequences per second and often more than a million VM faults per second (across the whole system)... almost all on shared resources, BTW, so it isn't an easy nut for the kernel to crack (think about what it means for the kernel to fork/exec/run/exit something like /bin/sh hundreds of times per second across many cpus all at the same time).
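To make that churn concrete, here's a minimal sketch (the counts are illustrative, not measurements from the build above) of the cycle a single Makefile recipe line puts the kernel through; a bulk build does this from hundreds of processes simultaneously:

    /* Minimal sketch: generate the fork/exec/run/exit churn described
     * above. A bulk build does this from many processes at once. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 500; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Each child execs /bin/sh for a trivial command,
                 * just like a Makefile recipe line does. */
                execl("/bin/sh", "sh", "-c", "exit 0", (char *)NULL);
                _exit(127);  /* exec failed */
            } else if (pid > 0) {
                int status;
                waitpid(pid, &status, 0);  /* exit/wait completes the cycle */
            } else {
                perror("fork");
                return 1;
            }
        }
        return 0;
    }

Run a few instances of this in parallel and system time dominates user time, which is the point: the cost is in the kernel, not in the user-side compute.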

Another big issue for the kernel, for concurrent compiles, is the massive number of shared namecache resources which are getting hit all at once, particularly negative cache hits for files which don't exist (think about compiler include path searches).
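As a minimal illustration (the search directories here are hypothetical): each #include makes the compiler probe every directory on its include path in order, and every probe but the last is a lookup for a file that doesn't exist, i.e., a negative namecache hit once the kernel caches the miss.

    /* Minimal sketch: emulate a compiler's include-path probing.
     * The directories are hypothetical. Most lookups miss. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *path[] = { "./include", "/usr/local/include",
                               "/usr/include" };
        const char *header = "stdio.h";
        char buf[512];
        struct stat st;

        for (size_t i = 0; i < sizeof(path) / sizeof(path[0]); i++) {
            snprintf(buf, sizeof(buf), "%s/%s", path[i], header);
            if (stat(buf, &st) == 0) {     /* hit: positive entry */
                printf("found %s\n", buf);
                return 0;
            }
            /* miss: the kernel caches "this name does not exist" so
             * the next compile's identical probe is cheap -- that's
             * a negative namecache entry */
            printf("miss  %s\n", buf);
        }
        return 1;
    }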

These issues tend to trump basic memory BW issues. Memory bandwidth can become an issue, but mainly with jobs that are memory-centric (jobs which, by their nature, do more memory accesses and less processing, i.e., execute fewer instructions per memory access). Bulk compiles do not fit into that category.


Comment Re:No kidding (Score 2) 103

The fact that you get /a/ face isn't profound, but the resulting image is interesting. It gives a good picture of the things that human vision uses to locate faces: obviously the eyes and mouth are most prominent; there's moderate contrast for the cheekbones and nose; the oval shape is only vague; the neck, ears, eyebrows, and hairline are almost entirely missing.

I expect those are already well known to vision specialists, but to me, it's an interesting analysis of the exact details which make an inanimate object become a face.

Comment Re: 20 cores DOES matter (Score 4, Informative) 167

Urm. And you've investigated this and found that your drive is pegged because of... what? Or you haven't investigated this and have no idea why your drive is pegged. I'll take a guess: you are running out of memory and the disk activity you see is heavy paging.

Let me rephrase... we do bulk builds of 20,000 packages with poudriere. It takes a bit less than two days. We set the parallelism to roughly 2x the number of cpu threads available. There are usually several hundred processes active in various states at any given moment. The cpu load is pegged. Disk activity is zero for most of the time.

If I do something less strenuous, like a buildworld or buildkernel, the result is almost the same: cpu mostly pegged, disk activity zero for the roughly 30 minutes the buildworld takes. However, smaller builds such as a buildworld, a buildkernel, or a linux kernel build, regardless of the -j concurrency you specify, will certainly have bottlenecks in the build subsystem that have nothing to do with the cpu. A little work on the Makefiles will solve that problem. In our case there are always two or three ridiculously huge source files in the GCC build that make has to wait for before it can proceed with the link pass. Similarly, a kernel build has a make depend step at the beginning which is not parallelized and a final link at the end which cannot be parallelized, and those actually take most of the time. Compiling the sources in the middle finishes in a flash.

But your problem sounds a bit different... kinda sounds like you are running yourself out of memory. Parallel builds can run machines out of memory if the dev specifies more concurrency than his memory can handle. For example, when building packages there are many C++ source files which #include the kitchen sink and wind up with process run sizes north of 1GB. If someone only has 8GB of ram and tries a -j 8 build under those circumstances, that person will run out of memory and start to page heavily.

So it's a good idea to look at the footprint of the individual processes you are trying to parallelize, too.

Memory is cheap these days. Buy more. Even the tiny little BRIX one can get now can hold 32GB of ram. For a decent concurrent build on a decent cpu you want 8GB minimum; 16GB or more is better.


Comment Re:20 cores DOES matter (Score 4, Informative) 167

Hyperthreading on Intel gives about a +30% to +50% performance improvement. So each core winds up delivering about 1.3 to 1.5 times the performance with two threads versus 1.0 with one. Quite significant. It depends on the type of load, of course.

The main reason for the improvement is, of course, that one thread can make good use of the execution units while the other thread is stalled on something (a memory access or TLB miss, significant integer shifts, or dependent integer or FPU multiply and divide operations).
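A rough way to see this on your own machine is the minimal sketch below. It assumes Linux (pthread_setaffinity_np) and that logical cpus 0 and 1 are SMT siblings of one physical core, which you should verify via /sys/devices/system/cpu/cpu0/topology/thread_siblings_list. The measured ratio depends heavily on the instruction mix; this latency-bound multiply chain tends to show the high end.

    /* Minimal sketch: measure SMT scaling on one core.
     * Linux-specific; build with: cc -O2 smt.c -o smt -lpthread
     * Assumes logical cpus 0 and 1 share a physical core. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile int stop;

    static void *worker(void *arg)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((long)arg, &set);   /* pin to cpu 0 or cpu 1 */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* Dependent multiply chain: plenty of latency stalls for the
         * sibling thread's instructions to fill. */
        uint64_t x = 1, iters = 0;
        while (!stop) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            iters++;
        }
        return (void *)(uintptr_t)(iters | (x & 1)); /* keep x live */
    }

    static uint64_t run(int nthreads)
    {
        pthread_t t[2];
        uint64_t total = 0;
        stop = 0;
        for (long i = 0; i < nthreads; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        sleep(2);
        stop = 1;
        for (int i = 0; i < nthreads; i++) {
            void *r;
            pthread_join(t[i], &r);
            total += (uintptr_t)r;
        }
        return total;
    }

    int main(void)
    {
        uint64_t one = run(1), two = run(2);
        printf("1 thread : %llu iters\n", (unsigned long long)one);
        printf("2 threads: %llu iters (%.2fx)\n",
               (unsigned long long)two, (double)two / one);
        return 0;
    }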


Comment Re: 20 cores DOES matter (Score 4, Interesting) 167

Actually, parallel builds barely touch the storage subsystem. Everything is basically cached in ram and writes to files wind up being aggregated into relatively small bursts. So the drives are generally almost entirely idle the whole time.

It's almost a pure-cpu exercise and also does a pretty good job testing concurrency within the kernel due to the fork/exec/run/exit load (particularly for Makefile-based builds which use /bin/sh a lot). I've seen fork/exec rates in excess of 5000 forks/sec during poudriere runs, for example.


Comment Re:Easiest solution is NUC style (Score 1) 197

Indeed, though one would have to examine the NUC/BRIX specs carefully. They are typically driven by a mobile-chipset GPU, which will have some limitations.

In fact, one could probably stuff them without any storage at all, just ram, and netboot the suckers from a single PC. I have a couple of BRIX (basically the same as a NUC) for GPU testing with 16GB of ram in each and they netboot just fine.

Maintenance -> basically none.

Expandability -> unlimited w/virtually no setup/work required.

Performance -> highly distributed and powerful.

Wiring -> highly localized, only the ethernet cables and power leave the monitor space (WIFI is available on these devices, but I would recommend hardwired ethernet and you might not be able to netboot over WIFI).


Comment Easiest solution is NUC style (Score 1) 197

I'd use a NUC form factor with one mounted on the back of each monitor (or on the back of every other monitor, since each unit has two outputs). Basically no maintenance, easy to expand, and being off-the-shelf means easy upgrades later. It will almost never fail if a small SSD is used, and it has a hardwired ethernet port and plenty of resources (including 8-32GB of ram). Most monitors already have the necessary mounts.


Comment Amazon less comprehensive than it used to be. (Score 1) 233

Honestly, the only reason I shop at Amazon is because they seem to have everything. As I've noticed them drifting away from that, and their prices creeping higher, I've started to look elsewhere. They were the best marketplace option, but they've become less so over the past year or so, and the competition is looking a little better. Still, the competition doesn't offer a really good comprehensive service either... so I'm looking for other options, but what I'm finding isn't great.

Comment Re:I call BS (Score 1) 157

Oh, and I left out rx, and payments to (and clawbacks from) providers. We can also discuss vision and dental, but again, what is the point???

The American medical insurance system is so broken that there is no believable reason another nation would wish to copy it.

Why not adopt a more believable (and simpler) hypothesis that the reason medical insurance claims data was of interest was because of mental illness and sexual disease claims?

Comment I call BS (Score 1) 157

I offer my services to the Chinese. For a mere $300K I will elucidate in greater detail if required. Medical insurance is not exactly rocket science (and you've already launched something to the moon! Congrats!)

Let's break it down. There are a few components (see the sketch after this list):

Plans - but in a Communist country I'd expect everyone has the same insurance plan, right? Or is one animal greater than another?

Member Information - things like name, gender, age, tobacco user, dependents, etc. Again, though, given it is a Communist country, are there really dependents, or is everyone a participant?

Provider information - things like provider name, address, tax ID. I hate to sound tedious, but in a Communist country I don't think you'd have PPO networks, right? Aren't all providers equal?

Premiums - this ties the member to a plan. In a capitalistic society this gets pretty complicated, as there are a vast number of plans and different rates based on member age/smoker status/gender, etc. As I have now stated ad nauseam, I expect under a Communist system everyone pays the same, so maybe this becomes a trivial issue?

Claims processing - in a Capitalistic system one would take the billing codes, procedure codes, diagnostic codes, etc., match them against the date of service, connect this to the member and the plan, and adjudicate to determine 1) whether the claim falls under the plan, 2) who gets paid what, and 3) a bunch of other stuff (like lifetime deductibles). Now, if we were operating under a Communist plan, wouldn't all claims be covered?
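To be concrete about how small the problem becomes, here's a minimal sketch of the data model above in C; all names and fields are hypothetical, and a real claims system carries far more state:

    /* Minimal sketch of the components described above. All names are
     * hypothetical; a real claims system has vastly more fields. */
    #include <stdbool.h>
    #include <stdio.h>

    struct plan {             /* one entry if everyone has the same plan */
        int plan_id;
        int deductible_cents;
    };

    struct member {           /* premium rating keys off plan + these */
        int  member_id;
        int  plan_id;
        int  birth_year;
        bool tobacco_user;
    };

    struct claim {
        int claim_id;
        int member_id;
        int provider_id;
        int procedure_code;   /* plus diagnosis codes, date of service... */
        int billed_cents;
    };

    /* Adjudication, single-plan edition: is it covered, who pays what. */
    static int adjudicate(const struct plan *p, const struct claim *c)
    {
        (void)p;                 /* one plan: everything is covered */
        return c->billed_cents;  /* payer owes the whole billed amount */
    }

    int main(void)
    {
        struct plan  p = { 1, 0 };
        struct claim c = { 1001, 42, 7, 99213, 12500 };
        printf("payout: %d cents\n", adjudicate(&p, &c));
        return 0;
    }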

When I started writing my response, I sympathized w/ the Chinese, as I've been involved with a few claims processing systems, and there's a ton of institutional knowledge. However, under a Communist system, what really is there?

P.S. - Honestly, as an American who thinks the American insurance & medical business is a complete scam, I'd love to hear what Sweden and other civilized countries need in the way of medical insurance software.

Comment The stupid, it burns! (Score 2) 246

I worked for seven years in the medical insurance business (so glad to have left the field!) and the ignorance seen in many high-rated posts here is astounding.

1. GAO report, so no fraud
2. Even if someone wanted to fraudulently create an applicant, I don't see the problem, as long as they don't submit a claim. What's wrong w/ additional premium? (I will ignore the geeky underwriters, as I understand their position, but haven't seen any relevant objections so far about messing up the statistics.)
3. You cannot begin to appreciate the stupidity of pretty much everyone in the insurance business - so the inability to do very basic SSN validity checking comes as no surprise at all.

I left the year the ACA came into effect, so I got to experience the fun as we tried to implement insurance plans that Congress had not defined. See, the ACA went into effect in 2014, but we (that is, insurance companies) didn't have black-letter law or even Federally-defined policies established (on many different fronts) until way past Jan 2014. How can you determine policies if underwriters don't know what the rules are???

But what continues to be under-reported is what a complete disaster/failure the back-office procedures are. Are we finally able to determine whether someone is eligible? When I left, there was no way to tell if an applicant was qualified for subsidies under the various arcane income rules.

If I were dictator, I'd immediately force hospitals and pharmaceutical companies to fall under the antitrust laws that everyone else has to follow. The high-deductible plans were created under the assumption that consumers would be motivated to shop around for the cheapest deal. But it is impossible to get an actual quote for a procedure. If hospitals were required to produce a rate sheet that applies to everyone, and people were permitted to import drugs from anywhere in the world, a massive amount of money could be saved.

But this cuts into rx profits, and we can't have that.
