Comment Not the first full recovery from space (Score 1) 121

SpaceShipOne reached space, all of its elements were recovered, and it flew to space again.

BO's demonstration is more publicity than practical rocketry. It doesn't look like the aerodynamic elements of BO's current rocket are suitable for recovery after orbital injection, only after a straight up-and-down space tourism flight with no potential for orbit, just like SpaceShipOne (and Two). They can't put an object in space and have it stay in orbit. They can just take dudes up for a short, expensive view and a little time in zero gee.

It's going to be real history when SpaceX recovers the first stage after an orbital injection, in that it will completely change the economics of getting to space and staying there.

Comment Re:Another in a long series of marketing mistakes (Score 1) 137

You'd need a popular product to pull off taking on governments as a second set of clients, and you'd need to avoid revealing that your device had lawful intercept built in.

This is just a poorly directed company continuing to shoot itself in the foot. It hasn't made its product desirable for governments, or for anyone else.

Comment Another in a long series of marketing mistakes (Score 2) 137

There's a truism in marketing that you can only differentiate your product on the parts the customer actually sees and uses. BlackBerry just can't learn this lesson. They tried differentiating on the OS kernel, which the customer never sees, and now on an insecurity feature that the customer won't even be allowed to use. It's been a protracted death spiral, but it's a continuing one.

Comment What's Wrong with the Hobbit? (Score 2) 174

The Hobbit books are to a great extent about race war. The races are alien and fictional, but they are races, and the identification of good or bad is on racial boundaries. This isn't all that unusual in the fantasy genre, or even some sci-fi.

Lots of people love those books. And there's lots of good in them. To me, the race stuff stuck out.

Comment Re: 20 cores DOES matter (Score 1) 167

If we're talking about bulk builds, for any language, there is going to be a huge amount of locality of reference that matches well against caches: program text is shared read-only, lots of files are shared read-only, stack use is localized (read-write), process data is relatively localized (read-write), and file writeouts are independent. Plus any decent scheduler will recognize the batch-like nature of the compile jobs and use relatively large switch ticks. For a bulk build the scheduler doesn't have to be very smart; it just needs to avoid moving processes between cpus excessively and to be somewhat aware of the hardware caches.

Data and stack will be different, but one nice thing about bulk builds is that there is a huge amount of sharing of the text (code) space. Looking at a bulk build relatively early in its cycle (before the C++ compiles start eating 1GB each, as they do later when the larger packages are being built), nothing is blocked on storage accesses: the processes are either in a pure run state or are waiting for a child process to exit.

I've never come close to maxing out the memory BW on an Intel system, at least not with bulk builds. I have maxed out the memory BW on Opteron systems, but even there one still gets an incremental improvement with more cores.

The real bottleneck for something like the above is not the scheduler or the pegged cpus. The real bottleneck is the operating system, which has to deal with hundreds of fork/exec/run/exit sequences per second and often more than a million VM faults per second (across the whole system)... almost all on shared resources, BTW, so it isn't an easy nut for the kernel to crack (think about what it means to the kernel to fork/exec/run/exit something like /bin/sh hundreds of times per second across many cpus all at the same time).
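To make that load pattern concrete, here is a minimal sketch in C of the fork/exec/run/exit cycle a Makefile-driven build puts on the kernel, assuming a POSIX system with /bin/sh. The iteration count, the "exit 0" child command, and the single-threaded loop are my choices for illustration, not anything measured above; a real parallel build generates this churn from many processes at once.

/*
 * Illustration of the fork/exec/run/exit pattern a Makefile-driven build
 * generates: each command line becomes a short-lived /bin/sh. The loop
 * count is arbitrary; it just lets us print a forks-per-second rate.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    const int iterations = 2000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: do what make does for every command line. */
            execl("/bin/sh", "sh", "-c", "exit 0", (char *)NULL);
            _exit(127);          /* only reached if exec fails */
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);
        } else {
            perror("fork");
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d fork/exec/exit cycles in %.2fs (%.0f/sec)\n",
           iterations, secs, iterations / secs);
    return 0;
}

Run something like this from a couple dozen processes at once and you get the kind of shared-resource churn described above: every cycle touches the process table and the VM mappings for /bin/sh and its shared libraries.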

Another big issue for the kernel, for concurrent compiles, is the massive number of shared namecache resources which are getting hit all at once, particularly negative cache hits for files which don't exist (think about compiler include path searches).
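As a rough illustration of where those negative hits come from, here is a sketch in C of the probe sequence a compiler's include search performs, assuming a POSIX system; the directory list and header name are hypothetical stand-ins, not paths taken from the comment.

/*
 * Why include-path searches generate negative name lookups: the compiler
 * probes each search directory in order, and every directory that does not
 * contain the header produces an ENOENT, which the kernel can only answer
 * cheaply if it caches the fact that the name is absent.
 * The directory list and header name below are hypothetical.
 */
#include <stdio.h>
#include <errno.h>
#include <sys/stat.h>

int main(void)
{
    const char *search_path[] = {
        "./include",
        "/usr/local/include",
        "/usr/include",
    };
    const char *header = "made_up_header.h";   /* hypothetical file name */
    char path[1024];
    struct stat st;

    for (size_t i = 0; i < sizeof(search_path) / sizeof(search_path[0]); i++) {
        snprintf(path, sizeof(path), "%s/%s", search_path[i], header);
        if (stat(path, &st) == 0) {
            printf("found %s\n", path);
            return 0;
        }
        if (errno == ENOENT)
            printf("miss   %s (negative lookup)\n", path);
    }
    printf("%s not found on the search path\n", header);
    return 0;
}

Each miss in that loop is a lookup the kernel answers out of the namecache if it holds a negative entry and out of the filesystem if it doesn't; multiply by the number of search directories, headers, and concurrent compiler processes and it adds up fast.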

These issues tend to trump basic memory BW issues. Memory bandwidth can become a bottleneck, but mainly for jobs that are more memory-centric (jobs that access memory more and do less processing, i.e. execute fewer instructions per memory access). Bulk compiles do not fit into that category.


Comment Re:Reward networks for not upgrading (Score 1) 75

What happens on eBay is just a market. It's fundamental that a properly working market determines the optimum price for whatever is being sold. A properly working market has multiple sellers and multiple buyers, all with somewhat differing circumstances; an improperly working market is dominated by a single vendor, and so on. No market works perfectly; there are always factors that make markets less efficient than they should be.

Demand pricing, in contrast, is something one vendor does deliberately and with calculation, whereas a market price is arrived at as the aggregate of the behavior of many people. The market is actually broken if the calculation of one person can influence it disproportionately.

Comment Re:Amazon Model (Score 1) 75

First, there's no shortage of interurban data links for these companies to use if they're willing to. A shortage of infrastructure is a myth.

Second, the customers will indeed abscond, but not to conventional telephone companies.

Anyone who is considering how to jack up voice call pricing is rearranging deck chairs on the Titanic.

Comment Re:Reward networks for not upgrading (Score 1) 75

No definition of "surge pricing" could include eBay, because eBay is an auction with multiple independent bidders. Uber, on the other hand, is a single price-setter with multiple operators who work through its pricing structure. Experienced Uber operators actually avoid areas with high dynamic pricing because there's too much traffic around them; it's more profitable to do three less expensive rides than one expensive one.

Uber's dynamic pricing fails the riders and fails the operators. Uber still makes its money; it doesn't particularly care that it isn't serving either bloc efficiently.

Comment Re: 20 cores DOES matter (Score 4, Informative) 167

Urm. And you've investigated this and found that your drive is pegged because of... what? Or you haven't investigated it and have no idea why your drive is pegged. I'll take a guess: you are running out of memory, and the disk activity you see is heavy paging.

Let me rephrase... we do bulk builds of 20,000 applications with poudriere. It takes a bit less than two days. We set the parallelism to roughly 2x the number of cpu threads available. There are usually several hundred processes active in various states at any given moment. The cpu load is pegged. Disk activity is zero most of the time.

If I do something less strenuous, like a buildworld or buildkernel, I get almost the same result: the cpu is mostly pegged and disk activity is zero for the roughly 30 minutes the buildworld takes. However, smaller builds such as a buildworld or buildkernel, or a Linux kernel build, regardless of the -j concurrency you specify, will certainly have bottlenecks in the build system itself that have nothing to do with the cpu. A little work on the Makefiles will solve that problem. In our case there are always two or three ridiculously huge source files in the GCC build that make has to wait for before it can proceed with the link pass. Similarly, a kernel build has a make depend step at the beginning that is not parallelized and a final link at the end that cannot be parallelized, and those two steps actually take most of the time; compiling the sources in the middle finishes in a flash.
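To put a number on why more -j stops helping once those serial steps dominate, here is a toy Amdahl's-law calculation in C; the 25% serial fraction is a made-up figure for illustration, not a measurement of any of the builds described above.

/* Toy calculation of why -j stops helping once serial steps (make depend,
 * the final link) dominate: classic Amdahl's-law arithmetic. The serial
 * fraction is an assumption for the sketch, not a measurement. */
#include <stdio.h>

int main(void)
{
    double serial = 0.25;                 /* assume 25% of the build is serial */
    for (int jobs = 1; jobs <= 32; jobs *= 2) {
        double speedup = 1.0 / (serial + (1.0 - serial) / jobs);
        printf("-j %-2d -> %.2fx speedup\n", jobs, speedup);
    }
    return 0;
}

With a quarter of the build serialized, going from -j 8 to -j 32 barely moves the needle, which is why fixing the Makefiles matters more than adding concurrency.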

But your problem sounds a bit different... it kinda sounds like you are running yourself out of memory. Parallel builds can run machines out of memory if the dev specifies more concurrency than the memory can handle. For example, when building packages there are many C++ source files which #include the kitchen sink and wind up with process run sizes north of 1GB. If someone only has 8GB of ram and tries a -j 8 build under those circumstances (eight such compiles at 1GB+ each already exceed the ram), that person will run out of memory and start to page heavily.

So it's a good idea to look at the footprint of the individual processes you are trying to parallelize, too.

Memory is cheap these days. Buy more. Even those tiny little BRIX boxes can hold 32G of ram. For a decent concurrent build on a decent cpu you want 8GB minimum; 16GB or more is better.


Comment Re:20 cores DOES matter (Score 4, Informative) 167

Hyperthreading on Intel gives about a +30% to +50% performance improvement, so each core winds up delivering about 1.3 to 1.5 times the performance with two threads versus 1.0 with one. Quite significant. It depends on the type of load, of course.

The main reason for the improvement is, of course, that one thread can make good use of the execution units while the other thread is stalled on something (like a memory or TLB miss, significant integer shifts, or dependent integer or FPU multiply and divide operations).
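As a sketch of the kind of workload pairing that benefits, here is a small C/pthreads example with one deliberately stall-heavy thread and one deliberately ALU-heavy thread. The buffer size, run time, and the idea of running the two threads on the sibling hardware threads of one core are my assumptions for illustration, not measurements from the comment; pinning is left to an external tool such as cpuset or taskset, and something like "cc -O2 -pthread" should build it.

/*
 * Illustration only: two deliberately different workloads that tend to
 * overlap well when sharing one core under SMT. Sizes and run time are
 * assumptions for the sketch.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#define SLOTS (1u << 24)         /* 128MB of 8-byte slots: much larger than cache */

static size_t *chain;            /* single-cycle pointer-chase permutation */
static atomic_bool stop_flag;

/* Memory-bound worker: every load depends on the previous one, so this
 * thread spends most of its time stalled on cache and TLB misses. */
static void *pointer_chaser(void *arg)
{
    (void)arg;
    size_t i = 0;
    unsigned long long loads = 0;
    while (!atomic_load_explicit(&stop_flag, memory_order_relaxed)) {
        i = chain[i];
        loads++;
    }
    printf("chaser:  %llu dependent loads (ended at slot %zu)\n", loads, i);
    return NULL;
}

/* Compute-bound worker: a chain of dependent multiplies that keeps the
 * core's integer units busy and barely touches memory. */
static void *integer_grinder(void *arg)
{
    (void)arg;
    uint64_t x = 88172645463325252ULL;
    unsigned long long ops = 0;
    while (!atomic_load_explicit(&stop_flag, memory_order_relaxed)) {
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
        ops++;
    }
    printf("grinder: %llu dependent multiplies (x=%llu)\n", ops,
           (unsigned long long)x);
    return NULL;
}

int main(void)
{
    chain = malloc((size_t)SLOTS * sizeof(*chain));
    if (chain == NULL)
        return 1;

    /* Sattolo's algorithm: build one big cycle so the chase never settles
     * into a short, cache-friendly loop. */
    for (size_t i = 0; i < SLOTS; i++)
        chain[i] = i;
    for (size_t i = SLOTS - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    pthread_t a, b;
    pthread_create(&a, NULL, pointer_chaser, NULL);
    pthread_create(&b, NULL, integer_grinder, NULL);
    sleep(5);                    /* let both run for the same wall time */
    atomic_store(&stop_flag, true);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    free(chain);
    return 0;
}

While the chaser is stalled on a cache or TLB miss, the grinder's dependent multiplies can keep the core's execution units busy; that overlap is the kind of effect the 30-50% figure reflects.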


Comment Re: 20 cores DOES matter (Score 4, Interesting) 167

Actually, parallel builds barely touch the storage subsystem. Everything is basically cached in ram and writes to files wind up being aggregated into relatively small bursts. So the drives are generally almost entirely idle the whole time.

It's almost a pure-cpu exercise and also does a pretty good job testing concurrency within the kernel due to the fork/exec/run/exit load (particularly for Makefile-based builds which use /bin/sh a lot). I've seen fork/exec rates in excess of 5000 forks/sec during poudriere runs, for example.

