Comment Just a Motorola Oncore Receiver bug (Score 3, Interesting) 187

This is the second time a bug in the firmware of Motorola Oncore GPS receivers has manifested itself. There is a bug relating to a 32-bit-wide bitmap, and the DoD just took the GPS satellite numbered 32 out of the constellation, which seems to be the cause. I have data from two such receivers showing the anomaly and from one different receiver seeing no trouble at all.
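Purely as a hypothetical sketch (I don't have the actual Oncore firmware source), a 32-bit tracking bitmap bug that only bites when satellite PRN 32 joins or leaves the constellation tends to be an off-by-one in the shift, something like this in C:

#include <stdint.h>
#include <stdio.h>

/* PRN 1..32 must map to bits 0..31.  Code that shifts by prn instead of
 * prn - 1 works for PRNs 1..31 and only misbehaves (an undefined 32-bit
 * shift) once PRN 32 enters or leaves the picture. */
static uint32_t mark_tracked(uint32_t bitmap, int prn)
{
    return bitmap | (UINT32_C(1) << (prn - 1));
}

int main(void)
{
    uint32_t tracked = 0;
    tracked = mark_tracked(tracked, 1);
    tracked = mark_tracked(tracked, 32);   /* the PRN that exposes this class of bug */
    printf("tracked bitmap: %08x\n", (unsigned)tracked);
    return 0;
}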

Comment Re:Not a big deal... (Score 2) 458

No, not necessarily. The problem isn't so much the cpu cores; those will be mostly backwards-compatible. The real problem is all the other discrete PCI devices. If Microsoft does not provide updated drivers for its older OS releases, those older releases have no chance of working on newer hardware.

For example, Intel's Skylake chipset has I219 gigabit ethernet now (uprev from I218 which itself was an uprev from I217). The chance of older ethernet drivers working with newer chips is zero. In the case of the I219, the flash mapping and access mechanics changed drastically.

The integrated GPU is another good example. Skylake is up to Gen9. The chances that Gen8 code will work with it are zero.

One can go down the list. The only chipsets which are generic enough for older drivers to work are going to be the USB and AHCI chipsets. Everything else? Forget it.

But I don't know why people are complaining so much. The same can be said for BSD and Linux distros. An older BSD or Linux release is not going to work on newer systems. Most people don't care, since they just update to the latest. While it is possible to backport the drivers to older OS releases, not very many people have the skill required, so for all intents and purposes you need to run newer open-source OSes on these newer chips too.

-Matt

Comment Re:Energy consumption is going to increase (Score 1) 645

The problem with projections is that they rarely predict how things will actually develop. What happens in reality is that society slowly recognizes that a problem is present. Often too slowly (and probably too late when it comes to climate change), but nevertheless it eventually gets recognized, and society shifts and adjusts.

Nobody seems to understand just how huge the energy economy in the developed world already is, just to support our current lifestyle. The numbers you are quoting, despite being probably wrong (very wrong, most likely), are *nothing* compared to the energy infrastructure that drives the U.S. economy today. On a relative measure, if we can have what we have now, we can certainly achieve anything you've mentioned above.

At some point in the last 10 years this whole 'thorium reactor' movement cropped up, and frankly it's hard to debunk the utter stupidity of the model because very few of the people talking about it are bona fide scientists who know what they are talking about. Me included, on this issue. But on the other hand, I'm probably one of the few people who actually knows how to use a geiger counter and has had conversations with scientists standing in front of piles of lead-shielded crap I wouldn't want to touch with my bare hands.

When one of those guys tells me that thorium is a disaster due to its secondary byproducts, I believe him.

-Matt

Comment Article is kinda pie-in-the-sky wrong (Score 3, Interesting) 100

At least, it's not totally correct. Memory-bus non-volatile storage such as Intel's X-Point stuff still requires significant cache management by the operating system. Why? Because it doesn't have nearly enough durability to just be mapped as general-purpose memory. A DRAM cell goes through trillions of cycles in its lifetime. Something like X-Point might be 1000x more durable than standard flash, but it is still 6 orders of magnitude LESS durable than DRAM. So you can't just let user programs write to it however they like.
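A back-of-envelope sketch of why (my assumed round numbers, not vendor specs): give a memory-mapped non-volatile cell an endurance of around 10^7 writes, roughly 1000x typical NAND, and even a modestly hot location burns through that in seconds, which is exactly why the OS still has to sit in the middle:

#include <stdio.h>

int main(void)
{
    double endurance_writes = 1e7;   /* assumed: ~1000x typical NAND flash   */
    double write_rate_hz    = 1e6;   /* one hot cacheline rewritten at 1 MHz */

    /* DRAM shrugs this off indefinitely; the NV cell is gone in seconds. */
    printf("time to wear out one hot cell: %.0f seconds\n",
           endurance_writes / write_rate_hz);
    return 0;
}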

Secondly, the claim that data-center machines will become obsolete is also not correct. SSDs make a fine bridge between traditional HDD or networked storage and something like X-Point, for two reasons. First, all data center machines have multiple SATA busses running at 6 GBit/s; gang them all together and you have a few gigabytes/sec worth of standard storage bandwidth. Second, you can pop NVMe flash (PCIe-based flash controllers) into a server, and each one has in excess of 1 GByte/sec of bandwidth (and usually much more).
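Rough arithmetic behind the 'few gigabytes/sec' claim, using assumed ballpark numbers for a hypothetical server with six SATA ports and two NVMe devices:

#include <stdio.h>

int main(void)
{
    /* ~550 MB/s of payload per SATA 3 port after protocol overhead, and an
     * assumed ~1.5 GB/s per NVMe device; both are rough figures. */
    const double sata_port_mbs = 550.0;
    const double nvme_dev_mbs  = 1500.0;
    const int    sata_ports    = 6;
    const int    nvme_devs     = 2;

    printf("ganged SATA: %.1f GB/s\n", sata_ports * sata_port_mbs / 1000.0);
    printf("NVMe       : %.1f GB/s\n", nvme_devs  * nvme_dev_mbs  / 1000.0);
    return 0;
}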

Third, in terms of memory management, paging to/from SSD or NVMe 'swap' space, or using it as a front-end cache for slower remote storage or spinny disks, already gives servers a fresh lease on life, which means they won't be obsolete for years to come.

And finally there is the cost. These specialized memory-bus non-volatile memories are going to be expensive. VERY expensive. To the point where existing configurations still have a pretty solid niche to fill. Not all workloads are storage-intensive, and these new memory-bus non-volatile memories don't have anywhere near the density needed to replace the storage required for large databases.

So, the article is basically a bit too pie-in-the-sky and ignores a lot of issues.

-Matt

Comment Re:Have they moved to LLVM/Clang? (Score 4, Informative) 26

LLVM/Clang builds the DragonFly world and kernel but does not yet build the boot loader. It can be brought in via dports. So it isn't 100% yet but very close. When it does get to 100% it will become one of our two officially supported compilers. Those are currently gcc-4.7 and gcc-5.2.1.

Wayland support isn't really up to us, but there is Wayland support in XOrg that I think works for programs that want to use that API. Don't quote me on it, though.

-Matt

Submission + - GPL Enforcement under threat. Support Conservancy fundraiser. (sfconservancy.org)

Jeremy Allison - Sam writes: "Some companies have withdrawn from funding us and some have even successfully pressured conferences to cancel or prevent talks on our enforcement work. We do this work because we think that it is good for everyone in the long run, because we know it is the right thing to do, and because we know that we are in the best position to do it. But that's not enough — you have to think it's right too and show us by becoming a Supporter now."

Submission + - Software Freedom Conservancy asks for supporters

paroneayea writes: Software Freedom Conservancy is asking people to join as supporters to save both its basic work and its GPL enforcement. Conservancy is the steward of projects like Git, Samba, Wine, BusyBox, QEMU, Inkscape, Selenium, and many more. Conservancy also does much work around GPL enforcement and needs 2,500 members to join in order to save its copyleft compliance work. You can join as a member here.

Comment Re: 20 cores DOES matter (Score 1) 167

If we're talking about bulk builds, for any language, there is going to be a huge amount of locality of reference that matches well against caches: shared text is RO, lots of shared files are RO, stack use is localized (RW), process data is relatively localized (RW), and file writeouts are independent. Plus, any decent scheduler will recognize the batch-like nature of the compile jobs and use relatively large switch ticks. For a bulk build the scheduler doesn't have to be very smart; it just needs to avoid moving processes between cpus excessively and be somewhat HW-cache aware.

Data and stack will be different, but one nice thing about bulk builds is that there is a huge amount of sharing of the text (code) space. Here's an example of a bulk build relatively early in its cycle (so the C++ compiles aren't eating 1GB each like they do later in the cycle when the larger packages are being built):

http://apollo.backplane.com/DF...

Notice that nothing is blocked on storage accesses. The processes are either in a pure run state or are waiting for a child process to exit.

I've never come close to maxing out the memory BW on an Intel system, at least not with bulk builds. I have maxed out the memory BW on opteron systems but even there one still gets an incremental improvement with more cores.

The real bottleneck for something like the above is not the scheduler or the pegged cpus. The real bottleneck is the operating system, which has to deal with hundreds of fork/exec/run/exit sequences per second and often more than a million VM faults per second (across the whole system)... almost all on shared resources, BTW, so it isn't an easy nut for the kernel to crack (think of what it means to the kernel to fork/exec/run/exit something like /bin/sh hundreds of times per second across many cpus all at the same time).
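If you want to get a feel for that load, here is a minimal sketch (mine, not anything taken from the builds themselves) that just forks, execs /bin/sh -c true, and reaps it in a loop; a parallel build does the equivalent across every cpu at once:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const int n = 1000;                 /* fork/exec/exit cycles to time */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/bin/sh", "sh", "-c", "true", (char *)NULL);
            _exit(127);                 /* exec failed */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);
        } else {
            perror("fork");
            exit(1);
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d fork/exec/exit cycles in %.2fs (%.0f/sec)\n", n, secs, n / secs);
    return 0;
}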

Another big issue for the kernel, for concurrent compiles, is the massive number of shared namecache resources which are getting hit all at once, particularly negative cache hits for files which don't exist (think about compiler include path searches).
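As an illustration of where the negative hits come from (hypothetical paths, not any particular compiler's search list): every header that lives late in the include path generates an ENOENT lookup, i.e. a negative namecache hit, for each directory probed before it.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Assumed search order; real compilers probe many more directories. */
    const char *dirs[] = { "/usr/local/include", "/usr/include" };
    const char *hdr = "stdio.h";
    char path[256];
    int misses = 0;

    for (size_t i = 0; i < sizeof(dirs) / sizeof(dirs[0]); i++) {
        snprintf(path, sizeof(path), "%s/%s", dirs[i], hdr);
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            if (errno == ENOENT)
                misses++;               /* a negative namecache hit in the kernel */
            continue;
        }
        printf("found %s after %d negative lookups\n", path, misses);
        close(fd);
        return 0;
    }
    printf("not found; %d negative lookups\n", misses);
    return 0;
}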

These issues tend to trump basic memory BW issues. Memory bandwidth can become an issue, but it will mainly be with jobs which are more memory-centric (access memory more and do less processing / execute fewer instructions per memory access due to the nature of the job). Bulk compiles do not fit into that category.

-Matt

Comment Re: 20 cores DOES matter (Score 4, Informative) 167

Urm. Have you actually investigated this and found out why your drive is pegged? Or have you not investigated it at all, and have no idea why your drive is pegged? I'll take a guess... you are running out of memory, and the disk activity you see is heavy paging.

Let me rephrase... we do bulk builds with poudriere of 20,000 applications. It takes a bit less than two days. We set the parallelism to roughly 2x the number of cpu threads available. There are usually several hundred processes active in various states at any given moment. The cpu load is pegged. Disk activity is zero most of the time.

If I do something less strenuous, like a buildworld or buildkernel, the result is almost the same: the cpu is mostly pegged and disk activity is zero for the roughly 30 minutes the buildworld takes. However, smaller builds such as a buildworld or buildkernel, or a Linux kernel build, regardless of the -j concurrency you specify, will certainly have bottlenecks in the build system that have nothing to do with the cpu. A little work on the Makefiles will solve that problem. In our case there are always two or three ridiculously huge source files in the GCC build that make has to wait for before it can proceed with the link pass. Similarly, with a kernel build there is a make depend step at the beginning which is not parallelized and a final link at the end which cannot be parallelized, and those actually take most of the time. Compiling the sources in the middle finishes in a flash.
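That's just Amdahl's law at work. A rough sketch with made-up but plausible numbers (3 minutes of serial depend/link work, 20 minutes of perfectly parallel compiling at -j 1) shows why cranking -j up stops helping long before you run out of cores:

#include <stdio.h>

int main(void)
{
    const double serial_min   = 3.0;    /* assumed: make depend + final link */
    const double parallel_min = 20.0;   /* assumed: compile phase at -j 1    */

    for (int j = 1; j <= 32; j *= 2)
        printf("-j %2d: %5.1f minutes total\n", j, serial_min + parallel_min / j);
    return 0;
}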

But your problem sounds a bit different... kinda sounds like you are running yourself out of memory. Parallel builds can run machines out of memory if the dev specifies more concurrency than his memory can handle. For example, when building packages there are many C++ source files which #include the kitchen sink and wind up with process run sizes north of 1GB. If someone only has 8GB of ram and tries a -j 8 build under those circumstances, that person will run out of memory and start to page heavily.

So it's a good idea to look at the footprint of the individual processes you are trying to parallelize, too.
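A crude rule-of-thumb sketch (my own heuristic, not anything poudriere or make actually does): pick the -j value from available ram and the worst-case per-job footprint, leave some headroom for the OS, and cap it at the cpu thread count.

#include <stdio.h>

static int safe_jobs(double ram_gb, double per_job_gb, int cpu_threads)
{
    const double headroom_gb = 2.0;          /* leave room for the OS + caches */
    int by_mem = (int)((ram_gb - headroom_gb) / per_job_gb);

    if (by_mem < 1)
        by_mem = 1;
    return by_mem < cpu_threads ? by_mem : cpu_threads;
}

int main(void)
{
    /* 8GB of ram with C++ jobs peaking near 1GB: only ~6 jobs fit once the
     * OS gets its share, so a -j 8 build will start paging. */
    printf("-j %d\n", safe_jobs(8.0, 1.0, 16));
    printf("-j %d\n", safe_jobs(16.0, 1.0, 16));
    return 0;
}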

Memory is cheap these days. Buy more. Even those tiny little BRIX one can get these days can hold 32GB of ram. For a decent concurrent build on a decent cpu you want 8GB minimum; 16GB or more is better.

-Matt

Comment Re:20 cores DOES matter (Score 4, Informative) 167

Hyperthreading on Intel gives about a +30% to +50% performance improvement. So each core winds up delivering about 1.3 to 1.5 times the performance with two threads versus 1.0 with one. Quite significant. It depends on the type of load, of course.

The main reason for the improvement is, of course, that one thread is able to make good use of the execution units while the other thread is stalled on something (like memory or TLB accesses, significant integer shifts, or dependent integer or FPU multiply and divide operations).

-Matt

Comment Re: 20 cores DOES matter (Score 4, Interesting) 167

Actually, parallel builds barely touch the storage subsystem. Everything is basically cached in ram and writes to files wind up being aggregated into relatively small bursts. So the drives are generally almost entirely idle the whole time.

It's almost a pure-cpu exercise and also does a pretty good job testing concurrency within the kernel due to the fork/exec/run/exit load (particularly for Makefile-based builds which use /bin/sh a lot). I've seen fork/exec rates in excess of 5000 forks/sec during poudriere runs, for example.

-Matt

Comment Re:Easiest solution is NUC style (Score 1) 197

Indeed, though one would have to examine the NUC/BRIX specs carefully. They are being driven (typically) by a mobile chipset GPU which will have some limitations.

In fact, one could probably stuff them without any storage at all, just ram, and netboot the suckers from a single PC. I have a couple of BRIX (basically the same as a NUC) for GPU testing with 16GB of ram in each and they netboot just fine.

Maintenance -> basically none.

Expandability -> unlimited w/virtually no setup/work required.

Performance -> highly distributed and powerful.

Wiring -> highly localized, only the ethernet cables and power leave the monitor space (WIFI is available on these devices, but I would recommend hardwired ethernet and you might not be able to netboot over WIFI).

-Matt

Comment Easiest solution is NUC style (Score 1) 197

I'd use a NUC form factor with one mounted on the back of each monitor (or on the back of every other monitor, since each unit has two outputs). Basically no maintenance, easy to expand, and the off-the-shelf solution means it's easy to upgrade later. It will essentially never fail if a small SSD is used, and it has a hard-wired ethernet port and plenty of resources (including 8-32GB of ram). Most monitors already have the necessary mounts.

-Matt

Comment Tough job ahead, all the luck! (Score 1) 688

Forking a large project is a tough, many-years job. It will need a lot more than just a few patches that weren't accepted to make it fly, and it will need dedicated developers. But I think it's possible and I wish him luck.

There is a conceivable advantage to doing this. With some care, the forked Linux kernel could be stabilized (something Linux really needs at the current juncture, frankly) and provide a target for the FreeBSD Linux emulation layer to chase, resulting in significant synergies between Linux and FreeBSD. Ultimately it might be possible to merge the device frameworks and solve the major problem all kernel projects have with device-driver chasing, by allowing developer resources to become more concentrated. That would be a difficult, but worthy, goal.

-Matt
