
Comment It's rather exaggerated (Score 5, Interesting) 63

Intel's claims are rather exaggerated, and have already been torn apart on numerous tech forums. At best we're talking only a ~3-5x reduction in QD1 latency, and they intentionally omit vital information from the specs, forcing everyone to guess what the actual durability of the XPoint devices is. They say '12PB' of durability for the 375GB part but refuse to tell us how much overprovisioning they do. They say '30 drive writes per day' without telling us what the warranty will be.

In fact, over the last 6 months Intel has walked back their claims by orders of magnitude, to the point where they don't even claim to be bandwidth-competitive. They focus on low queue depths and play fast and loose with the stats they supply.

For example, their QoS guarantee is only 60µs 4KB (99.999%) random access latency, yet in the same breath they talk about being orders of magnitude faster than NAND NVMe devices. They fail to mention that, for example, Samsung's NVMe devices also typically run around ~60-70µs QD1 latencies. Then Intel mumbles about 10µs latencies but bandies about factors of improvement over NAND NVMe devices far larger than the 6:1 one gets simply assuming 10µs vs 60µs.

Then they go on to say that they will have an NVDIMM form factor for the device later this year, with much faster access times (since in the NVMe form factor access times are constrained by the PCIe bus and the block I/O protocol). But with potentially only ~33,000 rewrite cycles per cell to failure, that's seriously problematic. (And that's a best guess, since Intel won't actually tell us what the cell durability is.)


The price point is way too high for what XPoint in the NVMe format actually appears capable of doing. The metrics look impossible for an NVDIMM form factor later this year. Are we literally supposed to buy the thing just to get real performance metrics for it? I don't think so.

It's insane. This is probably the biggest marketing failure Intel has ever had. Don't they realize that nobody is being fooled by their crap specs?


Comment Another nail in the coffin for Firefox (Score 1) 322

PulseAudio is notoriously Linux-specific. We've had nothing but trouble trying to use it on BSD and switched to ALSA (which is a lot more reliable on the BSDs) a year or two ago for that reason.

I guess that's the end of Firefox's portability. Most of our users use Chromium anyway because Firefox has been so unstable and crash-prone. Long live Chromium?


Comment Re:I'll stick with HDDs for now (Score 1) 167

Your problem was that you were using Kingston, Patriot, etc... all third-rate SSD vendors who use whatever flash chips happen to be cheapest. Crucial (aka Micron), Samsung, and a few others are first-line vendors.

SSDs can certainly fail, but it's kinda like PSUs... some vendors are first-line, most are not.


Comment Now this is very cool (Score 5, Insightful) 306

Or hot :-). I read a number of articles from analysts who thought it would take around 15 years for the technology to be produced in commercial volumes. But the fact that this looks like it is going to happen at all, even on a 10-15 year time-frame, is a BIG deal. 3x the charge will give electric vehicles a 600+ mile range.


Comment What, you haven't noticed until now? (Score 1) 644

Ultimately automation improves everyone's quality of life, but it does so by requiring fewer workers for the same output. The work available is higher-quality (improved quality of life), but there are fewer slots. Automation also reduces the cost of goods. Just google the relative cost (the percentage of your work time required to fund it) of, say, heating and lighting your home, and compare it with the cost a hundred years ago. Conditions are so much better today than in the 1800's, 1700's, or 1600's.

But there is a cost. Automation and technology also cause a major dislocation as the population must find new and different things to do than the things they did before.

Look at coal mining today. Since the 50's, output per worker has gone from roughly 1:1 (tons:worker) to 12:1. In other words, your average coal mine today needs 1/12 the workers for the same output as 70 years ago. The same thing is happening in ALL industries.

However, automation also creates relatively severe economic disparities between people. Investors get a larger piece of the pie, workers get a smaller piece of the pie. This is because automation improves margins (even when goods cost less) but the larger number of people trying to go after fewer job slots forces wages down. Since investors tend to be more affluent, the result is that the rich get much richer and the poor get much poorer, relatively speaking. Even including the fact that there must be people to buy the goods in order to be able to sell them, the goods do not become cheaper quickly enough to completely offset the difference in economic standing.

Most people don't understand that this is a relative equation, not an absolute equation. The quality of life for everyone can improve at the same time that the economic disparity increases. But perception is relative in nature, and today's society is not kind to people doing dumb things (credit card debt being a prime example).

A *LARGE* portion of US citizens do dumb things, not the least of which is electing people to government based on promises that translate, in reality, into the exact opposite of what the person was elected to achieve. Take taxes, for example. Remember that the Rich already have most of the wealth in the U.S. All current policy now points to a major reduction of taxes on the Rich, and a number of states have tried to reduce taxes as well. That is going to make the gap even worse. Your typical taxpayer on the lower end of the economic scale might save $200 in a year. Your typical affluent 'rich' person is going to save $200,000 in a year. And more. And yet people vote for this crap because they think saving that $200 will make life better. It won't, because the whole system will then rebalance around the $200,200, which puts the people on the lower end of the economic scale in an even worse position a few years down the line. It's like spreading a few crumbs on the pavement while the master eats 99.9% of the pie.


Comment Some hints (Score 1) 118

I have used computers from a very young age and have had my own since 7th grade, so eye strain has always been an issue for me. Here are a few suggestions:

(1) If you are near-sighted (which I am), have your prescription *slightly* detuned so it isn't perfect. Mine is detuned by, I think, around 0.25 diopters. This reduces eye strain by a HUGE amount. You won't be able to read highway signs from far away, but who needs to do that any more with GPS nav?

(2) Tinge your desktop foreground color scheme towards the greens, do not use a solid dead-black background, and do not use a background picture that is bright relative to your windows. This reduces excess contrast while simultaneously allowing you to reduce the brightness of the screen. Excess contrast is a major source of eye strain. If you see characters burning into your retina, you have too much contrast.

For example, for xterms I use the following X resources:

! near-black background (low, equal red/green; blue channel zeroed)
xterm*background: #100010000000
! soft cyan-tinted foreground, well below full-brightness white
xterm*foreground: #7FFFDFFFDFFF
xterm*cursorColor: white

(3) Monitor(s) should be an arm's length from your eyes, fingers stretched out. If they are any closer, you are doing something wrong: your eyes will strain from excessive crossing.

(4) Glasses vs contacts? I don't know. I prefer glasses myself; I've never really liked contacts, so mostly I just don't use them any more... it's glasses all the way.


Submission + - Bradley Kuhn's Copyleft Keynote at FOSDEM

Jeremy Allison - Sam writes: I saw this talk live. Bradley did an amazing job! I would recommend that anyone interested in Copyleft, Free Software licensing, or the Software Freedom Conservancy watch it!

(Disclosure: I'm on the Conservancy Board of Directors.)

Comment Inevitable (Score 5, Insightful) 97

We already dropped 32-bit support in DFly. There are many good reasons for doing it on Linux and the other BSDs as well. I will outline a few of them.

(1) The big reason is that kernel algorithms on FreeBSD, DragonFly, and Linux are starting to seriously rely on having a 64-bit address space to properly size kernel data structures and KVM reservations. While 32-bit builds still work (for FreeBSD), the resource limitations are fairly confining relative to the resources that modern machines have (even 32-bit ones).

(2) Being able to have a DMAP (a direct map of all physical memory) makes kernel programming a whole lot easier. You can't have one on a 32-bit system unless you limit RAM to something like 1GB. Being able to make a DMAP a kernel-standard requirement is important moving forward.

(3) Modern systems are beginning to rely more and more (on x86, anyway) on having the %xmm registers available, to the point where many compilers now just assume they will exist. ARM's 64-bit architecture also has some nice goodies that it would be nice to be able to rely on in-kernel.

(4) Optimizations for 64-bit systems create regressions on 32-bit systems: memory copies, zeroing, and memset, for example. Even if 32-bit support is kept, performance on those systems will continue to drop.

(5) There is a lot of ancient cruft in 32-bit code that we kernel programmers don't like having to sift through. For example, being able to get rid of EISA and most of the ISA support went a long way towards cleaning up the codebase. Old drivers are a stick in the craw because nobody can test them any more, so the chance of them even working on an old system shrinks with every release. Eventually it gets to the point where there's no point trying to maintain the old driver.

(6) People should not expect modern features on old machines. The cost of replacing that old machine is minimal. Live with it; it's part of the price of progress. If the industry is a bit slow understanding what 'old' means, then the fewer systems that support these older architectures the better. It will make the point more obvious to the corporations who've lost their innovative edge.

(7) For ARM, going back to the corporate point, there's really no reason under the sun to continue producing 32-bit cpus, even for deeply embedded and IoT stuff. The world has moved on, and even embedded systems run into major resource limitations in 32-bit configurations. If kernel programmers have to put an exclamation mark on that point, then so be it.


Comment Re:Still no competition (Score 1) 91

It's not 25% of the price, though. Not sure where you got that from. The benchmark Intel cpu that AMD is competing against is the i7-7700K, which is $350 on Amazon. It will be the AMD 6-core against that one.

AMD will also be able to compete against Intel's i3s. An unlocked Ryzen *anything* (say, the 4-core Ryzen) will be the hands-down winner against any Intel i3 chip on the low end. Intel will have to either unlock the multipliers on all of its chips to compete, or pump up what it offers in the i3 series.

The AMD 8-core is probably not going to be a competitive consumer chip.


Comment Re:parallelism vs raw clock speed (Score 2) 139

The unfortunate result of VLIW was that cpu caches became ineffective, causing long latencies or requiring a much larger cache. In other words, non-competitive at the same cache size. This is also one of the reasons ARM has had such a tough time catching up to Intel (though maybe there is finally light at the end of the tunnel there, after years and years). Even though Intel's instruction set requires significant decoding to convert to uOPS internally, it's actually highly compact in terms of L2/L3 cache footprint. That turned out to matter more.

People often misinterpret the effects of serialization. It's actually a matrix. When a portion of a problem has to be serialized, that adds latency, but it does not necessarily mean the larger program cannot run in parallel. There will often be many individual threads, each having to run serially, which in aggregate can run in parallel on a machine; in such cases (really the vast majority of cases) one can utilize however many cores the machine has relatively efficiently.

This is true for databases, video processing, sound processing, and many other workloads. For example, if one cannot parallelize video compression on a frame-by-frame basis, that doesn't mean one cannot use all available cpus by having each cpu encode a different portion of the video.

Same with sound processing. If one is mixing 40 channels in an inherently serialized process this does not prevent the program from using all available cpus by having each one mix a different portion of the overall piece.

For databases there will often be many clients. Even if the query from one particular client cannot be parallelized, if one has 1000 queries running on a 72-core system, one gains scale from those 72 cores. At that point it just comes down to making sure the caches (including main memory) are large enough that all the cpus can remain fully loaded.


Comment Re:Most depressing thing I've read all week (Score 2) 139

The main bottleneck for a modern cpu is main memory access. What is amazing is that all of the prediction, plus the huge (192+) number of uOPS that can be in flight at once, is able to absorb enough of the massive latency of main memory accesses to bring the actual average IPC back up towards roughly ~1.4. And this is with the cache misses causing *only* around ~6 GBytes/sec of main memory accesses per socket (out of a maximum of around 50 GBytes/sec per socket, if I remember right).

Without all of that stuff, a single cache miss can impose several hundred clock cycles of latency and destroy average IPC throughput.

So, for example, here is a 16-core / 32-thread dual-socket E5-2620v4 @ 2.1 GHz system doing a bunch of parallel compiles, using Intel's PCM infrastructure to measure what the cpu threads are actually doing:

Remember, 32 hyperthreads here so two hyperthreads per core. Actual physical core IPC (shown at the bottom) is roughly 1.39. At 2.1-2.4 GHz this system is retiring a total of 55 billion instructions per second.

In this particular case, being mostly integer math, the bottleneck is almost entirely memory-related; it doesn't take much to stall out a core. If I were running FP-intensive programs instead, it would more likely be bottlenecked in the FP unit and not so much on main memory. Also note the temperature... barely ~40C with a standard copper heatsink and fan. Different workloads will cause different levels of cpu and memory loading.

