
Comment It's rather exaggerated (Score 5, Interesting) 63

Intel's claims are rather exaggerated and have already been torn apart on numerous tech forums. At best we're talking only a ~3-5x reduction in QD1 latency, and they intentionally omit vital information from the specs, forcing everyone to guess what the actual durability of the XPoint devices is. They say '12PB' of durability for the 375GB part but refuse to tell us how much overprovisioning they do. They say '30 drive writes per day' without telling us what the warranty will be.

In fact, over the last 6 months Intel has walked back their claims by orders of magnitude, to the point now where they don't even claim to be bandwidth competitive. They focus on low queue depths and play fast and loose with the stats they supply.

For example, their QoS guarantee is only 60uS 4KB (99.999%) random access latency, yet in the same breath they talk about being orders of magnitude faster than NAND NVMe devices. They fail to mention that, for example, Samsung NVMe devices also typically run around ~60-70uS QD1 latencies. Then Intel mumbles about 10uS latencies but bandies about large factors of improvement over NAND NVMe devices, far larger than the 6:1 one gets simply assuming 10uS vs 60uS.
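To put a number on that, here's a back-of-envelope sketch using only the latency figures quoted above (illustrative values, not official Intel benchmarks):

```python
# Speedup implied by the QD1 latency figures quoted above.
# These numbers are taken from the discussion, not from any spec sheet.
nand_nvme_qd1_us = 60.0  # typical Samsung NVMe QD1 latency (~60-70uS)
xpoint_qd1_us = 10.0     # the 10uS figure Intel mumbles about

speedup = nand_nvme_qd1_us / xpoint_qd1_us
print(f"Implied speedup: {speedup:.0f}:1")  # prints "Implied speedup: 6:1"
```

Even taking Intel's most optimistic latency figure at face value, 6:1 is a long way from "orders of magnitude."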

Then they go on to say that they will have a NVDIMM form for the device later this year, with much faster access times (since in the NVMe form factor access times are constricted by the PCIe bus and block I/O protocol). But with potentially only 33,000 rewrite cycles per cell to failure that's seriously problematic. (And that's the best guess, since Intel won't actually tell us what the cell durability is).

--

The price point is way too high for what XPoint in the NVMe format appears to actually be capable of doing. The metrics look impossible for an NVDIMM form later this year. Are we literally supposed to buy the thing just to get actual performance metrics for it? I don't think so.

It's insane. This is probably the biggest marketing failure Intel has ever had. Don't they realize that nobody is being fooled by their crap specs?

-Matt

Comment Another nail in the coffin for Firefox (Score 1) 322

PulseAudio is notoriously Linux-specific. We've had nothing but trouble trying to use it on BSD, and switched to ALSA (which is a lot more reliable on the BSDs) a year or two ago for that reason.

I guess that's the end of Firefox's portability. Most of our users use Chromium anyway because Firefox has been so unstable and crash-prone. Long live Chromium?

-Matt

Comment Re:I'll stick with HDDs for now (Score 1) 167

Your problem was that you were using Kingston, Patriot, etc... all third-rate SSD vendors who use whatever flash chips happen to be cheapest. Crucial (aka Micron), Samsung, and a few others are first-line vendors.

SSDs can certainly fail, but it's kind of like PSUs... some vendors are first-line, most are not.

-Matt

Comment Now this is very cool (Score 5, Insightful) 306

Or Hot :-). I read a number of articles from analysts who thought it would take around 15 years for the technology to be produced in commercial volumes. But the fact that this looks like it's going to happen at all, even on a 10-15 year time-frame, is a BIG deal. 3x the charge will give electric vehicles a 600+ mile range.

-Matt

Comment What, you haven't noticed until now? (Score 1) 644

Ultimately automation improves everyone's quality of life, but it does so by requiring fewer workers for the same output. The work available is higher-quality (improved quality of life), but there are fewer slots. Automation also reduces the cost of goods. Just google what it costs, as a percentage of your salary (i.e. the fraction of your work time required to fund it), to heat and light your home, and compare that with the cost a hundred years ago. Conditions are so much better today than, say, in the 1800's, 1700's, or 1600's.

But there is a cost. Automation and technology also cause a major dislocation as the population must find new and different things to do than the things they did before.

Look at coal mining today. Since the 50's, output has gone from roughly 1:1 (ton:worker) to 12:1 (ton:worker). In other words, your average coal mine today has 1/12 the number of workers needed for the same output 70 years ago. The same thing is happening in ALL industries.

However, automation also creates relatively severe economic disparities between people. Investors get a larger piece of the pie, workers get a smaller piece of the pie. This is because automation improves margins (even when goods cost less) but the larger number of people trying to go after fewer job slots forces wages down. Since investors tend to be more affluent, the result is that the rich get much richer and the poor get much poorer, relatively speaking. Even including the fact that there must be people to buy the goods in order to be able to sell them, the goods do not become cheaper quickly enough to completely offset the difference in economic standing.

Most people don't understand that this is a relative equation, not an absolute equation. The quality of life for everyone can improve at the same time that the economic disparity increases. But perception is relative in nature, and today's society is not kind to people doing dumb things (credit card debt being a prime example).

A *LARGE* portion of US citizens do dumb things, not the least of which is electing people to government on promises that translate, in reality, to the exact opposite of what the person was elected to achieve.

Take taxes for example. Remember that the Rich already have most of the wealth in the U.S. All current policy now points to a major reduction of taxes on the Rich, and a number of states have tried to reduce taxes. That is going to make the gap even worse. Your typical tax payer on the lower end of the economic scale might save $200 in a year. Your typical affluent 'rich' person is going to save $200,000 in a year. And more. And yet people vote for this crap because they think saving that $200 will make life better. It won't, because the whole system will then rebalance based on $200,200, which puts the people on the lower end of the economic scale in an even worse position a few years down the line. It's like spreading a few crumbs onto the pavement while the master eats 99.9% of the pie.

-Matt

Comment Some hints (Score 1) 118

I have a few suggestions for people. I've used computers from a very young age and have had my own since 7th grade, so eye strain has always been an issue for me.

(1) If you are near-sighted (which I am), have your prescription *slightly* detuned so it isn't perfect. Mine is detuned by, I think, around 0.25. This reduces eye strain by a HUGE amount. You won't be able to read highway signs from far away, but who needs to do that any more with GPS nav?

(2) Tinge your desktop foreground coloring scheme more towards the greens, do not use a solid dead-black background, and do not have a background picture that is bright relative to your windows. This reduces excess contrast while simultaneously allowing you to reduce the brightness of the screen. Excess contrast is a major source of eye strain. If characters seem to burn into your retina, you have too much contrast.

For example, for xterms I use the following resources:

xterm*background: #100010000000
xterm*foreground: #7FFFDFFFDFFF
xterm*cursorColor: white

(3) Monitor(s) should be at arm's length from your eyes, with your fingers stretched out. Any closer and your eyes will strain from excessive crossing.

(4) Glasses vs contacts? I don't know. I prefer glasses myself, and I've never really liked contacts, so mostly I just don't use them any more... it's glasses all the way.

-Matt

Comment Advent of Code (Score 1) 312

A lot of the other responses here focus on what language to choose, but not what to do with it. So pick a language - Python is good, Ruby is good, PHP if you want to work with web sites, C# if you want to work with Microsoft stuff, Swift if you want to work with Apple stuff. And then:

- Sign up for a free Github account (https://github.com).

- Start working through the Advent of Code challenges. (http://adventofcode.com) They're a set of 25 two-part challenges posted in December 2015, with another set in December 2016. I recommend starting with the 2015 set (because I think it was more straightforward). The reason I suggest these is that they are problems with specific solutions (you'll know when you get them right), they will make you learn how to solve problems in your language of choice, and there are a lot of posts on Reddit (https://www.reddit.com/r/adventofcode/) with tips for each individual challenge. There are other programming puzzles out there, but Advent of Code is a good collection of challenges in one place.

- Post your solutions on your Github account. This will get you familiar with using Git. It's a good skill to have.

- Tweak your solutions if you'd like. Find a way to write more concise code? Want to practice documenting better? Interested in optimizing your solutions to run faster in less memory? There you go.

- If you ever want to use your new programming skills for a job, link to your Github repo from your resume. It will let people clearly see how you code.
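To give a taste of what the early puzzles feel like, here's a minimal Python sketch in the style of Advent of Code 2015 Day 1 (the parentheses/floors puzzle); the input string here is made up for illustration, not a real puzzle input:

```python
# In the style of AoC 2015 Day 1: '(' means go up one floor,
# ')' means go down one floor; report the final floor.
def final_floor(instructions: str) -> int:
    return instructions.count("(") - instructions.count(")")

print(final_floor("(()(()("))  # 5 ups, 2 downs -> prints 3
```

Your first working version will probably be longer than this; tightening it up afterward is exactly the kind of tweaking mentioned above.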

Good luck!

Comment Inevitable (Score 5, Insightful) 97

We already dropped 32-bit support in DFly. There are many good reasons for doing it on Linux and the other BSDs as well. I will outline a few of them.

(1) The big reason is that kernel algorithms on FreeBSD, DragonFly, and Linux are starting to seriously rely on having a 64-bit address space to be able to properly size kernel data structures and KVM reservations. While (for FreeBSD) 32-bit builds still work, resource limitations are fairly confining relative to the resources that modern machines have (even 32-bit ones).

(2) Being able to have a DMAP makes kernel programming a whole lot easier. You can't have one on a 32-bit system unless you limit RAM to something like 1GB. Being able to make a DMAP a kernel-standard requirement is important moving forward.

(3) Modern systems are beginning to rely more and more (on x86 anyway) on having the %xmm registers available, to the point where many compilers now just assume that they exist. ARM's 64-bit architecture also has some nice goodies that it would be nice to be able to rely on in-kernel.

(4) Optimizations for 64-bit systems create regressions on 32-bit systems. Memory copies, zeroing, and memset, for example. Even if 32-bit support is kept, performance on those systems will continue to drop.

(5) There is a lot of ancient cruft in 32-bit code that we kernel programmers don't like to have to sift through. For example, being able to get rid of the EISA and most of the ISA support went a long way towards cleaning up the codebase. Old drivers stick in the craw because nobody can test them any more, so the chances of them even working on an old system are reduced with every release. Eventually it gets to the point where there's no point trying to maintain the old driver.

(6) People should not expect modern features on old machines. The cost of replacing that old machine is minimal. Live with it. It's part of the price of progress. If the industry is a bit slow to understand what 'old' means, then the fewer systems that support these older architectures the better; it will make the point more obvious to the corporations that have lost their innovative edge.

(7) For ARM, going back to the corporate point, there's really no reason under the sun to continue producing 32-bit CPUs, even for highly embedded and IoT stuff. The world has moved on, and even embedded systems have major resource limitations in 32-bit configurations. If kernel programmers have to put an exclamation mark on that point, then so be it.

-Matt

Comment Re:Still no competition (Score 1) 91

It's not 25% of the price, though. Not sure where you got that from. The benchmark Intel CPU that AMD is competing against is the i7-7700K, which is $350 on Amazon. It will be the AMD 6-core against that one.

AMD will also be able to compete against Intel's i3s. An unlocked Ryzen *anything* (say, the 4-core Ryzen) will be the hands-down winner against any Intel i3 chip on the low end. Intel will have to either unlock the multipliers on all of its chips to compete, or pump up what they offer in their i3-* series.

The AMD 8-core is probably not going to be a competitive consumer chip.

-Matt

Comment Any passive 3D computer monitors still being sold? (Score 1) 435

Active 3D is a pain, with the need for expensive shutter glasses. But passive 3D is wonderful, with each scan line being polarized in opposite directions. Passive 3D glasses are cheap and the displays don't need high refresh rates.

I'd like to have a passive 3D computer monitor for gaming, but it looks like there aren't any on the market any more. So I figured I'd ask here - anyone know of any that are still being sold?
