
Comment Re:Backups (Score 1) 184

No, it's a stupid recommendation. Spinning rust doesn't last very long on a shelf, and it will rapidly go bad mechanically if you keep switching between shelf and active use. SSDs are far superior, and data retention is going to remain very high until you really dig into their durability. If you still care, there's no reason why you can't just leave them disconnected from a computer but still powered... they draw almost no current compared to a hard drive. SSD-based data retention should be 30+ years if left powered... impossible to test as yet :-)... but there's no reason why not.

However, for backup purposes there is still an issue of cost. Using SSDs for bulk backup storage can be expensive... it wouldn't matter for a big business so much but cost can be a big issue for individual users.

SSDs don't go bad the way HDDs do. With an HDD the maximum reasonably safe life is about 3 years whether powered or not (and swapping between powered and shelf will radically reduce its durability). With an SSD only write durability really matters. A business can easily justify buying the required SSD storage in bulk with a marginal-cost calculation, but it might be too big a chunk of change for an individual.

Personally speaking I still use HDDs for my backups, for reasons of cost, but I expect in the next few years that will change as SSD prices continue to drop. I just bumped up from 2TB x 3 (active, on-site backup, off-site backup) to 4TB x 3. My storage needs are going up more slowly than the technology is dropping in price. The two will meet in a few years and I'll be 100% SSDs. I'm already 100% SSDs for everything else. No point even contemplating an HDD any more except for bulk backup storage or software test rigs.
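To put rough numbers on that crossover, here's a sketch. Every figure in it (prices, decline rate, growth rate, budget) is a placeholder assumption for illustration, not market data:

```python
# Rough projection of when an SSD backup copy drops under a price threshold.
# All parameter defaults are illustrative assumptions, not real market data.

def years_until_affordable(ssd_price_per_tb=80.0,    # assumed $/TB today
                           price_drop_per_year=0.20,  # assumed 20%/yr decline
                           need_tb=4.0,               # current backup set size
                           need_growth_per_year=0.10, # assumed 10%/yr growth
                           budget=200.0):             # what you'd pay per copy
    """Return the first year the projected cost of one SSD backup copy
    fits the budget, or None if it never does within 20 years."""
    for year in range(21):
        cost = ssd_price_per_tb * ((1 - price_drop_per_year) ** year) * \
               need_tb * ((1 + need_growth_per_year) ** year)
        if cost <= budget:
            return year
    return None

print(years_until_affordable())  # → 4 with the assumed defaults
```

The point is just that price decline outpacing storage growth guarantees a crossover year; plug in your own numbers.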

-Matt

Comment Re:toy anyway (Score 1) 65

Actually, more and more SSDs today *DO* have power loss protection. Take one apart... if you see a bunch of capacitors on the mainboard all bunched together with no obvious purpose, they're probably there to keep power good long enough to finish writing out metadata. It's cheaper to use a lot of normal caps than to use high-capacity thin-film caps.

-Matt

Comment Re:Strange Linux behavior (Score 1) 65

This is not related to the SSD. If your CPUs are pegged then it's something outside the disk driver. If it's system time it could be two things: (1) the compilers are getting into a system-call loop of some sort, or (2) the filesystem is doing something that is causing lock contention or other problems.

Well, it could be more than two things, but it is highly unlikely to be the SSD.

One thing I've noticed with fast storage devices is that sometimes housekeeping operations by filesystems can stall out the whole system because the housekeeping operations assume the disk I/O will block when, in many cases, the disk I/O completes instantly and essentially does not block, causing the kernel thread to eat more cpu than intended.
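The first triage step is just splitting user time from system time; a minimal stdlib-only sketch of that split (the two toy workloads are stand-ins for "compiler busy in user space" vs. "stuck making system calls"):

```python
# Split CPU time into user vs system components -- the same split you'd
# use to decide whether pegged CPUs point at the compiler or the kernel.
import os

def cpu_split(fn):
    """Run fn() and return (user_delta, system_delta) in CPU seconds."""
    before = os.times()
    fn()
    after = os.times()
    return after.user - before.user, after.system - before.system

def syscall_heavy():
    for _ in range(50_000):
        os.stat('.')          # each iteration crosses into the kernel

def compute_heavy():
    sum(i * i for i in range(2_000_000))  # stays entirely in user space

u1, s1 = cpu_split(syscall_heavy)
u2, s2 = cpu_split(compute_heavy)
print(f"syscall loop: user={u1:.3f}s sys={s1:.3f}s")
print(f"compute loop: user={u2:.3f}s sys={s2:.3f}s")
```

If the build's time shows up on the system side, suspect the filesystem or a syscall loop, not the SSD.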

-Matt

Comment There's no news here. (Score 1) 184

These tests explicitly state that the SSD is rewritten until it reaches its endurance rating before the retention test is done. At that point the flash in a consumer drive would not be expected to retain data unpowered for more than 1 year.

If you write your data to a fresh SSD once, multiply the number by at least 10.
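As a back-of-the-envelope model of that scaling (the linear interpolation between the fresh and fully-worn endpoints is my assumption for illustration; real retention curves are nonlinear):

```python
# Crude unpowered-retention estimate: ~10 years for fresh flash, scaling
# down to ~1 year at the rated endurance. Linear interpolation is an
# illustrative assumption, not a datasheet formula.

def retention_years(tb_written, rated_tbw,
                    fresh_years=10.0, worn_years=1.0):
    wear = min(tb_written / rated_tbw, 1.0)
    return fresh_years - (fresh_years - worn_years) * wear

print(retention_years(0, 150))    # fresh drive, written once -> 10.0
print(retention_years(150, 150))  # at rated endurance -> 1.0
```

Which is the "multiply by at least 10" in a formula: a once-written drive sits at the fresh end of the curve.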

-Matt

Comment Re:Specced too low, weird form factor (Score 1) 174

This is the *mobile* i5, not the full blown desktop i5. It's basically the Broadwell successor to the Haswell 29xx series. 15W TDP or less. The BRIX runs 8W idle (not sleeping) and 20W at 100% cpu (all 4 threads full out). Intel is playing fast and loose with their naming schema for Broadwell.

-Matt

Comment Re:Specced too low, weird form factor (Score 1) 174

All the older Haswell-based boxes have dropped in price significantly. They make decent boxes too as long as you are not compute-heavy. E.g. the 2957U is 2-core, no hyperthreading, 1.4 GHz, no turbo, and no AESNI (so https and other crypto is slow). Whereas even the Broadwell i3-5200U is 2-core/4-thread, 2.2 GHz with Turbo to 2.7 GHz, and has AESNI.

I have an Acer C720P chromebook running DragonFly (BSD) with the 2955U in it, which is very close to the 2957U. I would call it decent for its purpose and it can certainly drive the chromebook's display fairly well. Firefox is not as snappy as I would like, though.

On the i5-5200U even unaccelerated video decoding can run full frame at full speed on my 1920x1050 monitor and firefox is quite snappy.

If I had to make a cost-conscious decision on using the older Haswell-based cpu and giving up some cpu power, I would say that it would still be a reasonable choice *BUT* I would compensate at least a little by throwing in more ram (at least 4GB).

-Matt

Comment Specced too low, weird form factor (Score 2) 174

It's specced way too low to really be useful as a general computing device, and the form factor is 'weird' to say the least. It's too big to really be called a stick, and too small to be able to pack a decent cpu. There's plenty of space behind the monitor for a somewhat larger device in a better form factor. The stick is a play toy that you will become disappointed with very quickly (think the old 'netbook' concept Intel tried to push a few years ago... that's what the stick feels like).

Honestly, the 'compute stick' makes zero sense for a TV-mounted device. It is far better to just go with a Chromecast stick or an AppleTV for AirPlay, using a pad or cell in your hand to control it, if you want to throw a display up on the TV. Otherwise you will be fumbling around with a horrible remote or you'll have to throw together a bluetooth keyboard (etc...) and it just won't be a fun or convenient experience.

My recommendation... don't bother with this gadget. Instead, spend a bit more money and get an Intel NUC or Gigabyte BRIX (both based on Broadwell). And get at least the i5 version, the lack of turbo in the i3 version is telling. e.g. i5-5200 based box or better. It will cost significantly more than the stick, but it packs a decent cpu, can take up to 16GB of ram (2x204pin SO-DIMM DDR3), and depending on the model might even have room for a 2.5" SSD or HDD in it. The broadwell i5-5200U makes for quite a reasonable compact workstation and boxes based on it will be almost universally dual-headed. Of course, whatever floats your boat but I would definitely say that the lowest-priced Intel NUC or Gigabyte BRIX that is haswell-based or broadwell-based is still going to be an order of magnitude better than the compute stick.

I have one of the Gigabyte GB-BXi5H-5200's myself ('H' version fits a normal 2.5" SSD or HDD) and packed 16GB of ram into it. It is dual-headed so I can drive two displays with it and the box is small enough to mount on the back of a monitor if you so desire (it even includes a mounting plate and most monitors, such as LG monitors, are ready to take it). And if mounting it on the back of a TV doesn't make sense, mount it on the back of a monitor instead or just let it float behind the monitor. It's a small box, after all, it won't get in the way of anything. 4-thread (2-core), 2.2 GHz turbo to 2.7 GHz. Dual-head. Decent.

-Matt

Comment Re:Latency vs bandwidth (Score 5, Interesting) 162

That isn't correct. The queue depth for a normal AHCI controller is 31 (assuming 1 tag is reserved for error handling). It only takes a queue depth of 2 or 3 for maximum linear throughput.

Also, most operating systems are doing read-ahead for the program. Even if a program is requesting data from a file in small 4K read() chunks, the OS itself is doing read-ahead with multiple tags and likely much larger 16K-64K chunks. That's assuming the data hasn't been cached in ram yet.

For writing, the OS is buffering the data and issuing the writes asynchronously so writing is not usually a bottleneck unless a vast amount of data is being shoved out.
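The read-ahead the OS does on your behalf can even be requested explicitly. A sketch of the idea (Linux-flavored; `os.posix_fadvise` doesn't exist on every platform, hence the guard):

```python
# Sketch of explicit read-ahead: hint to the kernel that we will want a
# much larger window than the 4K chunks the application reads, mirroring
# what OS read-ahead heuristics do automatically.
import os, tempfile

data = os.urandom(256 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

fd = os.open(path, os.O_RDONLY)
if hasattr(os, "posix_fadvise"):
    # Prefetch the first 64K before we start issuing 4K reads.
    os.posix_fadvise(fd, 0, 64 * 1024, os.POSIX_FADV_WILLNEED)

chunks = []
while True:
    chunk = os.read(fd, 4096)   # the application-visible 4K reads
    if not chunk:
        break
    chunks.append(chunk)
os.close(fd)
os.unlink(path)
print("read", len(data), "bytes in 4K chunks")
```

The application still sees 4K read()s; the kernel has already queued the larger transfer.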

-Matt

Comment Re:ISTR hearing something about that... (Score 2) 162

Actually, large compiles use surprisingly little actual I/O. Run a large compile... e.g. a parallel buildworld or a large ports bulk build or something like that while observing physical disk I/O statistics. You'll realize very quickly that the compiles are not I/O constrained in the least.
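On Linux the physical I/O statistics are right there in `/proc/diskstats`; a minimal parser for checking this yourself (field layout per the kernel's iostats documentation, sectors are 512 bytes; the sample line below is made up for illustration):

```python
# Parse /proc/diskstats to see how much physical I/O a build actually does.
# Field layout follows the kernel's iostats documentation; sectors are 512B.

def parse_diskstats(text):
    stats = {}
    for line in text.splitlines():
        f = line.split()
        if len(f) < 10:
            continue
        stats[f[2]] = {
            "read_mb":  int(f[5]) * 512 / 1e6,   # field 6: sectors read
            "write_mb": int(f[9]) * 512 / 1e6,   # field 10: sectors written
        }
    return stats

# Illustrative sample line (not real data):
sample = "8 0 sda 52312 1432 2097152 8812 10021 933 4194304 15023 0 9000 23000"
print(parse_diskstats(sample))

# In practice: snapshot before, run the parallel build, snapshot after,
# and diff the two dicts:
#   before = parse_diskstats(open('/proc/diskstats').read())
#   ... make -j buildworld ...
#   after  = parse_diskstats(open('/proc/diskstats').read())
```

Diff the before/after numbers against the build's wall time and you'll see how little of it the disk accounts for.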

'Most' server daemons are also not I/O constrained in the least. A web server can be IOPS-constrained when asked to load, e.g., tons of small icons or thumbnails. If managing a lot of video or audio streams, a web server typically becomes network-constrained, but the IOPS will be high enough to warrant at least a SATA SSD and not a HDD.

Random database accesses are I/O constrained if not well-cached in ram, which depends on the size of the database too, of course. Very large databases which cannot be well cached are the best suited for PCIe SSDs. Not a whole lot else.
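A toy model of that caching argument (uniform random access is the simplifying assumption; real workloads are skewed and cache much better than this):

```python
# Toy estimate of physical reads for random DB access: with uniform
# random access, the fraction of requests missing the RAM cache is
# roughly the fraction of the database that doesn't fit in RAM.
# Uniformity is an illustrative assumption.

def expected_miss_rate(db_gb, cache_gb):
    return max(0.0, 1.0 - cache_gb / db_gb)

for db in (50, 500, 5000):
    rate = expected_miss_rate(db, 64)
    print(f"{db} GB db, 64 GB cache: {rate:.0%} of reads hit disk")
```

It's only when the miss rate (times request rate) climbs past what a SATA SSD can serve that a PCIe SSD starts paying for itself.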

-Matt

Comment Not surprising (Score 4, Informative) 162

I mean, why would anyone think images would load faster? The cpu is doing enough transformative work processing the image for display that the storage system only has to be able to keep ahead of it... which it can do trivially at 600 MBytes/sec if the data is not otherwise cached.

Did the author think that the OS wouldn't request the data from storage until the program actually asked for it? Of course the OS is doing read-ahead.

And programs aren't going to load much faster either, dynamic linking overhead puts a cap on it and the program is going to be cached in ram indefinitely after the first load anyway.
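The read-vs-decode split is easy to see with a stand-in; here zlib decompression plays the role of image decoding (an assumption for illustration — a real JPEG/PNG decoder is heavier still relative to the read):

```python
# Compare the 'read from storage' step against the 'decode' step.
# zlib decompression stands in for image decoding in this sketch.
import time, zlib, os, tempfile

raw = b"pixel data " * 200_000           # ~2.2 MB of compressible data
blob = zlib.compress(raw, 6)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(blob)
    path = f.name

t0 = time.perf_counter()
with open(path, "rb") as f:
    on_disk = f.read()                   # the 'storage' step
t1 = time.perf_counter()
decoded = zlib.decompress(on_disk)       # the 'decode' step
t2 = time.perf_counter()
os.unlink(path)

print(f"read:   {(t1 - t0) * 1000:.2f} ms")
print(f"decode: {(t2 - t1) * 1000:.2f} ms")
```

Once the storage side keeps ahead of the decode side, a faster disk changes nothing the user can see.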

These PCIe SSDs are useful only in a few special, mostly server-oriented cases. That said, it doesn't actually cost any more to have a direct PCIe interface versus a SATA interface, so I expect these things are here to stay. Personally, though, I prefer the far more portable SATA SSDs.

-Matt

Comment Re:Wow... (Score 1) 51

Well, except that it isn't a mere month. Unpowered data retention is around 10 years for relatively unworn flash and around 1 year for worn flash. Powered data retention is almost indefinite (doesn't matter if the data is static or not). The modern SSD controller will rewrite blocks as the bits leave the sweet zone.

The main benefit, though, is that SSD wear is essentially based on how much data you've written, which is a very controllable parameter and means, among other things, that even a SSD which has been sitting on a shelf for a long time and lost its data can still be used for fresh data (TRIM wipe + newfs). I have tons of SSDs sitting on a shelf ready to be reused when I need them next. I can't really do that with HDDs and still expect them to be reliable.
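The "TRIM wipe + newfs" recommissioning is just two commands; a dry-run sketch that only prints them (the device name is a deliberate placeholder, and the Linux `blkdiscard`/`mkfs.ext4` spellings are assumptions — on BSD it would be a trim pass plus `newfs`; these are destructive if actually run):

```python
# Build, but do not run, the commands for recommissioning a shelf SSD:
# a full-device TRIM followed by a fresh filesystem. DESTRUCTIVE if run
# against a real device -- the placeholder name is intentionally invalid.
dev = "/dev/sdX"   # placeholder; substitute with great care

commands = [
    ["blkdiscard", dev],   # TRIM every block (Linux; BSD has its own trim)
    ["mkfs.ext4", dev],    # fresh filesystem (BSD equivalent: newfs)
]

for cmd in commands:
    print("would run:", " ".join(cmd))
```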

Hard drives have a relatively fixed life whether powered or not. If you have a modestly used hard drive and take it out and put it on a shelf for a year, chances are it either won't be able to spin up after that year or it will die relatively quickly (within a few weeks, possibly even faster) once you have spun it up. So get your data off it fast if you can.

So SSDs already win in the data retention and reliability-on-reuse department.

-Matt

Comment Re:nvidia/ATI should keep their new stuff propriet (Score 1) 309

I don't understand what you mean by 'non-graphics competitors'. Intel, AMD, and ARM cpu offerings already have integrated GPUs with dual-head capability (and have for a few years now). There are no non-graphics competitors.

Currently the best open source kernel and driver compatibility is with the Intel and AMD integrated GPUs. That's what all the KMS work was responsible for giving us. The performance of integrated GPUs has increased steadily over the last few years and has reached a point now where most 3D games will run with modest (but not high-end) settings, and *all* 2D (aka desktop operations) will run faster than you can blink.

I splurged for a mid-range card for my windows gaming box, but all my workstations just use the cpu-integrated gpus these days for dual-head operation. And they're nice and quiet and fast.

-Matt

Comment Consumers are not going to notice much difference. (Score 2) 72

Well, nobody with a laptop is really going to notice much of a difference because frankly there isn't a whole lot of software that actually needs that kind of performance over the ~550 MBytes/sec that can already be obtained with SATA-III. Certainly not that would be run on a laptop anyway.

It's just using the PCI-e lanes on the M.2 connector instead of the SATA-III lanes. This isn't a magical technology. There's a loss of robustness and portability that gets traded off. It does point to SATA needing another few speed bumps, though. The fundamental serial link technology used at the physical level by PCI-e and SATA is almost identical. The main difference is that SATA is designed for cabling while M.2 is not (at least not M.2's PCI-e lanes).
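The shared serial-link heritage makes the bandwidth arithmetic easy to check (encoding overheads per the SATA 3.x and PCIe 3.0 line codings; the usable figures are approximate, before protocol overhead):

```python
# Why SATA-III tops out near ~600 MB/s while the M.2 PCIe lanes go far
# higher: same serial-link idea, different lane count and line encoding.

def link_mb_s(gbit_per_lane, encoding_efficiency, lanes=1):
    """Approximate raw MB/s for a serial link.
    encoding_efficiency: 8b/10b -> 0.8, 128b/130b -> 128/130."""
    return gbit_per_lane * 1e9 * encoding_efficiency / 8 / 1e6 * lanes

sata3   = link_mb_s(6.0, 0.8)                    # SATA-III: 6 Gb/s, 8b/10b
pcie3x4 = link_mb_s(8.0, 128 / 130, lanes=4)     # PCIe 3.0 x4, 128b/130b
print(f"SATA-III: ~{sata3:.0f} MB/s raw (~550 MB/s after protocol)")
print(f"PCIe3 x4: ~{pcie3x4:.0f} MB/s raw")
```

Same physical-layer trick either way; the M.2 PCIe path just gets more lanes and a denser encoding.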

-Matt
