
Comment Re:Write limits (Score 1) 183

MLC at ~20nm is rated for roughly 1000-3000 erase cycles (older MLC parts built on larger feature sizes could run in the 10000+ range). The wear indicator will probably hit zero somewhere between 1000 and 2000 erase cycles (averaged over the full drive). The indicator is usually conservative, so it might read zero after ~1000 erase cycles or so, but the drive can usually continue to be written to well past 2000 erase cycles.
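
To put those numbers in perspective, here is a rough back-of-the-envelope endurance estimate in Python. The capacity, write-amplification, and workload figures are illustrative assumptions, not vendor specs:

```python
# Rough SSD endurance estimate (illustrative numbers, not vendor specs).
capacity_gb = 256          # assumed drive capacity
erase_cycles = 2000        # ~20nm MLC, per the 1000-3000 range above
write_amplification = 2.0  # assumed controller write amplification

# Total host writes the flash can absorb before the wear budget is spent.
endurance_tb = capacity_gb * erase_cycles / write_amplification / 1000.0

# Years of life at a given sustained host write rate.
writes_per_day_gb = 20     # assumed workload
years = endurance_tb * 1000.0 / writes_per_day_gb / 365.0

print(f"~{endurance_tb:.0f} TB of host writes, ~{years:.0f} years at {writes_per_day_gb} GB/day")
```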

However, if you continue to write significantly to the drive after the wear counter reaches zero, you risk catastrophic failure above and beyond what the vendor nominally targets. Past a certain point, cells can go bad after sitting idle for a while and cause massive corruption, even if the last write-and-verify succeeded.

-Matt

Comment Re:Are you kidding? (Score 1) 183

There is a lot of misinformation on SSD failure modes, mostly because people ignore what SMART tells them and run their SSDs into the ground before replacing them.

Basically it comes down to this... The drive has a pretty good idea of how worn down the flash chips are. It counts the SMART wear indicator down from 100 to 0. Once it reaches 0, the drive can no longer guarantee nominal operation.
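
A minimal sketch of reading that indicator with smartctl, assuming a SATA drive at /dev/sda and run as root. The attribute name varies by vendor (e.g. Media_Wearout_Indicator on Intel, Wear_Leveling_Count or Percent_Lifetime_Remain elsewhere), so treat the names, the column layout, and the replacement threshold below as assumptions to adapt:

```python
import subprocess

# Vendor-specific SMART attribute names that report remaining wear.
# Adjust for your drive; `smartctl -A` shows what your model actually exposes.
WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count", "Percent_Lifetime_Remain")

def wear_remaining(device="/dev/sda"):
    """Return the normalized wear value (100 = new, 0 = worn out), or None."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Assumes the usual ATA attribute table: ID, NAME, FLAG, VALUE, ...
        if len(fields) >= 4 and fields[1] in WEAR_ATTRS:
            return int(fields[3])
    return None

if __name__ == "__main__":
    value = wear_remaining()
    if value is not None and value <= 10:
        print(f"wear indicator at {value}: plan replacement now")
```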

If you continue writing to the drive AFTER the wear indicator has reached 0 then you CAN have catastrophic failure situations. The more you write to the drive after that point, the more likely it becomes. In many cases you can write 2x or more the amount it took to get from 100 to 0 and the drive will work fine, which is why people tend to ignore the wear indicator after it has hit 0. But the drive simply can't guarantee data integrity once the wear indicator hits 0.

If you replace the drive when the wear indicator reaches zero then you are far less likely to get a catastrophic failure. Sure, they can still happen... the firmware doesn't fail the drive on a schedule, after all. But if you use the wear indicator properly you can reduce the catastrophic failure cases by at least an order of magnitude (and probably better).

It's that simple.

--

It is also true that there have been many firmware bugs over the years. SSDs at these capacities are fairly new beasts after all. There are dozens and dozens of SSD brands (though really only three major controller chipsets). Firmware is where the differences are. The only two vendors *I* trust are Crucial and Intel and that is it... I trust them because they are clearly staying on top of their firmware upgrades.

Even with the firmware bugs I know exist on my oldest SSDs (Intel 40Gs), I haven't had any failures. In fact, at this point I have at least 30 SSDs in operation and have yet to see any honest-to-god failures. I am certainly more confident in newer models (for Intel and Crucial, anyhow) than older ones. Firmware might have a few hiccups here and there as time passes, but it generally just gets better and better. Judging a current deployment based on 5-year-old distrust is stupid.

-Matt

Comment Re:writes no problem for HOME use. Months for serv (Score 1) 183

I think you're a bit confused about how normal filesystem operations are cached on a modern OS (e.g. OS X, Linux, the BSDs, Solaris, etc). Even normal log writing (whether you run it through a compression program or not) on a normal non-SSD-aware filesystem is not going to result in tiny little writes to the SSD. The writes simply wind up in the buffer cache (or equivalent) and get flushed to media when the filesystem syncer comes around every 30-60 seconds or so. Even the most highly fragmented case might require the SSD to flush a 128KB block a dozen times every 60 seconds, which doesn't even remotely wear it out. Only a complete idiot tries to fsync() a log file on each line, so barring that... it isn't an issue.
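
A small sketch of the difference being described, using a hypothetical app.log path: buffered writes let the OS coalesce many log lines into a handful of larger flushes, while an fsync() per line defeats that coalescing entirely.

```python
import os

LOG_PATH = "app.log"  # hypothetical log file used for illustration

def log_buffered(lines):
    """Normal logging: lines accumulate in the buffer cache and the
    filesystem syncer flushes them in large chunks every 30-60s."""
    with open(LOG_PATH, "a") as f:
        for line in lines:
            f.write(line + "\n")
    # No per-line fsync: the OS decides when to push data to the SSD.

def log_fsync_per_line(lines):
    """The anti-pattern: forcing a flush after every line turns each
    log entry into a separate small write to the device."""
    with open(LOG_PATH, "a") as f:
        for line in lines:
            f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())  # defeats write coalescing entirely
```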

I've heard this complaint many times over the years and not one person has EVER provided any factual information as to what and how much and how often they are actually writing to the SSD. Not once.

The amount of data being written is always a consideration with an SSD, but for permanently stored data it isn't the issue you think it is: the equivalent cost of archival storage is actually better with an SSD, because in a write-once situation the drive effectively lasts forever (maybe rewrite the full drive once every 5 years or so to refresh the cells, against a budget of ~1000-3000 rewrites). The SSD will easily last 25 years or longer (probably until the firmware itself degrades), whereas an HDD has to be replaced every 3-5 years whether it's off, idle, or doing work. SSDs are great for archiving stuff. They take virtually no energy when idle, can simply be left attached and powered, and the only real wear occurs when you write.

Temporarily staged data is probably an SSD's one real weakness. There is a wear limit, after all, so constantly rewriting the drive at a high rate will wear it out. But this also has an easy solution: since such data is usually laid down linearly and processed linearly, HDDs are still useful as the storage medium. Simpler staging of temporary data doesn't even have to touch ANY media if the data trivially fits in RAM... you just use a tmpfs mount and schedule a job to process the data at reasonable intervals.
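
A sketch of that tmpfs staging idea, under the assumption that a memory filesystem is already mounted at /tmp/stage (e.g. via `mount -t tmpfs` on Linux or `mount_tmpfs` on the BSDs). The path, the batch interval, and the handle() step are all illustrative:

```python
import glob
import os
import time

STAGE_DIR = "/tmp/stage"   # assumed tmpfs mount, so staging never touches the SSD

def stage(name, data: bytes):
    """Drop a work item into the memory-backed staging area."""
    with open(os.path.join(STAGE_DIR, name), "wb") as f:
        f.write(data)

def handle(data: bytes):
    pass  # placeholder for the real downstream processing

def process_batch():
    """Run periodically (e.g. from cron) to drain the staging area."""
    for path in glob.glob(os.path.join(STAGE_DIR, "*")):
        with open(path, "rb") as f:
            handle(f.read())
        os.unlink(path)            # staged data never needs to hit real media

if __name__ == "__main__":
    while True:
        process_batch()
        time.sleep(300)  # process at a reasonable interval, per the comment above
```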

In terms of swap, again you appear to be confused. Simply placing swap on an SSD is not going to wear it out. It depends heavily on how much the OS actually pages data in and out. In most consumer/home situations the answer is 'not often' (relative to the SSD's wear limit). In a server situation swap is not written to at all under normal operating conditions unless someone made a major mistake; it's just there to handle DoS attacks and burst situations, so the system can be tuned to utilize all of its resources as fully as possible.

For example, a tmpfs (memory filesystem) backed by swap can actually be VERY write-efficient, since the OS won't flush it to its backing store unless the system is actually under memory pressure. If things are scheduled so the system is not normally under memory pressure (the typical case for a server installation), the SSD won't get worn out... but the swap is still available for the occasional situation that really needs it.

Oh well, I don't expect much from Slashdot posters anyway. But, honestly, these things should be obvious to people by now.

-Matt

Comment Re:1 BTC = Between $20,000.00 and $500,000.00 (Score 0) 371

That's a joke, right? You actually believe Bitcoin is accepted and trusted by merchants? You can count the number of serious businesses trying to use bitcoin on one hand. Walmart's virtual currency is already used in far more transactions than Bitcoin ever will be.

You are staring at pretty numbers on a screen and have absolutely no sense of scale.

And as far as credit cards go, the space is getting pretty crowded already, with Yahoo, Google, and others pushing in (after realizing that wallet apps are pretty useless without them). Let's see someone do monetary conversions into a bitcoin account with a credit card and see how long it takes them to either go bankrupt or be forced to charge significant fees to stay in business.

Bitcoin has no advantage whatsoever as a first-mover when the 'currency' is too unstable to actually be usable in the wider world of commerce.

-Matt

Comment Re:1 BTC = Between $20,000.00 and $500,000.00 (Score 1) 371

Why would the world population want to do that when they could be using BTC2 or BTC3 or ... BTCn (each with a slight tweak of the algorithm) instead?

There are tons of virtual currencies already in existence, and no barrier to entry for creating more (including no barrier to entry for creating clones of Bitcoin that use a different cryptographic base).

-Matt

Comment Re:Other Uses (Score 2) 371

It costs me $0 to transfer dollars to relatives electronically, bank-to-bank. No fees at all.

Also, perhaps you are unaware, but Western Union has been losing business for two decades now. Many immigrants now send money home by opening a bank account and mailing a physical ATM card to their relatives, which can cost as little as $0 if the relatives withdraw the cash in dollars (which many can), and as little as 1% if a currency conversion is involved.

Even using a non-network ATM typically has a fixed charge, e.g. $3/$300 = 1% charge to get a hard currency in one's pocket.

Bitcoin cannot do better (people who think they can transfer BTC for free are ignoring both the exchange spreads and the time/volatility factor).

-Matt

Comment Re:Sell now. (Score 1) 371

This mechanism is no guarantee that the seller will actually get the price they programmed in. In a crash there is unlikely to be a buyer at or above the programmed-in price, and the seller can't force counter-parties to buy BTC (or anything else) at the desired price.

In fact this mechanism exists in the real stock market and is manipulated all the time by contrived flash crashes designed to force those sell points to be hit. Since the price crashes through the sell levels so quickly, there are no buyers at those prices and the sales wind up going off at market, forcing the stock down even more. Even worse, enough people using this sort of mechanism can CAUSE computer-driven crashes and lose their shirts in the process.
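
A toy illustration of that failure mode with a made-up order book: the stop triggers at the programmed price, but the resulting market order walks down whatever thin bids remain.

```python
# Toy order-book walk: a stop at $95 triggers a market sell during a flash
# crash, but the remaining bids are thin, so the average fill lands far below
# the programmed-in price. All numbers are made up.
def market_sell(book, qty):
    """book: list of (price, size) bids, best first. Returns average fill price."""
    filled, proceeds = 0, 0.0
    for price, size in book:
        take = min(size, qty - filled)
        filled += take
        proceeds += take * price
        if filled == qty:
            break
    return proceeds / filled if filled else 0.0

thin_book = [(94.0, 10), (80.0, 20), (60.0, 50), (40.0, 100)]  # bids left in the crash
print(f"stop set at 95, average fill: {market_sell(thin_book, 100):.2f}")
```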

Unlike something like Bitcoin, stocks actually do have real intrinsic value (even if investors can't calculate it with any accuracy), which means you can't sell a stock into oblivion and expect it to stay there very long. In a flash crash the recovery sometimes takes seconds, a few minutes at most. People who use these sorts of orders can wind up with sold-out positions and a stock price that has recovered before they can react, and then they are screwed.

-Matt

Comment Re:Why do they call it a currency? (Score 1) 371

You've hit upon the basic mechanism but you haven't taken the next logical step. The issue is what happens when the pricing goes exponential (as it has, twice now).

What happens is that people who 'break even' almost universally go back into the market with more than just the house's money. Because the value is appreciating non-linearly, going back into the market almost always means spending far more money buying back in than you ever got out before that point.

For example, people who cashed out when it was hitting $250 a few months ago are now kicking themselves. In order to buy back in, they have to spend every last drop of their previous profits plus more to obtain a 'reasonable' speculative position at the new price.

This also points to how people's view of money shifts as the market rises, particularly people who have never had a lot of money or have never handled large sums or volumes of money (via their jobs or otherwise)... basically the definition of a 'sucker'. As long as they think they are making money, it becomes like a game of Monopoly. In the first round, someone who made a few hundred dollars thought they did really well. In the second round, those few hundred feel like chump change... it takes a few thousand to really feel that you did well.

Well, it doesn't stop there. It keeps scaling up. In the third round a person might feel that a few tens of thousands is necessary to do well and that a measly few thousand is chump change. It keeps scaling up until the person either runs out of cash or runs out of borrowing capacity... the mindset continues to scale all the way until the whole thing comes crashing down.

That's why you wind up with people bankrupted instead of people being happy that they had fun playing with the house's money after the first round or two.

There are certainly a few smart people in the Bitcoin market who know how to extract actual profit from the trading. It mostly comes down to understanding the scaling effect and NOT being tricked by it (not scaling up the investment in each successive round by more than 1/2 the profit from the previous round). There are numerous well-known methodologies and I'm certain a small but significant percentage of Bitcoin traders are doing precisely that.
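
A toy simulation of that discipline, with made-up prices, rounds, and an arbitrary initial stake: each round the trader banks half the profit as withdrawn cash and never grows the stake by more than half of the previous round's gain.

```python
# Toy illustration of the "never scale the stake up by more than half of last
# round's profit" rule. Prices and the initial stake are made up.
def simulate(prices, initial_stake=100.0):
    stake, banked = initial_stake, 0.0
    for buy, sell in prices:                 # each round: buy in, later sell out
        proceeds = stake * (sell / buy)
        profit = proceeds - stake
        if profit > 0:
            banked += profit / 2             # pull half the profit off the table
            stake += profit / 2              # grow the stake by at most half
        else:
            stake = proceeds                 # eat the loss, don't add new money
    return stake, banked

# Two exponential run-ups followed by a crash: the banked cash survives it.
rounds = [(10, 30), (30, 250), (250, 1000), (1000, 150)]
stake, banked = simulate(rounds)
print(f"stake left on the exchange: {stake:.0f}, hard cash withdrawn: {banked:.0f}")
```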

People doing the above scheme are the smart ones. They are draining hard money (real dollars) from the exchange against everyone else who is continuing to lever up. In the end, real money has been removed from the exchange while the remaining participants are basically just playing musical chairs with each other. When the eventual panic occurs there is simply not enough cash left with the exchange participants to cover everyone and the value crashes into oblivion.

This is how it will play out. It is ALWAYS how it plays out. Bitcoin is not going to be an exception.

-Matt

Comment Re:The value of bitcoin (Score 1) 371

VISA is more like 1% (or less) for anyone doing real volume. Small businesses have plenty of choices for credit card transactions... even Apple is getting into the game. And besides, if you want to compare Bitcoin transaction costs against something you should be comparing against something like PayPal, not VISA.

For bitcoin on either Mt. Gox or Coinbase, the bid/ask spread *alone* for a small transaction is ~0.5% or so, and it exceeds 1% for the kinds of volumes a small business would see (and that is only just recently; a few days ago the spread was easily over 2%). That's just the bid/ask spread, never mind any other fees.
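
A quick sketch of why the 'free' transaction isn't free, with bid/ask and fee numbers in the ballpark mentioned above (treat them as illustrative, not live quotes):

```python
# Illustrative round-trip cost of moving value through BTC: dollars -> BTC -> dollars.
# Quotes and fees are made-up numbers in the ballpark discussed above.
bid, ask = 995.0, 1005.0        # exchange quotes in USD per BTC
exchange_fee = 0.005            # assumed 0.5% taker fee per side

amount_usd = 1000.0
btc_bought = amount_usd * (1 - exchange_fee) / ask       # buy at the ask, pay the fee
usd_back = btc_bought * bid * (1 - exchange_fee)         # sell at the bid, pay again

cost_pct = (amount_usd - usd_back) / amount_usd * 100
print(f"round-trip cost: {cost_pct:.2f}%")   # ~2% here: spread plus fees, before volatility
```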

Anyone who thinks they are doing bitcoin transactions for 'free' is smoking something.

What we have here are a whole lot of people so fixated on the hype they don't realize just how much they are getting milked.

-Matt

Comment Re:Why do they call it a currency? (Score 1) 371

Maybe several million dollars when purchased new, but pretty much all computing equipment depreciates to near worthlessness over a few years, not to mention the cost of power. So if you're trying to argue that bitcoin has something physical backing it, the argument kinda falls on its face.

Nobody in their right mind can argue that Bitcoin is a usable currency. Currencies have to be relatively stable in the short-term. Bitcoin obviously isn't even remotely stable. Not only is it not stable, but it has no mechanisms at all to make it stable.

It's the classic (and obvious) hoarding / tulip-mania mechanic, and I think most investors know this, but greed still rules until the day the whole thing comes crashing down around people's ears.

-Matt

Comment Re:SMP contention basically gone from critical pat (Score 1) 48

I've read the space map work, but there are several issues that make it impractical for H2. The main one is that H2 snapshots are read-write (ZFS snapshots are read-only). Secondly, my experience from H1 is that any complicated ref-counting or space accounting can lead to hidden corruption. Even if we assume the media data-validation mechanism (a CRC or cryptographic hash) detects all media issues, there is still the problem of software complexity introducing bugs at a higher level.

So H2 is going to use a relatively simple freemap mechanism. In fact, it uses the same dynamic radix tree for the freemap as it does for the file and directory topology. It will still roll up tiny allocations into a somewhat chunkier granularity to reduce freemap overhead (similar to what H1 did, but at a finer 1KB grain vs H1's 2MB granularity), and it will roll up space-availability hints and propagate them up the tree. It is also type-aware, so it can do things like collect inode allocations together for burstability. It isn't completely trivialized, but it does not attempt to implement ref-counting or any complex sub-topology to track which blocks are owned and by how many owners.
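
A toy sketch of the roll-up idea only, and emphatically not H2's on-media format: a two-level structure where leaf bitmaps track 1KB chunks and each leaf carries a rolled-up free-space hint so allocation can skip exhausted subtrees. The leaf size and counts are arbitrary.

```python
# Toy two-level freemap with rolled-up free-space hints. This illustrates the
# general idea described above (hints propagated up the tree); it is NOT the
# HAMMER2 on-media format.
CHUNK = 1024          # 1KB allocation granularity
LEAF_CHUNKS = 4096    # chunks tracked per leaf (4MB per leaf, arbitrary)

class Leaf:
    def __init__(self):
        self.free = [True] * LEAF_CHUNKS
        self.hint = LEAF_CHUNKS          # rolled-up count of free chunks

    def alloc(self):
        for i, is_free in enumerate(self.free):
            if is_free:
                self.free[i] = False
                self.hint -= 1
                return i
        return None

class FreeMap:
    def __init__(self, leaves=256):
        self.leaves = [Leaf() for _ in range(leaves)]

    def alloc_chunk(self):
        """Return a byte offset, consulting hints to skip full leaves."""
        for li, leaf in enumerate(self.leaves):
            if leaf.hint == 0:
                continue                  # the hint lets us skip exhausted subtrees
            idx = leaf.alloc()
            if idx is not None:
                return (li * LEAF_CHUNKS + idx) * CHUNK
        raise RuntimeError("filesystem full")

    def free_chunk(self, offset):
        """In H2 the freeing runs in the background; here it just flips the bit."""
        chunk = offset // CHUNK
        leaf = self.leaves[chunk // LEAF_CHUNKS]
        leaf.free[chunk % LEAF_CHUNKS] = True
        leaf.hint += 1
```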

The actual block freeing is going to be done in the background rather than in the 'rm' path (for a COW filesystem with snapshots which are actually used regularly, there's no point trying to free blocks in real time). This will allow H2 to implement several major features such as de-duplication, writable snapshots, and so forth with far less software complexity. So the complexity of the block allocation and any block duplication becomes trivialized while the complexity of the block freeing code increases. But it's an incrementally solvable problem. I can start with a brute-force memory-bounded scan and slowly add optimizations to recognize duplication in the topology, add topology hints to improve the efficiency of the block freeing scan, and so forth.

-Matt

Comment SMP contention basically gone from critical paths (Score 5, Informative) 48

This release removes almost all the remaining SMP contention from both critical paths and most common paths. The work continued past the release in the master branch (some additional pieces were too complex to test in time for the release). For all intents and purposes, the master branch no longer has any SMP contention for anything other than modifying filesystem operations (such as concurrent writes to the same file), and even those are mostly contention-free thanks to the buffer cache and namecache layers.

Generally speaking, what this means is that for smaller 8-core systems most of the contention disappeared one or two releases ago, but larger (e.g. 48-core) systems still had significant contention when many cores heavily shared resources. This release, plus the work now in the master branch, removes the remaining contention on the larger multi-core systems, greatly improving their scaling and efficiency.

A full bulk build on our 48-core Opteron box took several days a year ago. Today it takes only 14.5 hours to build the almost 21000 packages in the FreeBSD ports tree. These weren't minor improvements.

Where it matters most is with heavily shared resources: for example, a bulk build on a large multi-core system that is constantly fork/exec'ing dozens of the same programs concurrently (/bin/sh, make, cc1[plus], and so on, a common scenario for any bulk-building system), or heavily shared cached filesystem data (a very common scenario for web servers). Under these conditions there can be hundreds of thousands of path lookups per second and over a million VM faults per second. Even a single exclusive lock in these paths can destroy performance on systems with more than 8 cores. Both the simpler case, where a program such as /bin/sh or cc1 is concurrently fork/exec'd thousands to tens of thousands of times per second, and the more complex case, where sub-shells are used for parallelism (fork without exec), no longer contend at all.

Other paths also had to be cleaned up. Process forking requires significant pid-handling interactions to allocate PIDs concurrently, and execs pretty much require that locks be fine-grained all the way down to the page level (and then shared at the page level) to handle concurrent VM faults. The process table, buffer cache, and several other major subsystems were rewritten to convert global tables into effectively per-cpu tables. One lock would get 'fixed' and reveal three others that still needed work. Eventually everything was fixed.
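
The kernel work itself is in C, but the general "global table to per-CPU table" transformation can be sketched in a few lines of Python: shard the structure and its lock so most operations touch only their own shard. This is just the pattern, not DragonFly code; the shard-selection by thread identity stands in for "current CPU".

```python
import os
import threading

class ShardedCounter:
    """Illustration of the per-CPU-table idea: instead of one global counter
    behind one lock, keep one counter (and lock) per shard so concurrent
    updaters rarely contend. Not kernel code, just the general pattern."""

    def __init__(self, nshards=None):
        n = nshards or os.cpu_count() or 8
        self.locks = [threading.Lock() for _ in range(n)]
        self.values = [0] * n

    def incr(self, amount=1):
        # Pick a shard by thread identity, the userland analogue of "current CPU".
        shard = threading.get_ident() % len(self.values)
        with self.locks[shard]:          # contention only within one shard
            self.values[shard] += amount

    def total(self):
        # Reads aggregate across shards; slower, but reads are rare here.
        return sum(self.values)
```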

Similarly, network paths have been optimized to the point where a server configuration can process hundreds of thousands of TCP connections per second and we can get full utilization out of 10GigE NICs.

And filesystem paths have been greatly optimized as well, though for modifying filesystem calls we'll have to wait for HAMMER2 to be finished before we reap the real rewards.

There are still a few network paths, primarily related to packet filtering (PF), that are serialized and need to be rewritten, but those and the next-gen filesystem are the only big-ticket items left in the entire system insofar as SMP goes.

Well, the last problem at least until we tackle the next big issue. There's still cache-coherency bus traffic, which occurs even when, e.g., a shared lock is uncontended. The code base is now at the point where we could probably drop in the new Intel transactional instructions and prefixes and get real gains (again, only applicable to multi-chip/multi-core configurations, not simple 8-thread systems). It should be possible to push fork/exec and VM-fault performance on shared resources from their current very high levels right on through the stratosphere and into deep space. Maybe I'll make a GSOC project out of it next year.

The filesystem work on HAMMER2 (the filesystem successor to HAMMER1) continues to progress but it wasn't ready for even an early alpha release this release. The core media formats are stable but the freemap and the higher level abstraction layers still have a lot of work ahead of them.

In terms of performance... well, someone will have to re-run benchmarks instead of just re-posting old stuff from 5 years ago. Considering the SMP work, I'd expect DFly to top out on most tests (though there's always the issue of benchmark testers blindly running things without actually understanding the results they post). Database performance with postgresql still requires some work for large system configurations due to pmap replication (postgres fork()s and now uses mmap() instead of sysv-shm, e.g. if you configure a large 100GB+ shared memory cache for the test). We have a sysctl to enable page-table sharing across discrete fork()s, but it isn't stable yet... with it, though, we get postgres performance on par with the best linux results in large system configurations. So there are still a few degenerate cases here and there that aren't so much related to SMP as to resource memory use. But not much is left even there.

Honestly, Slashdot isn't the right place to post BSD stuff anymore. It's too full of immature posts and uninformed nonsense.

-Matt

Comment Pretty obvious evolution (Score 1) 110

With Intel's 14nm so close, and 10nm production in another year or so, they need to use all that chip area for something that doesn't necessarily generate a ton of heat. RAM is the perfect thing. Not only is the power consumption relatively disconnected from the size and density of the cache, but not having to go off-chip for the majority of memory operations means the external dynamic RAM can probably sit in power-savings mode for most of its life, reducing the overall power consumption of the device.

-Matt

Comment Re:Don't really see the market (Score 1) 240

Happens to me all the time on ALL my Android devices. Totally non-deterministic. Sometimes my Nexus 7 (2012) can sit with the display turned off for several days without losing too much battery; other times I turn the display off and it discharges overnight and is dead in the morning. Happens to both my Android phones too. And no, I'm not forgetting to turn off Pandora.

Sometimes my phone is 'hot' just sitting in my pocket with the screen off. Sometimes it is 'cold'. When it's hot it's obviously eating power doing something, despite the display being turned off. It's totally random when this happens... usually the longer I leave the phone between reboots the more likely it is to get into that state. VERY annoying. NO services are running (nothing in the service bar at the top). Doesn't matter. Still random hot or cold.

Doesn't happen to any of my Apple devices. If I leave an Apple device turned off (sleep mode that is), it retains its battery level and drains at a very slow and highly deterministic rate. If I turn off my iPhone (and I'm not playing music on it), it goes to sleep. Always.

Honestly I don't understand why Google thinks this is ok. It's an obvious competitive advantage for Apple when its devices don't kill their battery and Android devices do.

-Matt
