Comment Re:Reliability (Score 2) 438

Depends on the application. For a workstation or build box, we configure swap on the SSD.

The point is not that the build box needs to swap, not with 32G or more of ram, but that having swap in the mix allows you to make full use of your cpu resources. You can scale the build up to the point where the 'peaks' of the build eat just a tad more ram than you actually have (and thus page), which is fine because the rest of the build is then able to better utilize the ram and cpu that is there. So putting swap on a SSD actually works out quite nicely on a build box.
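To make that concrete with invented numbers: on a 32G box you might scale the build (make -j and so on) so the peak footprint hits roughly 35-36G. The few gigabytes of overshoot page out to the SSD during the peaks, while the rest of the build gets to use all 32G instead of leaving cpus idle to stay safely under the limit.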

Similarly, a workstation simply does not page enough that one has to worry about paging wearing out the SSD. You put swap on your SSD for another reason entirely: to allow the machine to hold onto huge amounts of data in virtual memory from open applications, and to allow it to page out idle memory to make more memory available for active operations, without you as the user of the workstation noticing when it actually pages something in or out.

A good example of this is mass photo-editing across hundreds of gigabytes of data. If the bulk storage is not a SSD, or if it is accessed over a network, that can cause problems. But if the program caches pictures ahead and behind and 'sees' a large amount of memory available, having swap on the SSD can improve performance and latency massively.

And, of course, being able to cache HDD or networked data on your SSD is just as important, so it also depends on how the cache mechanism works in the OS.

So generally speaking, there are actually not very many situations where you WOULDN'T want to put your swap on the SSD. On machines with large ram configurations, the name of the game is to make the most of the resources you have and not so much to overload the machine to the point where it is paging heavily 24x7. On machines with less ram, the name of the game is to reduce latency for the workload, which means allowing the OS to page so available ram can self-tune to the workload.
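For reference, putting swap on the SSD is just an ordinary fstab entry. A minimal sketch, assuming a BSD-style layout where the SSD shows up as ada0 and the swap partition is ada0s1b (the device names here are hypothetical, adjust for your own disks):

    # /etc/fstab: swap lives on the SSD instead of a HDD
    # Device        Mountpoint  FStype  Options  Dump  Pass#
    /dev/ada0s1b    none        swap    sw       0     0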

-Matt

Comment Inevitable (Score 1) 438

Happening a little sooner than I thought, but the trend has clearly been going in this direction for a long time now. Just one year ago I stopped buying 3.5" HDDs in favor of a combination of (short-stroked) 2.5" drives and SSDs. I already use only SSDs in all the workstations and laptops; the HDDs are only used by the servers now.

Now it is looking like I will probably not buy any more HDDs at all, ever again, even for the servers. That is going to do wonders for hardware life and maintenance costs.

It's a bit strange having a pile of brand-new perfectly working 1TB and 2TB 3.5" HDDs still in their static bags, unopened, in my spare drawer that I will likely never use again.

I wonder how long it will take case makers to start giving us 2.5"-only hot-swap options without all the 3.5" crap taking up room. Of course, some exist already... I mean how long before it becomes the predominant case style.

-Matt

Comment Difficult problem to solve (Score 1) 224

Part of the problem is that it is very difficult to tell a player using hacks from a player who is simply good at the game. I remember, a long time ago (10+ years), my brother was a counter-strike player who specialized in head shots. He was very good at it, but there were numerous occasions where he got kicked off a server because other players thought he was cheating. He wasn't. I was standing right there behind him while he played.

I think the only real solution is to video yourself playing the game so other players can see (after the match) that you were not using any cheats or hacks. Either that or play at an official location with monitors and public hardware.

-Matt

Comment Re:Nice... (Score 1) 147

Motorola did produce an HCMOS version of the 68000 and related cpus, e.g. the 68HC000 and friends, which I used extensively. HCMOS was billed as a 'fast' version of CMOS, trading off some current draw for speed. As people might remember, HCMOS pulls and pushes about the same (though the ground paths are rated higher): about 50 ohms to either rail, more capacitive than resistive, so current draw ran more in line with frequency. You could use 1M pull-down resistors on the tri-state busses, and the logic was pretty bulletproof in terms of noise and reflections.
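To put a rough number on that frequency scaling: the dynamic current of a CMOS output is approximately I = C * V * f. With illustrative values (ballpark numbers of mine, not datasheet figures):

    I = C * V * f = 50 pF * 5 V * 10 MHz = 2.5 mA per switching output

so the draw tracks the clock almost linearly, which is exactly the behavior described above.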

I really loved HCMOS, and hated TTL, but eventually advanced TTL beat it out (at least for the pin interface logic).

-Matt

Comment Re:The original 68000 interrupts were inadequate (Score 2) 147

Interrupts worked fine. It was bus errors (i.e. for off-chip memory protection and/or mapping units) that were a problem. The 68010 fixed that particular issue if I recall. I'm guessing later 68008s also did, but I dunno. Doesn't matter, since he isn't running with any memory protection.

You could in fact run a real multi-tasking OS on the 68000. I was running one of my own design for my telemetry projects. It didn't have memory mapping but it did have memory protection via an external static ram, 8:1 selector, and some logic. It managed around 20-30 processes.

And, strangely enough, you could also run a RTOS because the 68000 had wonderful prioritized interrupts. Back then, of course, real time response was required for handling serial ports and things like that.

-Matt

Comment Re:Hey, congratulations (Score 1) 147

Er, I meant 8:1 selector (the R/~W bit was fed into one of the select inputs). The function code logic was used to selectively enable/disable the memory protection unit, so supervisor accesses bypassed it while user accesses did not. Which is good because it wouldn't have been able to boot otherwise.

Another use for the FC logic was to speed up the auto-vector code. The 68K had wonderful asynchronous interrupt logic. You basically had 8 priority levels: you could feed your I/O chips into a simple 8:3 priority encoder and feed the result into the interrupt priority level pins on the cpu. The 68K would then do an interrupt vector acquisition bus cycle to get the vector (or you could tell it to generate an internal vector). Every once in a while the async logic would screw up and we'd get an uninitialized interrupt vector, but the code to deal with that was trivial, and since it was all level logic the hardware would sort it out soon enough and present the correct IPL to the cpu.
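A tiny C model of that arrangement, purely as illustration (the 'pending' bitmask and the function name are my own invention, not anything from a real driver):

    #include <stdint.h>

    /* Model of an 8:3 priority encoder feeding the 68000's IPL0..IPL2
     * pins.  Bit N of 'pending' is an interrupt request at level N.
     * Level 0 means no interrupt; level 7 is the highest priority. */
    static unsigned ipl_from_pending(uint8_t pending)
    {
        for (unsigned level = 7; level >= 1; --level)
            if (pending & (1u << level))
                return level;   /* highest pending request wins */
        return 0;               /* nothing asserted */
    }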

The autovectors are slow, though... it was far better to generate vectors from a ram (if I remember right). The FC logic could be used to force the access to the ram or the eprom (the address lines, I think, were all 1's except for three bits defining the priority level being fetched, or something like that).

In contrast, even to this day Intel STILL can't get their interrupt logic to work properly. Even the MSI-X logic is broken in a lot of chipsets. Yuch. So much ridiculous and unnecessary complexity with Intel interrupt handling with all the idiotic IOAPIC and LAPIC sub-processors with non-deterministic reaction times and serialization problems and other stupid stuff. The motorola interrupt logic was a dream in comparison.

-Matt

Comment Hey, congratulations (Score 2) 147

Congratulations, you are now in a rare group indeed. But I gotta say, you haven't lived until you've programmed a 6502 directly in machine code. No assembler :-)
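For the uninitiated, 'machine code, no assembler' means hand-assembling the opcodes yourself and keying in raw bytes. Just for flavor, here is a trivial 6502 fragment as a C byte array (the target address $0200 is arbitrary):

    /* Hand-assembled 6502: LDA #$01 / STA $0200 / RTS */
    static const unsigned char prog[] = {
        0xA9, 0x01,         /* LDA #$01   load accumulator with 1         */
        0x8D, 0x00, 0x02,   /* STA $0200  store A (operand little-endian) */
        0x60,               /* RTS        return from subroutine          */
    };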

One of my telemetry systems, which I built and designed 25+ years ago, used the 68000 running at around 10 MHz (with a jumper for 20 MHz, which it could actually do, though I didn't deploy it at those speeds). The coolest thing about it was that I had built a rudimentary memory protection unit using a static ram and an 8:2 selector. Any user mode access pumped the high address bits into the ram, with 2 address bits going to the selector along with the R/~W bit. The result was gated into the bus error logic. The top three bits of the static ram were directly controlled by the kernel, which allowed the kernel to 'cache' up to 8 processes' worth of protection data in the ram at any given moment, so context switches were still very fast.
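A rough C model of that protection unit, to make the dataflow concrete. The table sizes and address-bit choices below are guesses of mine; the real wiring is as described above, using the 8:1 selector from the correction earlier on this page:

    #include <stdbool.h>
    #include <stdint.h>

    /* Kernel-controlled context: the top 3 SRAM address bits, caching
     * protection data for up to 8 processes at once. */
    static unsigned cur_context;            /* 0..7, set on context switch */
    static uint8_t  prot_sram[8][256];      /* 8 contexts x 256 entries (widths are guesses) */

    /* User-mode bus cycle: high address bits index the SRAM; two more
     * address bits plus R/~W select one of the 8 data bits (the 8:1
     * selector).  A 0 result is gated into the bus-error logic. */
    static bool user_access_ok(uint32_t addr, bool write)
    {
        uint8_t  entry = prot_sram[cur_context][(addr >> 16) & 0xFF]; /* bit choice is a guess */
        unsigned sel   = (((addr >> 14) & 3) << 1) | (write ? 1 : 0); /* 2 addr bits + R/~W */
        return (entry >> sel) & 1;          /* 0 => assert bus error */
    }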

Before the 68030 and 68040 came out, Sun (I think) was running two 68000s in lockstep, one running one cycle behind the other, in order to implement their own MMU. When a fault occurred, they bus-errored the lead chip and paused the trailing chip so they could take the bus fault, resolve the mapping issue, and then resume the trailing chip. Then the 68010 came along and fixed the bus error stacking bugs in the 68000, and the 68020 came along after that.

The 68030 could hold short loops in its tiny on-chip instruction cache. Unfortunately, the 68040's on-chip cache implementation was horrible and created all sorts of problems for implementers, and by then Intel chips were running much, much faster.

When Motorola retired the 68K series, some of their larger embedded users asked Motorola to re-test the 68000 chip specs at a higher clock, since by then the HCMOS process could obviously run the chip much faster than the ~10-12 MHz it was spec'd at. Motorola tested the HCMOS version of the chip to around 50-70 MHz or so. Such a nice 32-bit chip; I was really sorry to see Moto lose to Intel (mostly because Moto gave up).

-Matt

Comment Re:If only history (Score 1) 327

You obviously have not had to deal with customer/user complaints about filesystem corruption. If you had, you'd rapidly come to the conclusion that the manpower required to service all of those complaints, let alone track down the cause (which is virtually impossible if TRIM is enabled), not to mention the bad rep you get whether it is your fault or not, is simply not worth it.

-Matt

Comment Re:Easier solution (Score 1) 327

Complete nonsense. You seem to think that TRIM is some sort of magical command that makes SSDs work better. Spend a little more time on understanding the reality and your opinion will change.

Overprovisioning works extremely well. So well that there is really no reason to use TRIM, and plenty of reasons to avoid using it due to the many reasons stated by myself and a few others here.

Overprovisioning and TRIM do not have a linear relationship. You don't get double the value by using both, or even double the value by, say, doubling the amount you overprovision, or doubling the amount of free space the SSD thinks it has due to using TRIM. Beyond a certain point, overprovisioned and TRIMmed space will have only a minimal effect on actual wear because, as I've said already, a good SSD will do both static AND dynamic wear leveling. Dynamic wear leveling alone using TRIMmed or overprovisioned space, no matter how much space is available, only delays the inevitable need to relocate static blocks.

You can reduce the copying the SSD has to do, but beyond a certain point the amount of copying that remains will be so far under the radar compared to nominal filesystem operation (normal writes to files, etc), that it just won't have any more of an impact.

The non-deterministic nature of TRIM (which depends heavily on how full and how fragmented your filesystem is), not to mention firmware bugs, filesystem bugs, the impossibility of diagnosing and tracking down corruption when it occurs, and problems with feeding TRIMs through block managers which use large-block CRCs, winds up creating huge complexities that increase your chance of hitting a bug somewhere that blows you up... it's just not a good trade-off. I will take the *highly* deterministic and generally bullet-proof overprovisioning method over TRIM any day of the week.

The only time I advocate using TRIM is when someone wants to wipe and repartition a SSD from scratch. That's it. I don't even advocate it for cleaning up the swap partition on reboot because then you can't recover crash dumps (and you might already be paging after that point so...). Though I suppose one could add some logic to TRIM the swap partition if no crash dump is present. That's about as far as I would go, though.

-Matt

Comment Re:Why? (Score 1, Insightful) 327

I can't agree with your reasoning. The many computer companies that have lived and died over the years have primarily died because they were producing hardware that could not keep pace with developments in the industry.

Commodore... I was a developer for the Amiga (and a machine language programmer in my PET days). Commodore died because AmigaOS was 100% dependent on Motorola, and Motorola couldn't keep up with Intel, period.

NeXT died for the same reason. NeXT couldn't keep up with Intel, and by the time Jobs caved in and went with his dual-architecture 68K/Intel binary format, it was too late. Also, depending on Display PostScript for EVERYTHING was a huge mistake for NeXT, and having 15+ year old OS tools on a weird Mach/BSD core that they never really updated messed them up too. I was a developer for the NeXT as well.

In modern times, everyone runs on similarly powerful hardware and generally can stay up-to-date on the hardware front. OS makers die from a lack of apps or a lack of ease-of-use. Apple certainly does not suffer from either.

Linux and the BSDs are entirely dependent on a relatively common library of ~20,000 to 30,000 or so (substantial) open source apps in order to stay relevant, but all suffer from the lack of a cohesive GUI that is powerful and easy to use. KDE, Gnome, the many other little window managers available... none hold a candle to either Apple or Windows, unfortunately, at least as a consumer machine.

I have no problem running linux or a BSD as my workstation, as long as I am only doing programming or browsing. But if I want to play a *real* game or run *real* photo or video software (not something stupid like gimp which is virtually unusable)... then I have to shift my chair over to my Windows box or my refurbished Mac laptop. For that matter, if I want brainless printing which just works, I have to run it through my Windows box because CUPS is an over-engineered piece of crap that only works well on Macs... certainly not on linux or any of the BSDs.

-Matt

Comment Re:Easier solution (Score 1) 327

That is definitely incorrect. TRIM issuance is a filesystem-level or disk-partitioning-level operation, not an OS-level operation. Due to ordering constraints, the OS cannot safely manage TRIM in the manner you suggest. A filesystem can, but honestly I don't know of any filesystems which use TRIM that way. Smart SSD firmware could also delay TRIM in that manner, but I don't know of any that actually do. The filesystem will either issue the TRIM semi-synchronously or issue it as part of a batch cleanup, if it issues it at all. There is a great deal of complexity involved, because filesystems will often gang small-block operations into larger blocks, and you can't use TRIM on small blocks if you do that and still hope to have your CRC checks work deterministically on the larger block.

Also, remember that for SATA/AHCI (not SAS), the NCQ stuff is a pretty bad hack based on the command id, and the original TRIM could not be issued without waiting for all other I/O to complete, then issuing it synchronously, then starting the I/O back up. For small erase sizes, writing zeros would be much, much faster if only the SSD spec specifically stated that writing zeros would have a TRIM effect. But it doesn't. Also, writing zeros is deterministic (which is good for finding bugs). TRIM is non-deterministic, which makes finding bugs almost impossible.

So, for many reasons, TRIM is about the worst implementation it is possible to have for a SSD. Its only real use is to completely wipe (repartition / blow away) an SSD's entire storage. Other use cases just don't work as well as you might think and would be better served by actually pushing zeros and specing the SSD to detect the zeros and TRIM the block, and if not, to ensure that the sector still reads back as all zeros so as not to blow up large-block CRCs and other sanity checks.

-Matt

Comment Easier solution (Score 4, Interesting) 327

It isn't really true that SSD performance goes down by a whole lot if TRIM is not enabled. SSD performance and firmware have undergone radical improvements every year, and people have come to the mistaken belief that enabling TRIM is responsible for most of the performance and wear-leveling improvements.

TRIM has numerous problems, not the least of which is drives and/or filesystems which do not implement it properly. Because its use and effects can be seriously non-deterministic (even in a proper implementation), any bug in the drive firmware OR the filesystem's use of TRIM can create serious corruption issues down the line when the drive actually decides to blow away some of the trimmed sectors. The TRIM command was badly conceived from the get-go.

The easiest and safest way to get 95% of the benefit of TRIM without actually using TRIM is to simply partition a factory-fresh drive to leave a bit of unused space at the end... say another 5-10%. As long as it is never written to, the drive will use that space as part of its dynamic wear leveling mechanism. As long as the drive also does static wear leveling (which nearly all drives do these days), you wind up with nearly all the benefit of TRIM without having to actually use it. TRIM was more important in the days when static wear leveling was not well implemented (or implemented at all). It is less useful these days.
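To put numbers on it (made-up but representative): on a 256GB drive, leaving 8% unpartitioned works out to

    256 GB * 0.08 = ~20 GB of flash that is never written

so the controller always has roughly 20GB of known-free erase blocks to rotate through for dynamic wear leveling, on top of whatever the manufacturer already reserves internally, without a single TRIM ever being issued.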

-Matt

Comment Re:Why should I care? (Score 1) 122

My physical credit card (normal mag stripe) is compromised at least once a year, sometimes more often. I might not be liable for the fraud, but it is VERY inconvenient when it happens: the card gets locked out, and it also causes my bank to start verifying more of my transactions, which is just as inconvenient if I don't answer the text message quickly enough and the card gets blocked for no reason.

It got so bad that around 2 years ago I got a second credit card so I could file it with trusted sites like Amazon and with charities that I donate to regularly and not have to give them all new card numbers every time my over-the-counter card got compromised. That's how bad it has been.

Chip-and-PIN is less convenient than ApplePay. Tap-and-pay cards are nominally as convenient as ApplePay but still have physical security issues. Mag stripe is clearly going to die soon... the data breaches are occurring so often now that not fixing it is no longer an option for merchants. They will have to go to NFC whether they like it or not.

I use ApplePay wherever I can now.

-Matt
