Comment Re:Nice... (Score 1) 147

Motorola did produce an HCMOS version of the 68000 and related CPUs, e.g. the 68HC000 and friends, which I used extensively. HCMOS was billed as a 'fast' version of CMOS, trading off some current draw for speed. As people might remember, HCMOS pulls and pushes about the same (though the ground paths are rated higher): roughly 50 ohms to either rail, more capacitive than resistive, so current draw ran more in line with frequency. You could use 1M pull-down resistors on the tri-state busses, and the logic was pretty bulletproof in terms of noise and reflections.

I really loved HCMOS and hated TTL, but eventually advanced TTL beat it out (at least for the pin-interface logic).

-Matt

Comment Re:The original 68000 interrupts were inadequate (Score 2) 147

Interrupts worked fine. It was bus errors (i.e. for off-chip memory protection and/or mapping units) that were a problem. The 68010 fixed that particular issue, if I recall. I'm guessing later 68008s did too, but I don't know for sure. It doesn't matter here, since he isn't running with any memory protection.

You could in fact run a real multi-tasking OS on the 68000. I was running one of my own design for my telemetry projects. It didn't have memory mapping, but it did have memory protection via an external static RAM, an 8:1 selector, and some logic. It managed around 20-30 processes.

And, strangely enough, you could also run an RTOS, because the 68000 had wonderful prioritized interrupts. Back then, of course, real-time response was required for handling serial ports and things like that.

-Matt

Comment Re:Hey, congratulations (Score 1) 147

Er, I meant 8:1 selector (the R/~W bit was fed into one of the select inputs). The function code logic was used to selectively enable/disable the memory protection unit, so supervisor accesses bypassed it while user accesses did not. Which is good because it wouldn't have been able to boot otherwise.

Another use for the FC logic was to speed up the auto-vector code. The 68K had wonderful asynchronous interrupt logic. You basically had 8 priority levels; you could feed your I/O chips into a simple 8:3 priority encoder and feed the result into the interrupt priority level pins on the CPU. The 68K would then do an interrupt-vector acquisition bus cycle to get the vector (or you could tell it to generate an internal vector). Every once in a while the async logic would screw up and we'd get an uninitialized interrupt vector, but the code to deal with that was trivial, and since it was all level logic the hardware would sort it out soon enough and calculate the correct IPL to request from the bus.
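
For illustration, the 'trivial' code amounts to something like this minimal C sketch (names invented; not my original firmware). On the 68000, a peripheral that loses the race during the vector-acquisition cycle returns the uninitialized-vector number (vector 15), so you just park a do-nothing handler there; because the IPL inputs are level-sensitive, the real interrupt gets re-requested once things settle:

    /* Hypothetical sketch only -- not the original telemetry firmware.
     * Vector 15 is the 68000's uninitialized interrupt vector; a
     * peripheral that loses the race during the vector-acquisition
     * cycle returns it.  The handler just records the event; since the
     * IPL inputs are level-sensitive, the real interrupt is re-requested
     * as soon as the glitch settles. */
    volatile unsigned long uninit_vec_count;    /* bookkeeping only */

    void uninitialized_vector_handler(void)     /* installed at vector 15 */
    {
        ++uninit_vec_count;
        /* nothing else to do: return (RTE) and let the hardware retry */
    }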

The autovectors are slow, though... it was far better to generate vectors from a RAM (if I remember right). The FC logic could be used to force the access to the RAM or the EPROM (since the address lines, I think, were all 1's except for three bits defining the priority level being fetched, or something like that).

In contrast, even to this day Intel STILL can't get their interrupt logic to work properly. Even the MSI-X logic is broken in a lot of chipsets. Yuck. So much ridiculous and unnecessary complexity in Intel interrupt handling, with all the idiotic IOAPIC and LAPIC sub-processors with non-deterministic reaction times, serialization problems, and other stupid stuff. The Motorola interrupt logic was a dream in comparison.

-Matt

Comment Hey, congratulations (Score 2) 147

Congratulations, you are now in a rare group indeed. But I gotta say, you haven't lived until you've programmed a 6502 directly in machine code. No assembler :-)

One of my telemetry systems, which I designed and built 25+ years ago, used the 68000 running at around 10 MHz (with a jumper for 20 MHz, which it could actually do, though I didn't deploy it at that speed). The coolest thing about it was that I had built a rudimentary memory protection unit using a static RAM and an 8:2 selector. Any user-mode access pumped the high address bits into the RAM, with 2 address bits going to the selector along with the R/~W bit, and the result was gated into the bus error logic. The top three address bits of the static RAM were directly controlled by the kernel, which allowed the kernel to 'cache' up to 8 processes' worth of protection data in the RAM at any given moment, so context switches were still very fast.
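
For a rough idea of what the kernel side of that looks like, here's a hedged C sketch. The names (MPU_SRAM, MPU_SLOT_LATCH) and sizes are invented stand-ins for the memory-mapped external static RAM and the kernel-controlled latch driving its top three address bits; the point is that switching to an already-cached process is a single latch write, and only a cache miss has to copy permission bits out to the SRAM:

    /* Hypothetical reconstruction, not the original code.  MPU_SRAM and
     * MPU_SLOT_LATCH stand in for the external static RAM and the
     * kernel-controlled latch that drives its top three address bits. */
    #define MPU_SLOTS    8        /* 3 kernel-controlled address bits       */
    #define MPU_ENTRIES  1024     /* one permission entry per address range */

    extern volatile unsigned char MPU_SRAM[MPU_SLOTS][MPU_ENTRIES];
    extern volatile unsigned char *const MPU_SLOT_LATCH;

    struct proc {
        int           mpu_slot;            /* -1 if not resident in the SRAM */
        unsigned char perms[MPU_ENTRIES];  /* allow/deny bits for this proc  */
    };

    void mpu_switch_to(struct proc *p)
    {
        static struct proc *resident[MPU_SLOTS];
        static unsigned     next_victim;

        if (p->mpu_slot < 0) {
            /* Cache miss: evict a slot and load this process's bits. */
            int slot = (int)(next_victim++ % MPU_SLOTS);
            if (resident[slot] != 0)
                resident[slot]->mpu_slot = -1;
            resident[slot] = p;
            p->mpu_slot = slot;
            for (int i = 0; i < MPU_ENTRIES; ++i)
                MPU_SRAM[slot][i] = p->perms[i];
        }
        /* Cache hit (or freshly loaded): the switch is one latch write. */
        *MPU_SLOT_LATCH = (unsigned char)p->mpu_slot;
    }

That's why context switches stayed fast: the per-process protection state lives in the external RAM, and the kernel only has to touch it on a miss.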

Before the 68030 and 68040 came out, Sun (I think) was running two 68000s in lockstep, one a cycle behind the other, in order to implement their own MMU. When a fault occurred, they bus-errored the lead chip and paused the second 'behind' chip so they could take the bus fault, resolve the mapping issue, and then resume the behind chip. Then the 68010 came along and fixed the bus-error exception-stacking bugs in the 68000, and the 68020 came along after that.

The 68030 could hold short loops on-chip with some tricks, despite not really having much of a cache. Unfortunately, the 68040's on-chip cache implementation was horrible and created all sorts of problems for implementers, and by then Intel chips were running much, much faster.

When Motorola retired the 68K series, some of their larger embedded users asked Motorola to re-test the 68000 chip specs at a higher clock, since by then the HCMOS process could obviously run the chip much faster than the ~10-12 MHz it was originally specced at. Motorola tested the HCMOS version of the chip to around 50-70 MHz or so. Such a nice 32-bit chip; I was really sorry to see Moto lose to Intel (mostly because Moto gave up).

-Matt

Comment Re:If only history (Score 1) 327

You obviously have not had to deal with customer/user complaints about filesystem corruption. Because if you had, you'd rapidly come to the conclusion that the manpower required to service all of those complaints, let alone track down the cause (which is virtually impossible if TRIM is enabled), plus the bad rep you get whether it is your fault or not, is simply not worth it.

-Matt

Comment Re:Easier solution (Score 1) 327

Complete nonsense. You seem to think that TRIM is some sort of magical command that makes SSDs work better. Spend a little more time understanding the reality and your opinion will change.

Overprovisioning works extremely well. So well that there is really no reason to use TRIM, and plenty of reasons to avoid it, as stated by myself and a few others here.

Overprovisioning and TRIM do not combine linearly. You don't get double the value by using both, or even double the value by, say, doubling the amount you overprovision, or doubling the amount of free space the SSD thinks it has due to TRIM. Beyond a certain point, overprovisioned and TRIMmed space have only a minimal effect on actual wear because, as I've said already, a good SSD will do both static AND dynamic wear leveling. Dynamic wear leveling alone using TRIMmed or overprovisioned space, no matter how much space is available, only delays the inevitable need to relocate static blocks.

You can reduce the copying the SSD has to do, but beyond a certain point the copying that remains is so far under the radar compared to nominal filesystem operation (normal writes to files, etc.) that it just doesn't have any further impact.

The non-deterministic nature of TRIM (which depends heavily on how full and how fragmented your filesystem is), not to mention firmware bugs, filesystem bugs, the impossibility of diagnosing and tracking down corruption when it occurs, problems with feeding TRIMs through block managers that use large-block CRCs, and many other issues, winds up creating huge complexity that increases your chance of hitting a bug somewhere that blows you up... it's just not a good trade-off. I will take the *highly* deterministic and generally bulletproof overprovisioning method over TRIM any day of the week.

The only time I advocate using TRIM is when someone wants to wipe and repartition an SSD from scratch. That's it. I don't even advocate it for cleaning up the swap partition on reboot, because then you can't recover crash dumps (and you might already be paging again by that point, so...). Though I suppose one could add some logic to TRIM the swap partition only if no crash dump is present, as sketched below. That's about as far as I would go, though.
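
The swap-partition logic I have in mind is nothing fancier than this sketch (both helper functions are hypothetical stand-ins, not any real kernel API):

    /* Hedged sketch of the reboot-time policy described above; the two
     * helpers are hypothetical stand-ins, not any real kernel interface. */
    extern int  crash_dump_present(const char *swap_device);
    extern void trim_entire_partition(const char *swap_device);

    void maybe_trim_swap_on_boot(const char *swap_device)
    {
        if (!crash_dump_present(swap_device))   /* keep crash dumps recoverable */
            trim_entire_partition(swap_device);
    }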

-Matt

Comment Re:Why? (Score 1, Insightful) 327

I can't agree with your reasoning. The many computer companies that have lived and died over the years primarily died because they were producing hardware that could not keep pace with developments in the industry.

Commodore... I was a developer for the Amiga (and a machine-language programmer in my PET days). Commodore died because AmigaOS was 100% dependent on Motorola, and Motorola couldn't keep up with Intel, period.

NeXT died for the same reason: NeXT couldn't keep up with Intel, and by the time Jobs caved in and went with his dual-architecture 68K/Intel binary format, it was too late. Also, depending on Display PostScript for EVERYTHING was a huge mistake for NeXT, and having 15+ year old OS tools on a weird Mach/BSD core that they never really updated messed them up too. I was a developer for the NeXT as well.

In modern times, everyone runs on similarly powerful hardware and can generally stay up to date on the hardware front. OS makers die from a lack of apps or a lack of ease of use. Apple certainly does not suffer from either.

Linux and the BSDs are entirely dependent on a relatively common library of ~20,000 to 30,000 (substantial) open-source apps in order to stay relevant, but all suffer from the lack of a cohesive GUI that is both powerful and easy to use. KDE, GNOME, the many other little window managers available... none hold a candle to either Apple or Windows. Unfortunately. At least as a consumer machine.

I have no problem running Linux or a BSD as my workstation, as long as I am only doing programming or browsing. But if I want to play a *real* game or run *real* photo or video software (not something stupid like GIMP, which is virtually unusable)... then I have to shift my chair over to my Windows box or my refurbished Mac laptop. For that matter, if I want brainless printing that just works, I have to run it through my Windows box, because CUPS is an over-engineered piece of crap that only works well on Macs... certainly not on Linux or any of the BSDs.

-Matt

Comment Re:Easier solution (Score 1) 327

That is definitely incorrect. TRIM issuance is a filesystem-level or disk-partitioning-level operation, not an OS-level operation. Due to ordering constraints, the OS cannot safely manage TRIM in the manner you suggest. A filesystem can, but honestly I don't know of any filesystem which uses TRIM that way. Smart SSD firmware could also delay TRIM in that manner, but I don't know of any that actually does. The filesystem will either issue the TRIM semi-synchronously or issue it as part of a batch cleanup, if it issues it at all. There is a great deal of complexity involved, because filesystems will often gang small-block operations into larger blocks, and you can't TRIM the small blocks if you do that and still hope to have your CRC checks work deterministically on the larger block.
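
To make the CRC point concrete, here's a tiny generic C sketch (not any particular filesystem's code). If one stored checksum covers a 64KB gang of sixteen 4KB blocks and you TRIM just one of those 4KB pieces, the TRIMmed range can read back as anything, so the recomputed CRC over the gang no longer has to match the stored one even though nothing was actually corrupted:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustration only: one CRC stored per 64KB "gang" of 4KB sub-blocks. */
    enum { SUB_BLOCK = 4096, GANG = 16, GANG_BYTES = SUB_BLOCK * GANG };

    /* Bitwise CRC-32 (reflected, polynomial 0xEDB88320) over a buffer. */
    static uint32_t crc32_buf(const uint8_t *p, size_t n)
    {
        uint32_t crc = 0xFFFFFFFFu;
        while (n--) {
            crc ^= *p++;
            for (int k = 0; k < 8; ++k)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* The filesystem records crc32_buf(gang, GANG_BYTES) when it writes the
     * gang.  If a single SUB_BLOCK inside it is later TRIMmed, the data read
     * back from that range is unspecified, so this check can fail with no
     * real corruption.  Deterministic read-back (e.g. all zeros) would keep
     * the check usable. */
    static int gang_crc_ok(const uint8_t *gang, uint32_t stored_crc)
    {
        return crc32_buf(gang, GANG_BYTES) == stored_crc;
    }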

Also, remember that for SATA/AHCI (not SAS), NCQ is a pretty bad hack based on the command ID, and the original TRIM command could not be issued without waiting for all other I/O to complete, issuing it synchronously, and then starting the I/O back up. For small erase sizes, writing zeros would be much, much faster if only the SSD spec specifically stated that writing zeros has a TRIM effect. But it doesn't. Also, writing zeros is deterministic (which is good for finding bugs); TRIM is non-deterministic, which makes finding bugs almost impossible.

So, for many reasons, TRIM is about the worst implementation it is possible to have for an SSD. Its only real use is to completely wipe (repartition / blow away) an SSD's entire storage. Other use cases just don't work as well as you might think, and would be better served by actually pushing zeros and speccing the SSD to detect the zeros and TRIM the block, and otherwise to ensure that the sector still reads back as all zeros so as not to blow up large-block CRCs and other sanity checks.

-Matt

Comment Easier solution (Score 4, Interesting) 327

It isn't really true that SSD performance goes down by a whole lot if TRIM is not enabled. SSD performance and firmware have undergone radical improvements every year, and people have come to the mistaken belief that enabling TRIM is responsible for most of the performance and wear-leveling improvements.

TRIM has numerous problems, not the least of which is drives and/or filesystems that do not implement it properly. Because its use and effects can be seriously non-deterministic (even in a proper implementation), any bug in the drive firmware OR in the filesystem's use of TRIM can create serious corruption issues down the line, when the drive actually decides to blow away some of the trimmed sectors. The TRIM command was badly conceived from the get-go.

The easiest and safest way to get 95% of the benefit of TRIM without actually using it is to simply partition a factory-fresh drive to leave a bit of unused space at the end... say another 5-10%. As long as that space is never written to, the drive will use it as part of its dynamic wear-leveling pool. As long as the drive also does static wear leveling (which nearly all drives do these days), you wind up with nearly all the benefit of TRIM without having to actually use it. TRIM was more important in the days when static wear leveling was not well implemented (or implemented at all); it is less useful these days.
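
The arithmetic is trivial. As a back-of-the-envelope sketch in C (the sector count and reserve percentage are just example numbers), you take the drive's reported capacity, partition 90-95% of it, and never touch the rest:

    #include <stdio.h>

    int main(void)
    {
        /* Example figures only: ~256GB of 512-byte sectors, reserve ~7%. */
        unsigned long long total_sectors = 500118192ULL;
        double             reserve       = 0.07;

        unsigned long long usable =
            (unsigned long long)((double)total_sectors * (1.0 - reserve));

        printf("partition sectors 0..%llu, leave %llu sectors untouched\n",
               usable - 1, total_sectors - usable);
        return 0;
    }

The only caveat, as noted above, is that the drive needs to be factory fresh (or wiped) first, so the reserved area really is unwritten.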

-Matt

Comment Re:They ARE a utility. (Score 1) 706

Regulation can lead to higher prices, but that's generally only when the regulation restricts competition in some way. Like the airlines, or the telco industry back in the days of AT&T as The Official Regulated Phone Company Monopoly.

However, it's the telcos themselves today, in an environment of unprecedented freedom compared to telcos throughout most of the rest of the world, who are keeping prices high, largely by limiting competition on their own. Everyone's basically trying to be Apple -- particularly in wired telecom, they're optimizing for maximum profit per customer, not trying to net the most customers. Verizon isn't laying miles of new fiber anymore, trying to reach everyone. And most of these guys are making 40-50% profit margins. Meanwhile, US internet service is #10 in the world... didn't we frickin' invent the Internet?

Regulating certain aspects of the Internet can definitely improve it for every user and most connected companies. There's no need to make things better for Verizon or Comcast... they're doing just dandy. And realistically, an Internet connection is a utility -- this is obvious to everyone. If it weren't for all the money being spent to buy Congresscritters on behalf of the telecom industry, this wouldn't even be newsworthy. Of course it's a utility. Maybe leaving off the Title II classification was useful in the early days to make life easier on the ISPs. But twenty years ago, my ISP was a 5-person company run by an old buddy of mine. Now you're probably getting your service from one of the largest communications companies in the country, if not the world. Comcast owns Universal and NBC, for f's sake. Verizon made over $30 billion last year.

Comment Re:Why should I care? (Score 1) 122

My physical credit card (normal mag stripe) is compromised at least once a year, sometimes more often. I might not be liable for the fraud, but it is VERY inconvenient when it happens: the card gets locked out, and it also causes my bank to start verifying more of my transactions, which is just as inconvenient if I don't answer the text message quickly enough and the card gets blocked for no reason.

It got so bad that around 2 years ago I got a second credit card so I could keep it on file with trusted sites like Amazon and with charities I donate to regularly, and not have to give them all new card numbers every time my over-the-counter card got compromised. That's how bad it has been.

Chip-and-PIN is less convenient than ApplePay. Tap-and-pay cards are nominally as convenient as ApplePay but still have physical security issues. Mag stripe is clearly going to die soon... the data breaches are occurring so often now that not fixing it is no longer an option for merchants. They will have to go to NFC whether they like it or not.

I use ApplePay wherever I can now.

-Matt

Comment NFC alone isn't enough (Score 2, Informative) 122

You need NFC (which many Android devices have had for years)... but you also need an actual secure chip (not a software emulation or intermediary), the ability to initiate payment without having to wake the phone or type in a security code (i.e. a fingerprint reader), and you have to be able to do it with the phone locked and the screen off (meaning you need low-power hardware to detect the NFC field and wake the phone up). Then you need the OS integration to make it all work together seamlessly. And it has to not leak information to anyone except your bank, which obviously needs to have the information anyway... and there is no smartphone app on the market other than ApplePay which can make that guarantee. Certainly not Google Wallet. Or CurrentC. Or anything else. And it's better than chip-and-PIN and tap-to-pay, which both have physical security issues (though they are much better than mag stripe).

Android is missing too many pieces, and it will be at least 1-2 years before it has them all. And even then there will be such a huge percentage of *new* Android phones that don't have all the pieces that it will only create mass confusion for the general consumer.

The reason Google Wallet has been a failure to date is that it (like every other smartphone-based payment system except ApplePay) is simply not convenient to use compared to swiping a credit card. The reason ApplePay became the #1 smartphone payment mechanism overnight is that it's utterly trivial and convenient to use.

It took me exactly 3 seconds at the local Whole Foods to pull out my phone, tap it with my finger on the fingerprint reader, and put it back in my pocket. It takes me about as long to swipe my card if I don't have to sign, but half the time I do have to sign, so ApplePay immediately wins because I never have to sign (at least not so far).

Eventually all smartphones will do it the Apple way. For now, though, and for at least the next 1-2 years, Apple is the only smartphone game in town that actually works well. Chip-and-PIN and tap-to-pay cards work almost as well... they can even be more convenient in some situations, but they don't cover all the security bases.

-Matt
