When was the TPP signed?
The oldest CD in my collection was manufactured around 1990.
When I rip it on the computer and verify the checksum against AccurateRip, the audio is 100%, bit-for-bit, perfect. The CD returns the exact same signal that it did 25 years ago. I'm confident that in 25 years' time, if someone wanted to play it, it would still be bit-for-bit perfect.
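The "bit-for-bit" claim is easy to demonstrate. AccurateRip itself uses its own CRC scheme checked against an online database of other people's rips, but the underlying idea — hash the decoded PCM and compare digests — can be sketched with an ordinary cryptographic hash. This is an illustration, not the actual AccurateRip algorithm; the PCM data here is a placeholder.

```python
import hashlib

def rip_fingerprint(pcm: bytes) -> str:
    """Return a hex digest of the raw PCM audio data from a CD rip."""
    return hashlib.sha256(pcm).hexdigest()

# Simulate ripping the same disc twice, decades apart. Real code would
# read the decoded samples from the ripped WAV/FLAC files instead.
rip_1990 = bytes(range(256)) * 4   # placeholder for real PCM samples
rip_2015 = bytes(range(256)) * 4   # an identical, error-free re-rip

if rip_fingerprint(rip_1990) == rip_fingerprint(rip_2015):
    print("bit-for-bit identical")
```

If even a single bit of the audio differed between the two rips, the digests would differ — which is exactly why a matching AccurateRip result is such strong evidence the disc still reads perfectly.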
An LP cannot do that. Each time you play it, the sound gets worse. Scratches, ticks and pops all get added. Dust ends up in the grooves which you simply can't completely remove.
So while I get that people prefer the sound of an LP, let's not pretend that this is about audio quality or reproduction, or about longevity. It's about adding artificial - analogue - effects on top of the work originally created by the artist.
Setting aside the fact that even brand new, a vinyl LP will add pops, ticks, surface noise and other distortion to the sound
"Californication" is an exception among albums from that era in that it was recorded and mixed on analogue tape. If the vinyl is cut from that master, then you are fine (excepting the limitations inherent to LPs). If the vinyl is cut from the next-generation digital master used to press the CDs, you've been fooled by hipster exuberance.
I don't think the gauge is as big a deal as people are making out (not to gainsay the experts).
In Ireland we have a non-standard gauge. The locomotives, rolling stock and track maintenance equipment are all standard designs which have been modified to fit the required gauge. It's certainly more of a pain than it would be if we had standard gauge, but acquiring new rolling stock, as has been done a lot recently, hasn't proved to be a problem at all. Until recent years, before Ireland could afford to buy rolling stock brand new, the railway company acquired "new" railway carriages by obtaining used British carriages and modifying the wheelsets in its own workshops. If Ireland - not by any means an industrial powerhouse - can do this, it surely shouldn't be beyond the wit of folks in California.
I think it's the 1000VDC electrics that are the real problem. To solve this long term they'd have to deploy a second power distribution system - maybe an additional power rail, set up in such a way as not to interfere with existing stock. It'd then be a case of either phasing out or retrofitting the existing stock while deploying new stock. The London and New York systems both run at around 600-630VDC, roughly two-thirds of this; the Paris Métro runs at 750VDC, three-quarters.
I'm sure the clever people at BART have thought of all this. At the end of the day the problem really comes down to money.
How would a modern pressurized water reactor cope with a loss of coolant?
The problems at Fukushima occurred when the external pumps which maintained coolant flow - required to cool the reactor for several days even after it has been completely shut down - failed due to water ingress. Isn't a PWR susceptible to the same problem?
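The "several days of cooling after shutdown" point can be put in rough numbers with the Way-Wigner correlation for decay heat. The coefficient varies slightly between sources (0.062-0.066), so treat this as an order-of-magnitude sketch, not reactor engineering:

```python
def decay_heat_fraction(t_shutdown_s: float, t_operating_s: float) -> float:
    """Approximate decay heat as a fraction of full thermal power,
    using the Way-Wigner correlation (all times in seconds).
    Coefficient 0.066 is one commonly quoted value; sources vary."""
    return 0.066 * (t_shutdown_s ** -0.2
                    - (t_shutdown_s + t_operating_s) ** -0.2)

one_year = 365 * 86400.0  # assume a year of full-power operation
for label, t in [("10 seconds", 10.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 week", 7 * 86400.0)]:
    frac = decay_heat_fraction(t, one_year)
    print(f"{label:>10} after shutdown: {frac * 100:.2f}% of full power")
```

A day after shutdown the core is still producing roughly half a percent of full thermal power - for a ~3GW(th) plant that's on the order of 15MW of heat that must be removed continuously, which is why losing the pumps is so serious.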
I certainly agree that modern PWRs, and other designs such as CANDU, are very safe in the sense that it is impossible for operator error to cause a runaway reaction. However, if a Fukushima-like incident overwhelms all your backup plans, you're still in trouble.
Another day, another "breakthrough in energy storage tech" vapourware article.
That's not a fair characterisation. I tested upgrading a ZFS pool; it could do live, in-place upgrades just fine. So you could swap out a RAID array of disks just by switching each disk in turn and waiting for the rebuild. Of course it is an enterprise FS, and intended for people who'd rather add a new shelf of disks than run the risk of downtime.
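The disk-swap workflow described above looks roughly like the following transcript. It assumes a hypothetical pool named `tank` and disk device names `sda`/`sdd`; the commands need root and a live ZFS pool, so this is a sketch of the procedure rather than something to paste blindly:

```shell
# Hypothetical pool "tank"; substitute your own pool and device names.
zpool set autoexpand=on tank    # let the pool grow once every disk is larger

# For each disk in the array, one at a time:
zpool replace tank sda sdd      # swap old disk sda for new disk sdd
zpool status tank               # watch the resilver progress
# ...wait until the resilver has completed before touching the next disk
```

Once the last disk has resilvered, the extra capacity becomes available with no downtime at any point - which is the whole selling point for the "add a shelf rather than risk an outage" crowd.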
ZFS never made any secret of the fact that it was designed to use lots of (cheap) RAM and an SSD backend to get consistent performance.
Sure. BTRFS is, arguably, a disappointment. It has a lot of the features of ZFS, but it doesn't seem to be anything like as user-friendly or logical. When I played with it for the first time about a year ago, I found no out-of-the-box automated snapshot features, and had to install a package to handle this, which didn't seem to work reliably. It's also an enormous missed opportunity that BTRFS doesn't have the ability to natively do RAID-6, or to use an SSD as a fast block cache in front of standard HDDs. ZFS has done all of this for many years now.
I'm sure someone will reply with details of how to do these things. But with ZFS it was obvious and I didn't need to spend a lot of time googling.
That said, there's nothing that says that ZFS on Linux is automatically stable either. It's been necessary to extensively modify it in order to make it work. Running either of these two filesystems in production on Linux would be a risky proposition. It's no wonder the major enterprise vendors haven't switched to use them yet.
Unfortunately at the moment it looks like the short term future of filesystems on Linux is based around XFS. Longer term, bcachefs looks interesting.
Battery research is far more important than building smaller phones and tablets. Increased energy storage density has important implications for household and grid storage, and electric-powered transport.
The problem is that there have been at least a dozen or so stories about new battery tech in the past 12 months. Some of them remind you of the old joke about nuclear fusion; it's always 20 years away. Enough crying wolf; wake me when I can buy one.
Last year the UK finally passed legislation
No it didn't, as the article linked shows. The Government (via the Intellectual Property Office) issued guidance.
After the legislation passed, several groups of rightsholders applied for a judicial review, arguing that the change would cause financial harm to them.
FYI - primary legislation in the UK cannot be overturned by a court. Secondary legislation, such as a statutory instrument, can be quashed on judicial review - which is what happened here.
The Amiga was not only first, it was still the only pre-emptively multitasking OS on the mainstream desktop in 1990. Windows didn't support pre-emptive multitasking until Windows 95 came out, and if I'm right neither did the Mac.
Google and Backblaze are not typical enterprise deployments. Each company has built what is effectively an entirely in-house proprietary SAN, with a large dedicated team to maintain it. Regular enterprises, i.e. people who are not in the storage business, cannot do this.
FYI, 15000RPM SAS drives will provide significantly more IOPS per disk than 7200RPM SATA drives. Depending on the application, that may be important. The firmware on SAS drives also tends to be tuned for heavily random workloads, and for operation within a RAID array. Cheaper SATA drives come with a shorter warranty and conditions on how frequently they are in active use. Outside of these differences, yes, the drives are essentially the same.
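The IOPS gap falls straight out of a back-of-the-envelope service-time model: each small random read costs roughly one average seek plus half a rotation. The seek times below are assumed, typical figures for illustration, not taken from any particular datasheet:

```python
def random_read_iops(rpm: float, avg_seek_ms: float) -> float:
    """Rough upper bound on small random-read IOPS for a spinning disk:
    each I/O costs an average seek plus half a rotation."""
    rotational_latency_ms = 60000.0 / rpm / 2.0   # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Assumed, typical seek times -- illustrative only.
sas_15k  = random_read_iops(15000, 3.5)
sata_7k2 = random_read_iops(7200, 8.5)
print(f"15k SAS  : {sas_15k:.0f} IOPS")
print(f"7.2k SATA: {sata_7k2:.0f} IOPS")
```

On these assumptions the 15k SAS drive manages a bit over twice the random IOPS of the 7.2k SATA drive, before the firmware tuning mentioned above even comes into it.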
I suspect SSDs will eliminate most of the remaining business case for deploying SAS drives in the near future as the cost per gigabyte continues to fall.