Available every 6 months : http://www.hardware.fr/article...
Leap seconds are announced months in advance
i.e. with less warning than the revalidation time for a lot of safety-critical systems.
Hmm, hopefully safety-critical systems are implemented so that they have provisions for leap seconds already built in. What should be needed is an organizational procedure for setting the appropriate flag in time.
Further, I would expect that many safety-critical systems are more concerned with elapsed time from some epoch (switch-on, last firing of engine, last heartbeat) and less with civil calendar time (we meet on January 2nd, 2016 11:01:14 EST).
Finally, in really hairy cases things should be referred to a simpler, monotonic scale (TAI or, yuck, some domain-specific scheme).
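To illustrate the distinction, here is a minimal Python sketch of elapsed (monotonic) time versus wall-clock calendar time; the monotonic clock measures from an arbitrary epoch and is unaffected by leap seconds or NTP steps:

```python
import time

# Elapsed time from an epoch: use a monotonic clock. It never jumps
# backwards or repeats, so leap seconds and clock adjustments cannot
# confuse interval measurements.
start = time.monotonic()
time.sleep(0.1)
elapsed = time.monotonic() - start   # always >= the sleep interval

# Civil/calendar time: the wall clock. Around a leap second or an NTP
# correction this value can repeat or skip, so it is unsuitable for
# measuring elapsed time in a safety-critical control loop.
wall = time.time()

print(f"elapsed: {elapsed:.3f} s, wall clock: {wall:.0f}")
```

The same split exists in most environments (e.g. `CLOCK_MONOTONIC` vs `CLOCK_REALTIME` in POSIX): engine timers belong on the first, meeting schedules on the second.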
As I see it, this is a question about standardizing and implementing systems properly. Leap seconds are announced months in advance.
It can't be such a big problem to build systems that handle this correctly.
But then, daylight savings time still seems to give problems. Sheesh!
PS. Does anybody know of problems with leap days?
What seems to have happened is that instead of issuing all test certs for test.verisign.com as the procedure manual required, they had to modify the procedure when Symantec took over and they no longer had verisign.com.
So instead of doing what they should have done and using test.symantec.com or a test domain bought for the purpose, they typed the first name that entered their head.
Actually it doesn't. DANE certificates are not self-signed for a start, they are signed by the DNSSEC key for the zone.
The problem with DANE is that you swap the choice of multiple CAs for a monopoly run by ICANN, a shadowy corporation that charges a quarter million bucks for a TLD because that is what the market will bear. What do you think the price of DANE certification will rise to if it takes off?
ICANN is the Internet version of the NFL only with greater opportunities for peculation and enrichment.
Damn right they should. The CPS has a long section on the use of test hardware.
The problem is that all of the original team that built VeriSign have been gone for years. A lot of us left before the sale of the PKI business to Symantec. The PKI/DNS merger was not a happy or successful partnership. The original point of the merger was to deploy DNSSEC. That effort was then sabotaged by folks in the IETF and ICANN, which has delayed the project by at least 10 and possibly 20 years. ATLAS was originally designed to support DNSSEC.
Unfortunately, in PKI terms what VeriSign was to IBM, Symantec is to Lenovo.
They apparently remember the ceremonies we designed but not the purpose. So they are going through the motions but not the substance.
One of the main criticisms I have heard is that we built the system too well. From 1995 up to 2010 it worked almost without any issues. So people decided that they didn't need things like proper revocation infrastructure. The only recent issue the 1995 design could not have coped with was DigiNotar which was a complete CA breach.
There are some developments on the horizon in the PKI world that will help add controls to mitigate some of the issues arising since. But those depend on cryptographic techniques that won't be practical for mass adoption till we get our next generation ECC crypto fully specified.
A pre-certificate is created for use in the Certificate Transparency system. Introducing pre-certificates allows the CT log proof to be included in the certificate presented to an SSL/TLS server.
The CT system generates a proof that a pre-certificate has been enrolled in it. The proof is then added to the pre-certificate as an extension and the whole thing signed with the production key to make the actual certificate.
If the CT system logged the actual certificate, the proof of enrollment would only be available after the certificate had been created.
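As a rough sketch of that flow (everything here — `FakeLog`, `sign`, `issue_certificate` — is an illustrative stand-in, not any real CA or RFC 6962 API):

```python
import hashlib

def sign(data, key):
    # Stand-in for a cryptographic signature: just a keyed hash.
    return hashlib.sha256((repr(data) + key).encode()).hexdigest()

class FakeLog:
    """Toy Certificate Transparency log."""
    def __init__(self):
        self.entries = []

    def submit(self, precert):
        # The log enrolls the pre-certificate and returns its proof of
        # enrollment (a Signed Certificate Timestamp, SCT).
        self.entries.append(precert)
        return sign(precert, "log-key")

def issue_certificate(tbs_fields, ca_key, ct_log):
    # 1. Build the pre-certificate: the to-be-signed fields plus a
    #    "poison" extension so it cannot be used as a real TLS cert.
    precert = tbs_fields + ["ct-poison"]
    # 2. Enroll it in the log *before* the final cert exists, and
    #    receive the SCT back.
    sct = ct_log.submit(precert)
    # 3. Embed the SCT as an extension and sign with the production
    #    key to produce the actual certificate.
    final = tbs_fields + [("embedded-sct", sct)]
    return final, sign(final, ca_key)

log = FakeLog()
cert, sig = issue_certificate(["CN=example.com"], "ca-key", log)
```

The point of the ordering is visible in step 2: because the log sees the pre-certificate rather than the final certificate, the SCT exists in time to be baked into the certificate the server actually presents.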
I was truly happy when I heard that the Nobel prize had been awarded for the discovery and development of artemisinin. This drug has saved the lives of many.
Sad that substandard preparations of artemisinin have led to the spread of resistance in Indochina.
Thanks, tlhIngan. A balanced and sensible, informative post.
RAID5 has n data disks plus one dedicated parity-only disk; ZFS distributes all data and all parity across all disks
RAID-5 also spreads parity among all component disks: the parity disk rotates with each stripe. This is done to achieve higher throughput on reads, since otherwise one disk would always sit idle under read workloads.
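For example, in the common "left-symmetric" RAID-5 layout the parity position can be sketched as below (the exact rotation direction varies by implementation; this is just one conventional scheme):

```python
def parity_disk(stripe, n_disks):
    """Left-symmetric RAID-5: parity rotates one disk per stripe."""
    return (n_disks - 1 - stripe) % n_disks

# On a 4-disk array, parity lands on a different disk each stripe,
# so read load is spread across all members.
n = 4
for stripe in range(n):
    row = ["P" if d == parity_disk(stripe, n) else "D" for d in range(n)]
    print(f"stripe {stripe}: {row}")
```

Over any run of `n` consecutive stripes, each disk holds parity exactly once, which is why no single disk sits idle on reads.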
ZFS updates metadata before data
Actually, ZFS updates metadata together with user data, but the trick is that the update is never performed in place. So what happens is that we write user data along with nearly all the metadata needed to access it. Then, once everything has finished writing (and has been sync'ed to stable storage), we update the root block pointers to point to the new metadata tree and again, sync those. In this respect ZFS is much more like an ACID-compliant database than just a conventional filesystem.
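A toy sketch of that copy-on-write commit sequence (nothing here is real ZFS code; `Storage`, `commit`, and the block layout are invented for illustration):

```python
class Storage:
    """Toy copy-on-write store: blocks are never overwritten in place."""
    def __init__(self):
        self.blocks = {}      # block address -> contents
        self.root = None      # "uberblock": pointer to the current tree
        self.next_addr = 0

    def write_block(self, contents):
        # Always allocate a fresh address; never overwrite in place.
        addr = self.next_addr
        self.next_addr += 1
        self.blocks[addr] = contents
        return addr

    def commit(self, user_data):
        # 1. Write user data and the metadata that points to it, all
        #    out of place.
        data_addr = self.write_block(("data", user_data))
        meta_addr = self.write_block(("meta", data_addr))
        # 2. (...sync to stable storage here...)
        # 3. Only then flip the root pointer: this single update is
        #    the commit point. A crash before it leaves the old tree
        #    fully intact; a crash after it leaves the new one.
        self.root = meta_addr

s = Storage()
s.commit("hello")
old_root = s.root
s.commit("world")
# The old tree ("hello") is still present on disk; only root moved.
```

That final pointer flip is what gives the transactional, database-like behavior: the on-disk state is always either entirely the old tree or entirely the new one, never a mix.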
The write hole in btrfs is AFAIK also present in zfs, and is listed as a risk of a power failure during a write on a raid pool with COW filesystems.
The problem you describe makes no sense in ZFS. ZFS never overwrites in place, and a synchronous write is not acknowledged until all component devices (including parity) have sync'ed to stable storage. ZFS will never ever try to read a partially written stripe block (simply because it has no pointers to it yet). After a synchronous write (O_SYNC) returns, it is guaranteed to have all of its data available, regardless of whether it was overwriting a portion of a file in place or appending new data to a file.
I think you're misunderstanding how raid-z actually works. raid-z is kinda like RAID-5, but not completely and it's this difference that allows ZFS to not have a write hole at all. All writes to a raid-z, regardless of size, are "full-stripe". The key in ZFS is that there is no fixed stripe size. I'd recommend Jeff Bonwick's original article on raid-z for a writeup of the principles and Matt Ahrens' article ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ for a nice diagram illustrating the layout.
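As a back-of-the-envelope illustration of variable stripe width (simplified: real RAID-Z also inserts padding sectors, and this `raidz_layout` helper is invented, not ZFS source):

```python
def raidz_layout(n_data_sectors, n_disks):
    """Sectors consumed by one logical write on a raidz1 vdev.

    Every write is its own full stripe: it carries its own parity,
    sized to the write, so no stripe is ever partially updated.
    Returns (parity_sectors, total_sectors).
    """
    data_per_row = n_disks - 1                  # one parity sector per row
    rows = -(-n_data_sectors // data_per_row)   # ceiling division
    return rows, n_data_sectors + rows

# On a 5-disk raidz1:
print(raidz_layout(1, 5))   # a 1-sector write still gets its own parity
print(raidz_layout(4, 5))   # a full row: 4 data + 1 parity
print(raidz_layout(9, 5))   # larger writes span multiple rows
```

Because parity always belongs to exactly one write, a crash mid-write can only lose that write (which was never referenced yet, per the COW commit rule), never silently corrupt neighbouring data sharing the stripe, which is the classic RAID-5 write hole.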
It's irrelevant whether the farmer contaminated his crops accidentally or deliberately, the problem is Monsanto having the patent in the first place.
So if I'm understanding you right, you want GMO technology not to be patentable subject matter. Do you think that advancement in this avenue of technology is not an overall benefit? Or do you believe there's a better approach to achieving it, for example through publicly (i.e. governmentally) funded R&D projects? In my experience, the anti-genetic-patent camp significantly overlaps with the general anti-GMO camp, simply because they believe GMO does not have significant-enough benefit to society, and the anti-genetic-patent stance is merely a vehicle used to achieve the goal of ending GMO development. Which is it? Do you support GMO technology and simply want to terminate its patentability (in which case I ask what alternative means would encourage its R&D), or do you object to GMO as a technology in general and are merely using the patentability argument as a tool to achieve another goal?
If they have the ability to sue anyone who infringes their patent
Yes, that's how patents work.
then depending on their goodwill to only go after people who tell them they've done it deliberately is incredibly naive, regardless of any other evidence of what they've actually done
Wait, so are you saying that nobody should be able to sue for patent infringement, even if the plaintiff believes that willful infringement can be demonstrated in court? I'm probably misunderstanding you here. Can you please rephrase that last sentence?
Unix is the worst operating system; except for all others. -- Berry Kercheval