Comment Re:doh! (Score 2, Informative) 528

Obama didn't release his birth certificate for one very good reason: he is very clever and Trump is very stupid.

The fact is that the Republicans will always invent some crazy, idiotic 'scandal' to obsess about and endlessly throw up smoke around. The birther conspiracy was mind-numbingly ridiculous: it would require someone to go back in time to plant the birth notice in the papers, or a group of conspirators to go to an enormous amount of trouble to make one particular black kid president.

So rather than release the birth certificate and let the Republicans invent a new scandal, Obama held onto it and let them obsess over a scandal nobody else thought made the slightest sense, knowing that he could knock their house of cards down any time he chose. Which, of course, he did a week before the Bin Laden raid, which was guaranteed to end the story.

George W. Bush opened torture chambers across the world and collected photographs for a sick sexual thrill. Yet nobody ever talks about that. None of the people complaining about Hillary ever complained about GWB refusing to comply with Congressional investigations or the deletion of 5 million emails.

So here is what is going to happen. Trump is going to go down to the biggest defeat since Carter's, and he is going to drag the rest of his party down with him. And afterwards there is going to be a new civil rights act that prohibits Republican voter suppression tactics and the gerrymandering that gives them a 5% advantage in elections. And by the time it is all done the Republican party will have two choices: either boot the racist conspiracy theorists and Trumpists out, or face two decades in the wilderness.

Comment This isn't a victory for Behring-Breivik. (Score 3, Insightful) 491

Someone once pointed out that hoping a rapist gets raped in prison isn't a victory for his victim(s), even though it seems to give him what he had coming to him; it's actually a victory for rape and violence. I wish I could remember who said that, because they are right. The score doesn't go Rapist: 1, World: 1. It goes Rape: 2.

What this man did is unspeakable, and he absolutely deserves to spend the rest of his life in prison. If he needs to be kept away from other prisoners as a safety issue, there are ways to do that without keeping him in solitary confinement, which has been shown conclusively to be profoundly cruel and harmful.

Putting him in solitary confinement as a punitive measure is not a victory for the good people of the world. It's a victory for the inhumane treatment of human beings. This ruling is, in my opinion, very good and very strong for human rights, *precisely* because it was brought by such a despicable and horrible person. It affirms that all of us have basic human rights, even the absolute worst of us on this planet.

Comment Re:Wny did they need the certificates? (Score 1) 95

Issuing for .test and .local is strictly prohibited by the CA/Browser Forum EV requirements, and it will soon be outlawed for DV certificates under the Baseline Requirements as well.

What seems to have happened is that the procedure manual required all test certs to be issued for test.verisign.com; when Symantec took over, they no longer had verisign.com, so the procedure had to change.

So instead of doing what they should have done and using test.symantec.com or a test domain bought for the purpose, they typed the first name that entered their head.

Comment Re:Self Signed (Score 1) 95

Actually it doesn't. For a start, DANE certificates are not self-signed; they are signed by the DNSSEC key for the zone.

The problem with DANE is that you swap the choice of multiple CAs for a monopoly run by ICANN, a shadowy corporation that charges a quarter million bucks for a TLD because that is what the market will bear. What do you think the price of DANE certification will rise to if it takes off?

ICANN is the Internet version of the NFL only with greater opportunities for peculation and enrichment.
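For context on the mechanics: DANE publishes the certificate association as a TLSA record in the (DNSSEC-signed) zone, per RFC 6698. A minimal sketch for a web server on port 443, with the digest value left as a placeholder:

```
; usage=3 (DANE-EE), selector=1 (SubjectPublicKeyInfo), matching=1 (SHA-256)
_443._tcp.example.com. IN TLSA 3 1 1 <sha256-hex-of-server-SPKI>
```

The record itself is trivial to publish; the trust (and the monopoly concern above) comes entirely from the DNSSEC signature chain up through the TLD to the ICANN-operated root.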

Comment Re:Wny did they need the certificates? (Score 1) 95

Damn right they should. The CPS has a long section on the use of test hardware.

The problem is that the original team that built VeriSign has been gone for years. A lot of us left before the sale of the PKI business to Symantec. The PKI/DNS merger was not a happy or successful partnership. The original point of the merger was to deploy DNSSEC. That effort was then sabotaged by folk in the IETF and ICANN, which has delayed the project by at least 10 and possibly 20 years. ATLAS was originally designed to support DNSSEC.

Unfortunately, in PKI terms what VeriSign was to IBM, Symantec is to Lenovo.

They apparently remember the ceremonies we designed but not the purpose. So they are going through the motions but not the substance.

One of the main criticisms I have heard is that we built the system too well. From 1995 up to 2010 it worked almost without any issues. So people decided that they didn't need things like a proper revocation infrastructure. The only recent issue the 1995 design could not have coped with was DigiNotar, which was a complete CA breach.

There are some developments on the horizon in the PKI world that will help add controls to mitigate some of the issues that have arisen since. But those depend on cryptographic techniques that won't be practical for mass adoption until we get our next-generation ECC crypto fully specified.

Comment Re:What is a pre-certificate? (Score 3, Informative) 95

A pre-certificate is created for use in the Certificate Transparency system. Introducing pre-certificates allows the CT log proof to be included in the certificate that an SSL/TLS server presents.

The CT system generates a proof that a pre-certificate has been enrolled in the log. That proof is then added to the certificate body as an extension, and the whole thing is signed with the production key to make the actual certificate.

If the CT system logged the actual certificate, the proof of enrollment would only be available after the certificate had been created.
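The ordering that pre-certificates buy you can be sketched as follows. This is a toy model only: plain dicts and HMACs stand in for X.509 structures and real signatures, and none of the names match the actual RFC 6962 encoding.

```python
# Toy sketch of the Certificate Transparency pre-certificate flow:
# (1) the CA enrolls the precert body in the log, (2) the log returns a
# Signed Certificate Timestamp (SCT), (3) the CA embeds the SCT as an
# extension and only then signs with its production key.

import hashlib
import hmac
import json
import time

LOG_KEY = b"ct-log-demo-key"   # stand-in for the CT log's signing key
CA_KEY = b"ca-demo-key"        # stand-in for the CA's production key

def log_precertificate(tbs: dict) -> dict:
    """The log signs a timestamped statement over the precert body,
    producing an SCT -- proof of enrollment that exists BEFORE the
    final certificate does."""
    payload = json.dumps(tbs, sort_keys=True).encode()
    return {
        "timestamp": int(time.time()),
        "signature": hmac.new(LOG_KEY, payload, hashlib.sha256).hexdigest(),
    }

def issue_certificate(tbs: dict, sct: dict) -> dict:
    """The CA embeds the SCT as an extension and signs the whole body
    with the production key, yielding the certificate servers present."""
    body = dict(tbs, sct_extension=sct)
    payload = json.dumps(body, sort_keys=True).encode()
    return dict(body, ca_signature=hmac.new(CA_KEY, payload, hashlib.sha256).hexdigest())

tbs = {"subject": "example.com", "not_after": "2017-01-01"}
sct = log_precertificate(tbs)       # step 1-2: enroll precert, get proof
cert = issue_certificate(tbs, sct)  # step 3: embed proof, then sign
assert cert["sct_extension"] == sct
```

If you tried to log the final certificate instead, step 1 would need the output of step 3 as its input, which is exactly the circularity the pre-certificate removes.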

Comment Roll it yourself but take responsibility (Score 1) 219

Supermicro has 36- and 72-drive 4U chassis that aren't horrible in terms of human effort (you can get 90-drive chassis, but I wouldn't recommend them). You COULD get 8TB drives for around 9.5 cents/GB (including the ~$10k 4U chassis overhead). 4TB drives will be more practical for rebuilds (and performance), but will push you to near 11 cents/GB. You can go with 1TB or even 512GB drives for performance (and faster rebuilds), but now you're up to 35 cents/GB.

That's roughly 288TB raw for, say, $30k per 4U. If you need 1/2 PB, I'd spec out 1.5PB - thus you're at $175k..$200k. But you can grow into it.
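The back-of-envelope arithmetic works out roughly like this (the $30k-per-loaded-4U figure is the ballpark assumption above, not a quote):

```python
# Cost model for one loaded 4U storage chassis.
RAW_TB = 36 * 8          # 36 x 8TB drives in one 4U chassis = 288TB raw
COST_4U = 30_000         # ~$30k per loaded 4U, chassis overhead included

# Dollars per GB, expressed in cents.
cents_per_gb = 100 * COST_4U / (RAW_TB * 1000)

# Chassis needed to spec out 1.5PB of raw capacity (ceiling division).
units_for_1_5pb = -(-1500 // RAW_TB)
total_cost = units_for_1_5pb * COST_4U

print(RAW_TB, round(cents_per_gb, 1), units_for_1_5pb, total_cost)
```

That gives about 10 cents/GB and six chassis (~$180k) for 1.5PB raw, which is where the $175k..$200k range comes from before replication overhead.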

Note this is for ARCHIVE use, as you're not going to get any real performance out of it: there's not enough CPU per disk. I'm not even sure the motherboard can saturate a 40Gbps QSFP link and a $30k switch. That's part of why Hadoop, with cheap 1-CPU boxes plus 4 direct-attached HDs, is so popular.

At that size, I wouldn't recommend just RAID-1ing, LVMing, ext4ing (or btrfsing), then n-way foldering, then NFS mounting - you'll have problems when hosts go down, and with keeping any of the network from stalling or timing out.

Note, you don't want to 'back up' this kind of system; you need point-in-time snapshots, and MAYBE periodic write-to-tape. Copying is out of the question, so you need a file system that doesn't let you corrupt your data. Data DEFINITELY has to replicate across multiple machines - you MUST assume hardware failure.

The problem is going to be partial network downtime, crashes, stalls, and regularly replacing failed drives. This kind of system is defined by how well it performs when 1/3 of your disks are in week-long rebuild periods. Some systems (like HDFS) don't care about hardware failure: there's no rebuild, just a constant sea of scheduled data migration.

If you only ever schedule temporary bursts to 80% capacity (and even that is probably too high), and have a system that spends only 50% of disk IO on rebuild, then a 4TB disk would take about 12 hours to re-replicate. With an intelligent system (EMC, NetApp, DDN, HDFS, etc.) you could get that down to 2 hours per disk, thanks to many-to-many cross-rebuilding.
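A quick sanity check on those rebuild figures. The ~185 MB/s sustained disk throughput and the 6-way rebuild fan-out are assumptions chosen to illustrate the arithmetic, not measured numbers:

```python
# Rebuild-time estimate for re-replicating one failed disk.
DISK_TB = 4                 # capacity of the failed disk
SUSTAINED_MBPS = 185        # assumed sequential throughput of one spindle
REBUILD_SHARE = 0.5         # only 50% of disk IO is budgeted for rebuild

# Hours to stream 4TB through half of one disk's bandwidth.
rebuild_hours = (DISK_TB * 1e6) / (SUSTAINED_MBPS * REBUILD_SHARE) / 3600
print(round(rebuild_hours))  # single-pair rebuild: ~12 hours

# A cross-rebuilding system reads from / writes to many disks at once,
# so the wall-clock time divides by the effective fan-out.
fanout = 6                   # assumed parallel source/target pairs
print(round(rebuild_hours / fanout))  # ~2 hours
```

The point is that the 12-hour vs 2-hour difference is purely a parallelism win: the same number of bytes move, but across many spindle pairs instead of one.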

I'm a big fan of object file systems (generally HTTP-based); they work well with 3-way redundancy. You can typically fake out a POSIX-like file system with FUSE, and you could even emulate CIFS or NFS. It's not going to be as responsive (high latency). Think S3.

There are also "experimental" POSIX systems like Ceph, GPFS, and Lustre. They're very easy to screw up if you don't know what you're doing, and really painful to reformat after you've learned it's not tuned for your use case.

HDFS will work - but it's mostly for running jobs on the data.

There's also AFS.

If you can afford it, there are commercial systems that do exactly what you want, but you'll need to triple the cost again. Just don't expect a fault-tolerant multi-host storage solution to be as fast as even a dedicated laptop drive. And remember when testing: you're not going to be the only one using the system. Benchmarks perform very differently under disk-recovery load, or under random scatter-shot load from random elements of the system - including copying in all that data.

Comment Git for large files (Score 1) 383

Git is an excellent system, but it is less efficient with large files. This makes certain work-flows difficult to put into a git repository, e.g. storing compiled binaries, or having non-trivial test data sets. Given the 'cloud', do you foresee a version of git that uses 'web resources' as SHA-able entities, to mitigate the proliferation of pack-file copies of such large files? Otherwise, do you have any thoughts or strategies for how to deal with large files?

Comment network-operating systems (Score 1) 383

Have you ever considered a network-transparent OS layer? If not, why not? I once saw QNX and how the command line made little distinction about which node you were physically on (run X on node 3; ps across all nodes). You could run commands on pretty much any node of consequence. I've ALWAYS wanted this capability in Linux... cluster-ssh is about as close as I've ever gotten, and these days hadoop/storm/etc. give a half-assed approximation.
