
Comment Re: For work I use really bad passwords (Score 1) 136

That's fair, but it's also slightly different from your original proposal, since it now explicitly requires custom dedicated hardware. You originally just stipulated "hardware assist" and allowed for a "trusted desktop" or other hardware (e.g. smartphone/tablet/etc.).

It doesn't require the dedicated hardware; that's just an option (one that doesn't exist yet...). I think it's likely a better option than products like the Mooltipass.

I use this approach currently, since I basically trust my desktops. I can also ssh to a server I trust, which is capable of doing it. You could do it now on a smartphone, but that's a tough platform to lock down. If you're desperate, you could find a website that can do it for you (googled quickly): http://pajhome.org.uk/crypt/md.... Regardless of whether it's a full desktop, a smartphone, or a keyfob, the general characteristics are always the same: never store the secret, never directly perform the authentication, never store secret keys (although those could be added as another layer).

You definitely never need to worry about compromised sites:
hashlib.sha256('PrivateSimplePass+OnlinePoker.com'.encode('utf-8')).hexdigest()[:16] = '2afd111a2ddde285'
When their site gets compromised, your password is easy to change:
hashlib.sha256('PrivateSimplePass+YourSecuritySucks'.encode('utf-8')).hexdigest()[:16] = 'fead5a3bbde90be3'
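
For reference, a minimal runnable version of the above (Python 3; the make_site_password name and function packaging are mine):

import hashlib

def make_site_password(master, site, length=16):
    # Deterministically derive a site-unique password from a memorable
    # master phrase. Nothing secret is ever stored or transmitted; the
    # hash is simply recomputed on demand.
    digest = hashlib.sha256((master + '+' + site).encode('utf-8')).hexdigest()
    return digest[:length]

# Should reproduce the first digest above, assuming those example values
# were computed the same way:
print(make_site_password('PrivateSimplePass', 'OnlinePoker.com'))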

I do agree that a password safe combo would be the best option, since it's just not important to really lock down every password.

Comment Re: For work I use really bad passwords (Score 1) 136

They're close to the same thing, but they differ in the important places. An algorithm-on-a-chip (with a tiny keypad and LCD) never stores any sensitive data. It's never connected to a potentially-compromised desktop. And it can't be brute-forced, since there's nothing present to "unlock".

It could possibly store non-sensitive data, like usernames, "123!@#" modifiers, or notes, but it's not required.

I will admit that it could be inconvenient, but I think it's a reasonable tradeoff for the simplicity and security.

Comment Re: For work I use really bad passwords (Score 1) 136

You're making the mistake of thinking that your password system and their requirements need to be integrated: they don't. You can concatenate a strong password system with their weak requirements, and the result is still strong. For example, appending a fixed "Aa1!" to a random 16-character hash satisfies most complexity rules without costing any strength.

The only time it gets weaker is when they enforce a maximum length. Then you have to start dropping your secure input in favor of their weak requirements. But even in that situation, your (internal) password/phrase isn't compromised, only the public version they get. Too bad for them.

Comment Re: For work I use really bad passwords (Score 1) 136

I discussed the details of how you can do it here: http://it.slashdot.org/comment...

It's really the only solution. There are 2 modern threats to passwords: computationally weak passwords and compromised servers with poor practices.

It's easy to make a computationally strong password, and it's not hard to make it memorable. But poor HR/IT policies like the ones described here compromise good passwords (forcing rapid changes, disallowing long passwords, etc.). So memorable passwords are not easy in practice.

On the other hand, there is absolutely nothing you can do to fix the possibility of server-side password leakage, aside from avoiding inter-site re-use.

The parameters which solve these two issues are really obvious: never provide any server with a password which is not 1) unique, and 2) effectively random.

Once you're that far, it's also obvious how to get from something memorable to something unique and random: you take something simple, salt it, and encrypt/hash it. There is one additional step of complexity: use a non-secure transform to convert your random hash into an IT-approved password. If they want a special character and an uppercase letter, go ahead and add/replace to get those characters. It doesn't matter if those characters are secure, since the rest of your password is: put 123!@# on the end of every password if you want.
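
As a sketch of that whole pipeline (Python 3; the function name and default suffix are mine, and any suffix that satisfies the local IT rules works just as well):

import hashlib

def it_approved_password(memorable, site_salt, policy_suffix='123!@#Aa', length=16):
    # 1) salt the memorable phrase with a per-site value and hash it,
    #    which makes the result unique per site and effectively random
    digest = hashlib.sha256((memorable + site_salt).encode('utf-8')).hexdigest()
    # 2) non-secure transform: bolt on whatever the local policy demands;
    #    the suffix carries no entropy, and it doesn't need to
    return digest[:length] + policy_suffix

print(it_approved_password('correct horse', 'OnlinePoker.com'))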

The only problem left is that we can't compute hashes in our heads, but there are hardware answers to that. The only place this falls short is when you're not permitted by policy to bring a device with you, and there is no trusted hardware on-site (a desktop) capable of computing a hash.

Comment The problem isn't the format of the data... (Score 4, Insightful) 23

Although you have a point, you don't understand the realities of science, data, and publishing.

Journal articles never contain sufficient information to replicate an experiment. That's been reported multiple times, and it was also discussed here previously, if indirectly: in particular, there was the study about how difficult or impossible it is to reproduce published research. Many jumped into the fray with fraud claims when that report hit, but the reality is that it's just not possible to lay out every little detail in a publication, and those details matter a LOT. As a consequence, it takes a highly trained individual to carefully interpret the methods described in a journal article, and even then their success rate in reproducing the protocols won't be terrific.

The data is not hidden behind paywalls: there is minimal useful data in the publications to begin with. Of course, the paywalls do hide the methods descriptions, which is pretty bad.

There are two major obstacles to the dissemination of useful data. The first is that the metadata is nearly always absent or incomplete; the format issue is a subset of this problem. The second is that data is still "expensive" enough that we can't trivially keep a copy of all of it. This means it requires careful stewardship if it's going to be archived, and no one is paying for that.

Comment science driven science? (Score 2) 55

This particular push may not be effective, but it's not hype.

Science may be data-driven, but historically scientists have not been trained to be good data custodians. They know reasonably well how to use data, but they don't know how to store it, label it, transfer it, etc. Go pick a data-heavy article from 5 years ago and try to get the original dataset from the authors: 95 times out of 100 you'll spend a month emailing people and end up with nothing. In another four out of the 100, you'll get an Excel spreadsheet without labels on the columns. Scientists desperately need to become better at managing data.

Personally, I think that this program is targeting a small subset of the people who need help, and as such it won't be very effective. These look like infrastructure projects, but infrastructure only drives trends in extremely rare cases. Here's a quote from one funded proposal:

This project develops web-based building blocks and cyberinfrastructure to enable easy sharing and streaming of transient data and preliminary results from computing resources to a variety of platforms, from mobile devices to workstations, making it possible to quickly and conveniently view and assess results and provide an essential missing component in High Performance Computing and cloud computing infrastructure.

Will that project help teach scientists that they shouldn't email files to themselves as a method of long-term archival? (Yes, that practice really is extremely common.) We should be focusing on building data tools which are extremely simple and extremely broad in scope, and on encouraging or forcing adoption of those tools.

Comment Re:Unfamiliar (Score 1) 370

So "p" is the probability of a drive being down at any given time. A hard drive takes a day to replace, and has a 5% chance of going dead in a year. A given hard drive has a "p" of ~1.4e-4.

For RAID6 with 8 drives, you can drop 2 independent drives, so failure means 3+ down at once: roughly C(8,3)*p^3, so failure = 1.4e-10. It's out in the 6+ nines.

It would take 6 sets of mirrors to get the same space. Each mirror has a failure probability of p^2, 1.9e-8. Striped over the mirrors, all sets have to stay active: success = (1-p^2)^6, so failure = 1.1e-7. Way easier to calculate without the binomial coefficient, by the way.

Technically, the mirrors are about 3 orders of magnitude more likely to fail, but the odds are still ridiculously good. Fill a 4U with 22 drives as mirrors (leaving some bays free for hot-swaps) and failure = 2e-7. Statistically, neither of these is going to happen: you just won't see two drives go down together by random chance.

People already know this. There are much more advanced models that account for the what-happens-next situation after you've already lost a single drive, and of course it's non-linearly worse. But to keep it simple and stay with the naive model: for the RAID6 with 7 remaining drives, the failure probability is now up to 4e-7 during the re-silver window. The mirror model jumps to a "huge" failure = 1.4e-4 during a re-silver, but that window is brief, predictable, and has low system impact. My stance is that probabilities at that level stay in the less-important category compared to many other factors in a risk analysis.
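
A quick sketch of the naive model above (Python 3.8+ for math.comb; the function name and packaging are mine):

from math import comb

p = 0.05 / 365  # chance a given drive is down at any moment (1-day swap)

def array_failure(n, tolerated, p):
    # probability that more than `tolerated` of `n` drives are down at once
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(tolerated + 1, n + 1))

print(array_failure(8, 2, p))  # RAID6, 8 drives: ~1.4e-10
print(1 - (1 - p**2)**6)       # 6 striped mirrors: ~1.1e-7
print(array_failure(7, 1, p))  # degraded RAID6, 7 drives left: ~4e-7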

Comment Re:above, below, and at the same level. ZFS is eve (Score 1) 370

Sorry, I'm not that familiar with OpenSolaris.
Don't the first and second commands create a zpool backed by a file? That's not what's in question here; I want to know if you can back a zpool with a zvol created on that same zpool.

A quick test showed that it does work on FreeBSD to create a zpool on top of a zvol from a different zpool. The circular version has made it hang for a not-insignificant amount of time...
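
For the record, the non-circular test amounted to something like this (pool names are mine; on FreeBSD the zvol shows up under /dev/zvol):

# create a 1 GB zvol on the first pool
zfs create -V 1g poolA/testvol
# back a brand-new pool with that zvol
zpool create poolB /dev/zvol/poolA/testvol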

Comment Re:Unfamiliar (Score 1) 370

That's a nice writeup.
I'm sure you've chosen that configuration for a reason, but I think it's a good example of why stripes over mirrors can be a better choice for some applications.

You are running raidz2(7x4TB)+raidz2(8x2TB). Let's say that instead it was 3x(mirror(2x4TB))+4x(mirror(2x2TB)). Your capacity is 32TB as-is, or 20TB as mirrors: obviously that's a huge loss, and factoring in heat/electricity/performance/reliability, it's likely that the raidz is a good choice for a home setup. Bandwidth would also be more than sufficient for home use.

But as you mention, the upgrades either take forever (one drive at a time) or require a ridiculous number of free ports (add 7x at once?!). Even if you were to do them all at once, it would still be a fairly slow process with a massive performance hit.

On the other hand, with mirrors you can increase capacity 2 drives at a time, and at that level it's reasonable to leave both old drives active as part of the "mirror" (now 4-way) for some time. This is my preferred approach: new drives get added to a mirror set and run along with the system for a month or two. This stress-tests them, and if at any point there are warning signs, the new drives can be dropped out immediately. If all is good after the test period, the old 2x of the mirror are removed and the space is immediately available (autoexpand=on). The process can then be repeated, as sketched below. Overall it takes as much or more time than your approach, but the system is completely usable during that time with no real performance hits, and of course overall system performance is substantially improved, with the equivalent of 7 devices running in parallel instead of 2.
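
Roughly, each round looks like this (pool and device names are mine):

# let the pool grow automatically once every device in a vdev is larger
zpool set autoexpand=on tank
# grow an existing 2-way mirror into a 4-way mirror with the new drives
zpool attach tank gpt/old0 gpt/new0
zpool attach tank gpt/old0 gpt/new1
# ...run for a month or two, watching zpool status and SMART...
# then drop the old pair and the extra capacity appears
zpool detach tank gpt/old0
zpool detach tank gpt/old1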

There are definitely situations in which raidz2/3 makes more sense than mirrors, but if you're regularly expanding or looking for performance, I think the balance favors mirrors.

Comment Re:above, below, and at the same level. ZFS is eve (Score 1) 370

Have you confirmed using a zvol underneath a zpool, and if so, was it a different zpool?
I've wanted to do that in the past, but it was specifically blocked. It's a pretty ugly thing to do, but it does give you a "new" block device that could be imported as a mirror on demand. With enough drives in the zpool, that new device is nearly independent of its mirror, from a failure perspective.

Comment Re:production ready? (Score 1) 370

Is it a problem to add them by ID? I intentionally use partition IDs because they're stable, and that works well in both Linux and FreeBSD, though the FreeBSD people seem to prefer labels or raw device names.
Regardless, every import should bring the same pool online in the same way, no matter what the device names are.
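
For example, something like this (device paths are illustrative):

# Linux: stable /dev/disk/by-id paths instead of sdX names
zpool create tank mirror /dev/disk/by-id/ata-EXAMPLE-AAAA /dev/disk/by-id/ata-EXAMPLE-BBBB
# FreeBSD: GPT labels work the same way
zpool create tank mirror gpt/disk0 gpt/disk1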

Comment Re:I agree... (Score 1) 370

Maybe your ZIL comments are specific to Linux? It used to be the case in FreeBSD that you had to have the ZIL present to import the pool, and a dead ZIL was a very big problem, but that was many versions ago (~3-4 years?). I personally went through this when a ZIL died and the pool was present but unusable. I was able to successfully perform a zpool version upgrade on the "dead" pool, after which I could export it and re-import it, fully functional, without the ZIL.

Note that this was NOT a recommended sequence of operations, and I wouldn't suggest it unless you have no choice.
