Comment: Only where PKI is pervasive (Score 1) 601

by ComputerizedYoga (#38431502) Attached to: Do Slashdotters Encrypt Their Email?

On my work network, we've got an integrated PKI that makes it easy for people to exchange public keys. If I'm sending someone a password or other sensitive information, I'll encrypt it to their key there. If I'm just talking to someone (i.e., nothing sensitive), encryption is off and signing is on. If I'm sending from my personal email, the only recipient I encrypt to is my work email.

I think the big reason that email encryption in general hasn't taken off is that exchanging keys is a huge pain. Some keyserver attempts have been made, but frankly there hasn't been enough adoption in any circle I've seen to really call it a success. The only time this stuff seems to really work well is when there's a corporate directory and a mandate from management that says "you will get a PKI certificate, and you will publish it on the global address list".

Comment: Re:Has anyone attempted to figure out... (Score 1) 260

by ComputerizedYoga (#37948432) Attached to: Pancake Flipping Is Hard — NP Hard

So the paper talks about SBPR (sorting by prefix reversals) and MIN-SBPR. The question is not "here, sort this stack of pancakes"; it's "determine the minimal number of flips needed to sort an arbitrary stack of pancakes of size n".

If I'm following this, then sorting the stack itself is relatively easy (as you said, n^2). Figuring out the optimal sorting is apparently what's hard.
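To illustrate the distinction, here's a sketch of the easy half: the classic O(n^2) pancake sort, which sorts any stack in at most 2n - 3 prefix reversals but makes no attempt to find the *minimum* flip sequence (finding that minimum is the NP-hard part).

```python
def pancake_sort(stack):
    """Sort a stack of pancake sizes using only prefix reversals.

    Repeatedly flip the largest unsorted pancake to the top, then
    flip it down into place. At most 2n - 3 flips, but not minimal.
    """
    flips = []
    for size in range(len(stack), 1, -1):
        # Find the largest pancake in the still-unsorted prefix.
        i = stack.index(max(stack[:size]))
        if i != size - 1:
            if i != 0:
                stack[:i + 1] = reversed(stack[:i + 1])  # flip it to the top
                flips.append(i + 1)
            stack[:size] = reversed(stack[:size])        # flip it into place
            flips.append(size)
    return flips

stack = [3, 1, 4, 2]
flips = pancake_sort(stack)   # stack is now [1, 2, 3, 4]
```

Whether `flips` happens to be the shortest possible sequence for a given stack is exactly the question the paper shows is hard to answer in general.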

Comment: Re:Has anyone attempted to figure out... (Score 2) 260

by ComputerizedYoga (#37948240) Attached to: Pancake Flipping Is Hard — NP Hard

"size of the problem to some power" is the definition of polynomial time. Polynomial time problems are generally considered "easy" -- for example, your typical sorting algorithm is between n*log(n) and n^2. These grow slowly enough that general polynomial algorithms, even with relatively high exponents (like n^3 and n^4) are doable for reasonably large input sets.

The time it takes to solve an NP-hard problem is more in line with "a constant raised to the power of the problem size". So doubling the input size squares the computation involved. So like ... no known general SAT algorithm does meaningfully better than 2^n.

So what that means is: going from input size 10 to input size 20 requires roughly a thousand times more computation (2^10 ≈ 1024), and going from input size 20 to 21 requires twice what input size 20 took. This is WAY worse than n^x (where x is a constant).
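A few concrete numbers make the gap obvious -- here's n^3 (a "bad" polynomial) against 2^n:

```python
# Polynomial vs exponential growth: n^3 stays manageable while
# 2^n explodes; doubling n squares the exponential's value.
for n in (10, 20, 30, 40):
    print(f"n={n:2d}  n^3={n**3:>8,}  2^n={2**n:>16,}")
```

At n = 40 the cubic is still at 64,000 operations while the exponential is past a trillion -- and note that 2^40 is exactly (2^20)^2, which is the "doubling the input squares the work" property.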

Comment: Re:screen (Score 1) 307

by ComputerizedYoga (#31035456) Attached to: Keep SSH Sessions Active, Or Reconnect?

The answer to the tinfoil-hat question: SSH protocol 2 uses Diffie-Hellman key exchange to establish a session key before anything sensitive starts flowing. It's pretty resistant to eavesdropping, since the actual key is never transmitted. You shouldn't be shy about starting up new SSH connections over untrusted links, as long as the destination's host key is already in your known_hosts.
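The "key never transmitted" property is easy to see in a toy sketch. This uses a deliberately tiny prime purely for illustration -- real SSH uses large standardized groups or elliptic curves, and the group parameters here would be trivially breakable in practice:

```python
import secrets

# Toy Diffie-Hellman exchange. Only p, g, A, B ever cross the wire;
# the shared secret itself is computed independently on each side.
p = 4294967291          # largest prime below 2**32 -- far too small for real use
g = 2                   # base (illustrative)

a = secrets.randbelow(p - 2) + 1   # client's private exponent (never sent)
b = secrets.randbelow(p - 2) + 1   # server's private exponent (never sent)

A = pow(g, a, p)        # client -> server, public
B = pow(g, b, p)        # server -> client, public

# Both sides derive the same session key; an eavesdropper seeing only
# p, g, A, B would have to solve a discrete log to recover it.
shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server
```

The eavesdropper on the untrusted link sees the public values but neither private exponent, which is why watching the handshake buys them essentially nothing.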

Comment: Re:You're probably not that special.. (Score 1) 307

by ComputerizedYoga (#31035402) Attached to: Keep SSH Sessions Active, Or Reconnect?

Two points of note:

First: changes in traffic pattern are also information leakage. It's all a balancing act. If you're worried about your enemy discovering your location, you stay off the radio. If your location is known (e.g., you're at a base), and you want to cover any activity changes, you might stay on all the time and lay down a noise floor to mask those changes. If there's scarce spectrum for you to communicate in, that's another consideration. But none of that really applies to non-military use of SSH.

Second: SSH2 handshakes use Diffie-Hellman key exchange to establish a session key. The eavesdropper waiting for the next handshake gets nothing useful -- they're going to have to do an offline brute force of the stream they captured, and that would only reveal that one session key. Further, SSH re-keys every hour or every 1 GB by default (configurable: RekeyLimit), so an offline brute force of a collected data stream exposes relatively little to an attacker. And offline brute force of even a 128-bit AES-CTR stream is still somewhere deep in the realm of impractical.

Probably the only real difference between reconnecting and keeping it alive is that depending on how many times you connect, you may deplete your entropy pools faster doing one versus the other.

Comment: Re:I'd say (Score 1) 264

by ComputerizedYoga (#30192472) Attached to: Best Practices For Infrastructure Upgrade?

It's still dumb. I don't want to belabor the point here, but just from a financial perspective, accounting only for power and cooling, if you look at the 2-year mark it's better to buy a $7000 "new hottie" that draws 300 watts at the wall than to spend $1000 on ten "old crappies" that draw 200 W each at the wall*. And that's not even beginning to look at things like labor hours, ownership costs in years 3-5, space, storage, downtime due to a lack of in-system redundancy, and software licensing costs. That's "total cost of ownership", which is potentially much larger than the sticker price of the system -- especially when the counter-case is based on the assumption that a certain brand of hardware, and every component therein, is made out of unobtainium.

*: when you do this calculation, I recommend $0.10/kWh, a 1:1 power-to-cooling cost ratio, the assumption that you actually need the systems you're acquiring, and acceptance of the idea that a pair of 2.53 GHz quad-core Nehalems with 24 gigs of RAM is more than a match for 40 early-Netburst 2.4 GHz Xeons with 20 gigs of total RAM.
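The footnote's arithmetic is easy to sketch. Using exactly those assumptions ($0.10/kWh, 1:1 cooling so every watt is paid for twice, systems running 24/7), the two options land within about $50 of each other right around the two-year mark, and the new box pulls thousands of dollars ahead every year after:

```python
# Back-of-the-envelope power + cooling cost, per the footnote above.
RATE = 0.10          # $ per kWh
COOLING = 2.0        # 1:1 power-to-cooling ratio => pay for each watt twice

def total_cost(sticker, watts, years):
    """Sticker price plus electricity + cooling over the given lifetime."""
    kwh = watts * years * 365 * 24 / 1000
    return sticker + kwh * COOLING * RATE

for years in (1, 2, 3, 5):
    new = total_cost(7000, 300, years)         # one new 300 W server
    old = total_cost(1000, 10 * 200, years)    # ten old 200 W servers, $1000 total
    print(f"{years}y  new=${new:,.0f}  old=${old:,.0f}")
```

And that's before labor, licensing, space, and the rest of the TCO items the comment lists, all of which push further in the new box's favor.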

Comment: Re:Trying to make your mark, eh? (Score 1) 264

by ComputerizedYoga (#30192180) Attached to: Best Practices For Infrastructure Upgrade?

Red Hat just released a real, full-on production competitor to VMware, called RHEV (Red Hat Enterprise Virtualization). Like ... last month. It's built on KVM and designed for multi-host setups, with a management console and the must-have multi-host virtualization features (incl. live migration).

Not saying it's without problems, but my office was in the beta and we're pretty much sold on it.

Basic RHEL 5.4 KVM virtualization, yeah ... I'd lean away from that for at least as long as it takes to absorb the contents of the virtualization guide...

Comment: Re:I'd say (Score 1) 264

by ComputerizedYoga (#30192052) Attached to: Best Practices For Infrastructure Upgrade?

I'm not sure I'm convinced that it's really a good idea replacing 7 year old hardware with 5-6 year old hardware. Especially given that a single slightly-inexperienced sysadmin doing the system installs and upgrades in question is probably going to have their hands full for a year or so just on the software side. By the time the first wave of upgrades is done with, you're looking at hardware that's older than the stuff you're trying to get rid of was when you started the process.

Further, old CPUs have comically bad performance compared to the latest and greatest -- literally an order of magnitude worse than current tech. Moreover, they don't support a lot of things that new systems do. That 10-pack you list isn't going to support the virtualization extensions that make virt compelling on modern hardware. They're not going to support enough RAM to let you do useful virtualization, and they don't ship with enough headroom for any modern OS running a decently well-utilized application stack to be very happy. They probably don't even support 64-bit.

If you want cheap throwaway hardware to make a test lab out of, off-lease/lifecycled hardware's great. If you're doing things that live in production space, you might as well just bite the bullet, lay out a bit of capex and do it right the first time.

Comment: Re:Huh? (Score 1) 179

by ComputerizedYoga (#28781037) Attached to: Adobe Chided For Insecure Acrobat Reader

There was a time when we didn't have the internet and software shipped on floppies or CDs, so programmers were expected to get the software working 100% out the door. No second chances. i.e. The same constraints we hardware engineers have to deal with - get it right out the door.

Broken releases that need to be updated in the first couple days out are definitely problematic, as are regressive patches, but the "good old days" when people weren't expected to have internet connections to update stuff still had their (numerous) vulnerabilities.

Writing secure code is hard. In particular, writing code that protects against whole classes of attacks that weren't even around when you wrote your code is ... challenging, to say the least.

While it'd be nice if some of the worst offenders spent a little more time on QA before they start pushing "gold" releases, expecting perfection in nontrivial software at release time, or any other time, is a joke.

And hardware manufacturers aren't immune to that either. Why do you think BIOS and firmware updates and microcode patch mechanisms exist for most nontrivial hardware devices?

Comment: Re:Three options (Score 2, Informative) 1032

by ComputerizedYoga (#26835695) Attached to: How To Keep Rats From Eating My Cables?

Rats will chew through urethane foam like it's made out of ... err ... urethane foam. It's a good step, but it's insufficient against any chewing rodent who thinks there's supposed to be a path there.

As the AC nearby says, steel wool shoved into the gap that you're foaming shut will solve that problem though.

Comment: Re:The Simple Option (Score 1) 1032

by ComputerizedYoga (#26835673) Attached to: How To Keep Rats From Eating My Cables?

There's a lot of reasons to avoid poison that have nothing to do with arguments about humane treatment of animals.

Consider: secondary killing. Lots of things will opportunistically feed on dead or dying rats, and get poisoned and sick or die as a result. If you succeed in killing half the rat population within 500 yards at the cost of killing 90% of the predators that eat them within half a mile, the rat population will rebound faster and stronger than the predator count will, and you'll end up fighting bigger waves of insurgent rats. Of course, that only applies to areas where there are both rats and rat predators, so if you're in a massively built-up urban area with little or no green space, you can dismiss that one.

Then there's the whole carcass-disposal problem to deal with. Most dying rats aren't going to crawl to a convenient place to die; they'll die in the same places they live, which means wall voids, pipes, brush piles, under raised floors, above suspended ceilings ... basically every place that's hard to see into and inconvenient to clean out. And if you don't find and remove the dead rats, the place is going to REEK for a long time. Of course, that's only really a valid argument for structures people spend time in ... so if you wanted to poison the rats in your barn, it'd be a little less relevant.

If the rats are indoors, you're better off killing them where you live (out in the open) than where they live (in the walls). Snap traps are the way to go for that.

