If there was life there that escaped the current destruction, it had to have left millions (or billions) of years ago, since the star has been a white dwarf, and has been hostile to its inner planets, for a long time. That means they would likely have colonized nearby space (not at all limited to their own solar system). Keep in mind that even the Voyager probes, which weren't designed to travel to other stars, would reach the nearest stars on the order of 100,000 years. Systems using ion drives and deliberately timed gravity assists could bring that into the range of 30,000 years for something to spread out, or a few hundred years with nuclear drives of the right type. See for example the summary here: http://www.universetoday.com/15403/how-long-would-it-take-to-travel-to-the-nearest-star/. But of course we see no sign of anyone from a nearby system doing much.
Moreover, if they've had millions of years to spread out, then projects like Dyson spheres and ring worlds are obvious things to do. Systematic searches have been done, and we're very certain we don't see any Dyson spheres within 300 parsecs (about 1000 light years): http://home.fnal.gov/~carrigan/infrared_astronomy/Fermilab_search.htm. While we can't be as certain, nearby ring worlds would likely have been noticed by Kepler. Other engineering projects on that scale would also be noticed, especially because this is in our back yard. This makes the scenario unlikely.
In this case, the extremely close nature of the system, and the system's current state, mean that we can make this claim with a confidence much higher than just "we saw nothing."
Who says we'd even notice them with a 150 year delay between their actions and our ability to perceive them?
I'm not sure what you mean by this. The presence of a delay doesn't interfere with noticing things. It isn't as though 1 second goes by, then 150 years pass, then another second goes by. There's just a fixed 150 year delay (just as there's an 8 minute delay from the sun).
I agree that there is no excuse not to use bcrypt.
You can attempt basically all 8 character passwords in a few minutes per user on modern hardware (the salt adds no computational complexity, but as you say, it forces you to actually do the calculation instead of doing a lookup).
Also, the whole point is that key derivation is slow. Of course the "secret from which keys are derived" is available (it is necessarily so; it's stored, along with the cost factor, as part of bcrypt's output, for example). But the fact that you have to go through 2^N iterations, where N is usually >= 10, throws a meaningful speedbump in front of high-speed cracking. Now instead of brute forcing any given 7-character alphanumeric case-sensitive password in ~half an hour, it'll take you > 20 days on average.
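A quick back-of-the-envelope check of those numbers (the hash rate and cost factor here are illustrative assumptions, not measurements):

```python
# Rough arithmetic behind the claim above; the 2e9 hashes/sec rate
# and bcrypt cost factor of 10 are assumptions for illustration.
keyspace = 62 ** 7        # 7-char alphanumeric, case-sensitive passwords
plain_rate = 2e9          # fast hashes per second on cracking hardware (assumed)
cost = 10                 # bcrypt work factor N -> 2^N iterations per guess

plain_hours = keyspace / plain_rate / 3600
kdf_days = keyspace * 2 ** cost / plain_rate / 86400

print(f"plain hash: ~{plain_hours:.1f} hours")        # ~0.5 hours
print(f"bcrypt (cost {cost}): ~{kdf_days:.0f} days")  # ~21 days
```

Same keyspace, same hardware; only the per-guess cost changes.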
This is completely orthogonal to the fact that salted, hashed passwords have never been an appropriate way to store passwords. http://codahale.com/how-to-safely-store-a-password/
The key derivation functions can be literally several orders of magnitude harder to brute force. And their difficulty can be chosen with simple parameters, with sane defaults. There is really no comparison between a singly salted hashed password and bcrypt/scrypt.
Check out table 1 in this paper to get a sense: https://www.tarsnap.com/scrypt/scrypt.pdf
Assuming the cracker has access to the salt and a GPU, the only thing keeping users safe now is the entropy inherent in the passwords they chose.
It doesn't have to be like that. Instead of plugging in Good Salted Hashed Password Library, you can plug in Bcrypt Library or Scrypt Library *and protect even the users who chose bad passwords*.
Can you explain this a bit more?
If the hackers didn't get the salt, and only have the salted hashes, and let's say the salt is a 20 character random phrase using numbers, letters and symbols, what is the weak spot?
The size of the salt is relevant only insofar as you want to be sure that each user has their own unique salt. The salt is stored in plaintext (or, I suppose, it could be encrypted, but then the decryption key must be stored in an accessible place). The point is that the crackers must be assumed to have recovered the salts.
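bcrypt's own output format makes this explicit: the algorithm identifier, cost factor, and salt all sit in the clear alongside the digest. A sketch parsing a made-up (but structurally valid) record; the string itself is hypothetical:

```python
# Hypothetical bcrypt record in modular crypt format:
#   $2b$<cost>$<22-char base64 salt><31-char base64 digest>
# The salt and cost are public by design; only the password is secret.
record = "$2b$12$abcdefghijklmnopqrstuvABCDEFGHIJKLMNOPQRSTUVWXYZabcde"

_, ident, cost, rest = record.split("$")
salt_b64, digest_b64 = rest[:22], rest[22:]
print(ident, cost, salt_b64)  # 2b 12 abcdefghijklmnopqrstuv
```

So "keeping the salt secret" was never part of the threat model.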
So now those salts protect you against pre-computed hashes. The cracker has to attempt each password individually. But most people use one of the few thousand most common passwords, and inexpensive modern hardware lets you attempt billions of SHA hashes per second. So... salting and hashing does very little for you at this point.
Instead of salting and hashing, use a key derivation function (e.g., bcrypt, scrypt).
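The difference is one line of code. A minimal sketch using Python's standard library (PBKDF2 as a stand-in here, since bcrypt and scrypt need third-party packages; the salt size and iteration count are just reasonable-looking defaults, not a vetted recommendation):

```python
import hashlib
import os

password = b"correct horse"   # example password, not a recommendation
salt = os.urandom(16)         # unique random salt per user

# Salted fast hash: unique per user, but cheap enough that a GPU rig
# can still grind through billions of guesses per second.
fast = hashlib.sha256(salt + password).digest()

# Key derivation function: the iteration count makes every single guess
# expensive, which protects even users who chose weak passwords.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=200_000)

print(len(fast), len(slow))  # 32 32
```

Both lines produce a 32-byte digest; the only visible difference to the application is the iteration count, which is exactly the knob that slows an attacker down.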
And yet, with no extra effort on Living Social's part -- simply by choosing a bcrypt library instead of a custom hash/salt scheme -- even a user with a weak password would be protected.
So, sure, I might agree with you, but that doesn't absolve Living Social.
Why is it "fortunate" that the passwords were hashed and salted? Unless they've used key derivation functions (e.g., bcrypt, scrypt) and are actually under-selling their sophistication, this seems Very Bad for their customers.