I spent a big part of my life as a Catholic. Fairly early on, I realised that there needed to be "the real me" and "the me I pretended to be to the church".
Wow. Just wow. I NEVER learn anything from
It's worth noting that this benefit relies on your application server not being compromised: if an attacker owns that, they can change the code to move the client-side hash step to the server (changing both the client AND the server code) and still see passwords. So you're really only protecting against network-sniffing attacks, which are USUALLY prevented by SSL anyway. This actually gives me an idea, but I'll have to think about it. Something along the lines of using an MD5 of the sign-in page itself as part of the process, so changing the page would break things. That's obviously vulnerable to exactly the same attack, but perhaps there's an extension of it which might work.
You have also prevented plain-text reveal in the situation where someone somehow intercepts the post-SSL stream but can't alter the application, which is certainly a possible scenario.
There's a major benefit to this scheme if you're using a dedicated SSL server and relying on a secure network behind it (which is not uncommon in higher-load applications): compromise of the SSL server doesn't lead to compromise of plain-text passwords. The attacker would need to take the next step and own the application servers behind it. Given that this scenario only crops up in high-volume, load-balanced systems, there are likely lots of identical systems to deal with, and (hopefully) switched-on administrators and security experts, so adding another step like that could vastly decrease the chance of a complete compromise. The attacker would already own login details to the attacked site (they could replay hashes from the owned SSL appliance), so there's every chance they'll take that and never even try to compromise the application code itself, thus never triggering the plain-text reveal.
You've basically described how it usually works, except that instead of having the client perform a hash, we have the client encrypt the communication over SSL. The claimed advantage, that the password can't accidentally end up in a log file, just means that the hash the client sends ends up in the log file instead of the password. I'm worried that you're adding complexity to your code in order to prevent an avoidable bug. It seems like you'd be better off just ensuring that sensitive information doesn't show up in your logs, which is a crucial step in avoiding security holes; it's specifically addressed in, for example, the PCI security standard.
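Keeping secrets out of logs can be enforced mechanically rather than left to convention. As a rough sketch (the filter class and the field names it matches are my own illustrative choices, not from any standard), a logging filter can redact password-like fields before any record reaches a handler:

```python
import logging
import re

# Illustrative assumption: credentials appear as key=value pairs whose key
# looks password-like. Real systems would match their own field names.
SENSITIVE = re.compile(r'(password|passwd|pwd)=([^&\s]+)', re.IGNORECASE)

class RedactingFilter(logging.Filter):
    """Scrub password-like values from every log record that passes through."""
    def filter(self, record):
        record.msg = SENSITIVE.sub(r'\1=<redacted>', str(record.msg))
        return True  # keep the record, just with the secret scrubbed

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("login attempt: user=alice&password=hunter2")
# logged line contains password=<redacted>, not the real value
```

Attaching the filter to the handler (rather than one logger) means anything routed through that handler gets scrubbed, which is closer to the "make the bug impossible" spirit than auditing each call site.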
There should be one salt per user, not one per application. This means that the whole effort of generating a rainbow table is only applicable to the one user whose password you're trying to recover; the rainbow table for the next user will be totally different, because there's a new salt. All of your work to hack one account can't be re-used for the next. Salting isn't about preventing this sort of attack; it's about multiplying the effort to compromise n accounts by n. If it takes the attacker 5 days to compute a nice big look-up table, they now have to repeat that per account, instead of compromising every account at once.
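As a minimal sketch of the per-user scheme using Python's standard library (the function names and the iteration count are illustrative, not a recommendation), each user gets a fresh random salt stored alongside the hash:

```python
import hashlib
import hmac
import os

def new_user_record(password):
    """Create a fresh random salt for this user; store (salt, hash) together."""
    salt = os.urandom(16)  # unique per user, saved next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored):
    """Recompute with the user's own salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

Note that two users who happen to pick the same password end up with different stored hashes, which is exactly the property that forces the attacker to redo the table per account.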
Everyone knows not to store passwords in a database. You store hashes in a database instead, which is what your link (and TFA) are talking about.
It doesn't even require a replay attack. We're talking about what happens if the database of stored hashes is compromised, and if the client does the hashing instead of the server, you don't even need the password. You can just submit the hash from your stolen database to sign in as the user.
Uh, what? The browser hashes the password? Now you don't even NEED the password; you have the hash, and that's all the client needs to submit to gain access! You just pretend that you hashed the password and transmit the hash. No password needed! Unless, of course, you submit the password AND the hash, but that doesn't gain anything over just submitting the password (except perhaps proving that the client has a working hash function).
Remember, we're talking about what happens when the database of stored hashes is compromised, and having the browser do the hashing makes this scenario MUCH worse.
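A toy illustration of the point (all names here are mine, invented for the sketch): if the client does the hashing, the server stores exactly what the client submits, so the stored value itself is the credential.

```python
import hashlib

stored_db = {}  # username -> value the server stores

def register_client_hashing(user, password):
    # "Client" hashes; the server stores the received hash verbatim.
    stored_db[user] = hashlib.sha256(password.encode()).hexdigest()

def login_client_hashing(user, submitted):
    # The server can only compare the submitted value to the stored one.
    return stored_db.get(user) == submitted

register_client_hashing("alice", "hunter2")

# A legitimate client hashes the password before sending it...
assert login_client_hashing("alice", hashlib.sha256(b"hunter2").hexdigest())

# ...but an attacker holding the stolen database needs no password at all:
assert login_client_hashing("alice", stored_db["alice"])
```

This is the classic pass-the-hash problem: the stolen "hash" works as a password-equivalent, so the database leak is a full account compromise with zero cracking effort.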
A correctly-implemented salted-password scheme uses a different salt per user - it doesn't even matter if it's trivial to predict. The point is that it multiplies the computational load to compromise n users by n. You can't generate a single look-up table any more.
Further, the salt is combined with the key, not the user's password. If it was just combined with the password before the encryption, when you used your look-up table to find out the (password+salt) used to generate a particular hash, you would then de-combine the known salt and have the password! Simple.
Finally, because the salt is combined with the encryption key, using one salt for your whole system would be no different to just using a different key.
With the correct scheme, adding a per-user salt means (even if the salt is trivial to discover) you are using a DIFFERENT key to compute the hash for each user. You may still be able to generate a large look-up table to compromise an individual hash, but it will only work for ONE user account (barring salt collisions), so a 24-hour run (your number) will be required PER USER ACCOUNT. A few dozen, or even a few hundred, accounts may still be compromised, but that's a much smaller fraction than if you weren't using salts (or were using them incorrectly, as is so common).
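To make the multiplication concrete, here is a toy sketch (tiny wordlist, fast hash; real attacks use far larger tables and real schemes use slow hashes) that folds the salt into the keyed-hash key, per the scheme described above. A table precomputed for one user's salt is useless against another's:

```python
import hashlib
import hmac

def salted_hash(salt, password):
    # The salt is folded into the keyed-hash key, as described above.
    return hmac.new(salt, password.encode(), hashlib.sha256).hexdigest()

wordlist = ["hunter2", "letmein", "password1"]

# The attacker precomputes a look-up table for user A's (known) salt...
salt_a, salt_b = b"saltA", b"saltB"
table_for_a = {salted_hash(salt_a, w): w for w in wordlist}

# ...which cracks user A's hash instantly:
assert table_for_a[salted_hash(salt_a, "letmein")] == "letmein"

# ...but user B's hash of the SAME password isn't in the table;
# the entire precomputation must be redone for salt_b.
assert salted_hash(salt_b, "letmein") not in table_for_a
```

One table per salt, so n accounts cost n precomputation runs, which is exactly the multiplier the comment describes.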
I don't care enough to read through and make sure I'm not repeating what's already been done to death. I've worked for a few small companies, and seen some things work and some things fail dismally.
One thing I have definitely seen is that the typical employee has motivation for about 20-30 real, productive work hours per week. Anyone who puts in a real, near-peak 40 hours is a superstar, and I'll do anything to hang onto those people. Regardless of how much someone shines during an interview, it's very hard to judge this, and I find most new hires tend towards about 20 hours.
The absolute worst way to increase this is to just ask them to do it. Especially when they already aren't being particularly productive during part of their week. Their productivity will sit at about the same level. Their 'sitting at their desk pretending to work' time will increase. They'll get home later, have less leisure time, and their productive hours will start to creep down.
What I have seen work is incentive-based volunteering. I worked for one company for a while where I tended to put in a few extra hours during the week (I probably averaged 10-hour days, when I only needed 8), and I felt more productive there than anywhere else I've worked. My salary was actually a little below what I could have gotten elsewhere, but the team culture was amazing.

4pm on Friday was officially Beer (/ non-alcoholic alternative) O'clock. There were plates of fruit and pastries in the kitchen every morning. There was an amazing coffee shop across the road, and we had an account there and were encouraged to hold small-group meetings there. The boss put on a barbecue every couple of weeks on the weekend, and he did all the cooking (for 15+ people) himself, and the food was VERY good (large, high-grade steaks; expensive, well-prepared fish; oysters; and so on). If it weren't for that unfortunate matter involving the FBI, our Federal Police (we're outside the US), and MasterCard investigators, I'd still be happily pulling 10+ hour days there.

All of that cost MUCH less than paying us for the extra time we put in, and given the salaries were a touch below average, we probably cost less overall than a typical software team who would be less happy, less productive, working 8 hours a day and not really pulling their weight. Another place I worked took everyone out water- and jet-skiing once a month (the boss owned several boats and jet-skis).
If the boss really won't look at paying you more or giving you stock (and, from what I've seen, there are lots of people who don't seem to be more motivated by more money), he should look at doing something genuine to improve his employees' lives.
The number one lesson I drum into fresh coders, when I work with them, is "email early and often". When things are running a day behind, email your manager. When you're not clear on the specs, email your manager. When you've completely and totally screwed up and affected production client data, email your manager. NOW. Your manager is there for two things (that are relevant for this):
1. Work out issues, keep the team on target, and make sure the (internal or external) clients are happy and don't even realise when the team royally screws up.
2. Weed out (/fire/let go/ditch/whatever) people who get in the way of goal 1.
If you're emailing your manager early, giving him the chance to do damage control, and letting him get you help when you need it, then no matter how badly you screw up, you're not actively getting in the way of goal number 1. If you screw up and don't keep him totally in the loop, you're making sure he can't achieve goal 1.
Email early and often.
I've been the student who desperately thought I wanted to write computer games. I've been the interviewer (for a financial software house) interviewing ex-games-programmers. I've been a team-lead mentoring ex-games-programmers. I've worked with a 1st-level phone support guy who'd spent 6 years as a hardcore C++ game developer but couldn't find any software work and had to take a support job.
First of all: tell them not to do it. The glory isn't what they think. The fun isn't what they think. The hours will suck, and the rewards will be average. Their shop will go under, and they will be competing with their 30 colleagues who are also out of work for whatever local jobs are going. They will come out as hardcore coding junkies with mad skills, and then end up taking jobs as interns under 'developers' with half their talent.
But: they will work with a bunch of young people, on crazy deadlines and massive unpaid overtime. They will meet some crazy people. They will eat a lot of pizza, and they will get free time on their competitors' games. They will be part of a tightly-knit, fast-moving industry which teaches them amazing technical skills. They will get no credit for it.
If they're sans-girlfriend, have few commitments, and want a few years of madness which they'll walk out of at the end with few rewards apart from the experience, they should pursue it. They need to know that it will suck the life out of them, they will feel under-appreciated and over-stressed, and they will probably need to rely on friends and family to get through lean times. It's an option when they're young. It's like traveling. Do it now: you won't be able to when you're older.
I'm speaking purely from a coding perspective, when it comes to skills. Maths, physics, and good coding skills. They need to know all about pointers, recursion, memory-management, event loops, and algorithm efficiency. They should pick an open-source engine or game, and try to contribute (this will help massively in landing a job).
Most importantly... they shouldn't do a FullSail course. Or whatever. Game programming is a long-term prospect for
There you go. Doing a focused course MIGHT land you a game-software job, at massive cost to your future. Doing a CS course also MIGHT land you a game-software job; there's probably a slightly lower chance (or perhaps even a slightly higher one!). But your fall-back and long-term career prospects will be massively better with CS. When you fall in love, buy a house and a puppy, and have kids, you will have career prospects at companies which leave room for those things.
I've seen it. Go the focused games-programming-course route, and you can end up with 6 years of good software development experience and still have to take a crappy support job at a company which doesn't give REAL developer jobs to people with games programming degrees, making 10k less than the graduate CS guys. It's shit-unfair, but I've seen it.
I was a software engineer working in a company which had a similar thing done to it by MasterCard (MC from here on). The circumstances may be similar or vastly different, but the program which triggered it was used mainly by online gambling services, and we provided customers with Maestro/Cirrus branded MCs.
The product was ostensibly a prepaid debit card for travellers. The melt-down started when an MC official at an international event received marketing material for the card, and called the help-desk. He was told all about the benefits of the card, including the ‘special’ benefits like being able to load gambling winnings onto it and then withdraw them as cash from US ATMs (I should stress that this was a program operated by our client, not by us; we were just the platform.)
It turns out that this is money-laundering. You just aren’t allowed to market that. So our client (who operated the program) was investigated, and then we (who owned the payment platform) were also investigated. While a handful of people were using the program legitimately, the vast majority were using it for its ‘special’ benefits. MC also found that we should have known about it, and we’d failed to do correct due diligence. The program was shut down immediately, and all cards were de-activated, as its primary purpose was to facilitate money-laundering (we received two hours’ warning, and I had a federal police officer standing behind me while I signed in and deactivated the card range). We lost our licence to access the MC network, and MC gave us 30 days to notify customers of legitimate programs and disconnect. We were successful in getting a court order extending this to 180 days.
MC has strict risk guidelines on this sort of thing. The integrity of the network is paramount: illegal money flows are targeted and stamped out vehemently. They would rather risk disconnecting thousands of legitimate cards than risk losing trust in a network which provides for billions of them.
The real problem is that it’s all private enterprise. Our contract with MC gave them all of these powers: if you don’t want to let MC have this sort of power over you, you don’t use their network. There is no right of appeal, especially for international partners (the court’s authority to even grant the time extension for our genuine legal programs was tenuous, and was only enforceable due to MC wanting to be nice to another party in the chain who was subject to Australian law).
I hope this is interesting information. If you want to know how the story ends, join the club: it’s still going. Perhaps you can visit David Tzvetkoff in a US prison and ask him if he knows.
I suppose what it really gets back to is that VISA is probably not doing this to comply with laws. They know that for an organisation their size, the best money is made in massive, highly-trusted networks which are beyond reproach. They kick off anyone who might give them any kind of a smudge. Not just pr0n, obviously; I'm talking about money-laundering-style smudges. Which is not, of course, to say that this is what ePassporte was doing: there was neither a trial nor an opportunity to defend ourselves when MC came after us. It was "we're on our way; be ready to turn them off in front of a federal police officer when we arrive." They didn't have to prove that our client was doing anything wrong. They didn't have to prove that we should have known about it. They just decided that they were satisfied, end of story. We only got the extension because the court found that MC hadn't met the requirements under the contract to terminate with 30 days' notice, and they had to fall back to the "we can kick you off just because we don't like the brand of office chairs you buy" 180 days.
You see, the thing is, this law isn't targeted at journalists, so when you say "... some hypothetical mob of journalists that are vandalizing oil booms and running their boats into the poor defenseless BP and Coast Guard ships
Ah, why do I bother? Your language clearly identifies you as a troll. Bioterror attack? Really? Yes, it's a significant event, but it wasn't exactly intentional and targeted. Negligent? Sure. But not an 'attack'.