I wasn't aware that smart people weren't allowed to engage in non-productive recreation. When was this rule brought in?
Oh, to actually address the parent:
The bulk of programming jobs have nothing at all to do with math beyond the high school level.
It's mostly counting beans and keeping records. Really, it is.
You're right. So, if you want a bulk job counting beans and keeping records, don't learn math. If you want a cool career with lots of interesting stuff, get good at math.
In my field, good math skills mean the difference between running a million iterations at the cost of many hours of computing time and doing some stochastic calculus to produce a (better) result in seconds. In a past job, they meant the difference between designing a naive algorithm to spot simple patterns in usage data and doing some fancy math to come up with actually useful metrics. In another job, they were the difference between failing to understand what the accountant client was trying to explain (and grinding through numerous testing iterations before producing something that, hopefully, met requirements) and actually following the math, so I could ensure my algorithms matched the scenarios we were modelling.
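To give a toy flavour of that first trade-off (my own illustration, nothing from the actual work): estimating P(Z > 1) for a standard normal by brute-force sampling, versus just evaluating the closed form:

```python
import math
import random

# Brute force: a million Monte Carlo samples.
random.seed(0)
n = 1_000_000
hits = sum(1 for _ in range(n) if random.gauss(0, 1) > 1.0)
mc_estimate = hits / n

# A little math instead: the exact answer via the normal tail,
# P(Z > 1) = erfc(1 / sqrt(2)) / 2, computed in microseconds.
exact = 0.5 * math.erfc(1.0 / math.sqrt(2))
```

The sampling loop burns a million RNG calls to get a couple of decimal places; the one-line formula is exact. Scale that gap up to real models and it's hours versus seconds.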
In my experience, good math skills have been the difference between being a relatively unproductive base-wage coder and being an innovator with a reputation for really great work - and it also means that someone else gets stuck with shuffling data while I get to work on interesting problems and learn lots about different subject domains. I've gotten to work on anti-money-laundering systems and on weather and pollution modelling - gotten company time to do my own research, and been sent to training programs and conferences (often at swanky hotels!)
In summary, learn math.
The fact is, once most of my group of friends made it onto Facebook, that's where the event organising started happening. There's one girl in my main circle of friends who often misses invites because she only checks Facebook once every few weeks. We try to remember to call her, but our event-organising is so streamlined and otherwise effective using Facebook that we often forget there's an extra step - and our group isn't organised enough to make sure someone has done it, so often everyone assumes that someone else will have called her.
We don't really deal with smaller enterprise, so I'm not sure how well my experience will relate. We tend to find that our clients treat "our software won't work with your PC" types of problems as OUR problem, not theirs, and something we should address, not them. I'm yet to see a client bring it up - we have to be pretty pro-active, and we've been caught in situations once or twice where we've had to scramble to support older browsers at very short notice, because they were running very old versions and gave us the choice between making it work on their systems or them considering us in breach.
The specific situation that seems to cause it is that site managers are responsible for the budget for purchasing IT infrastructure, but central IT manages the infrastructure. You get a few site officers refusing to retire PCs that are well past their expected lifetimes, and central IT says "sure you can keep using it, but forget updates: the latest OS we've tested on that hardware is XP without SPs, and the latest browser we've tested on XP without SPs is IE6, so that's what you stay with." I think central IT are trying to force the site managers to spend budget on IT gear, but IT is often not involved in our proposals (they just manage the infrastructure), so all that happens is whichever higher-up decided to go with our solution tells us, basically, "I don't care if there's a fight between a site officer and our IT dep't over budget, or if MS have deprecated that technology, or whatever other excuses you have - we've bought your solution for our EXISTING infrastructure, and you need to make it work on that."
I'm a programmer, and I find that far too many of my colleagues assume that any and all meetings are inherently worthless. I've worked in teams who got great value out of well-directed meetings. We avoid double-handling problems, we get better use out of the various experience our team members have... It can just work so well.
It's such a shame that so many places get it so wrong, and so much IT talent has never experienced the increased productivity you can get out of meetings done right.
Anyone who gets that UI overhauls/rewrites happen frequently, but DOESN'T use a layered architecture to keep the UI layer really thin, is an idiot.
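For what it's worth, here's the kind of thing I mean by keeping the UI layer thin, sketched in Python with made-up names (a sketch of the principle, not anyone's actual architecture):

```python
class AccountService:
    """Service layer: all the real logic lives here, with no UI concerns."""

    def __init__(self):
        self._balances = {}

    def deposit(self, account: str, amount: int) -> int:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balances[account] = self._balances.get(account, 0) + amount
        return self._balances[account]


class CliUi:
    """Thin UI layer: parse input, call the service, format output.

    Because nothing but parsing and formatting lives here, swapping this
    for a web or GUI front end leaves AccountService completely alone.
    """

    def __init__(self, service: AccountService):
        self._service = service

    def handle(self, line: str) -> str:
        account, amount = line.split()
        return f"balance: {self._service.deposit(account, int(amount))}"


service = AccountService()
ui = CliUi(service)
```

When the inevitable UI rewrite comes around, only the `CliUi`-shaped class gets thrown away.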
We've found entirely the reverse re: enterprise users, albeit with a different plugin. Enterprise users are the ones who force OUR hand. They generally tell us what browser versions and plugins are available in their SOE, and we have to support that or lose the sale. Our clients are exclusively larger enterprises, and our success rate at saying "you just need to install [x] on the machines you're going to use this from" has been zero so far. As a rule of thumb, if it doesn't run on IE7 with Flash installed and nothing else, you're gonna miss some enterprise clients. We've just spent 18 months fighting to get our last client to accept us dropping IE6 support: even though they didn't have any deployed IE6 machines left, they wanted it in the contracts anyway.
Agree completely with you about end users. Most people don't see "you can just install this plugin, restart your browser, and this will work". They see "this doesn't work".
I know I'm just jumping on the band-wagon here, but I'm a
Then again, most products I've worked on with a focus on having a great user experience tend to undergo pretty massive UI overhauls every 18 months to three years, and it's pretty common to use different technologies at each iteration. Being forced into changing UI platforms shouldn't come as any sort of surprise to you.
In the programming world, I always got the impression that, collectively, we respected the self-taught coder more than one who spent four years in school being spoon fed how to code.
You've created a false dichotomy. Spending four years doing a degree, for most of the people I hung out with, was nothing about being spoon-fed how to code - most of the people who needed that failed out and went elsewhere. Some of them struggled through. Lots of us were good coders long before we went to uni, and we breezed through and spent most of our time messing about with stuff that interested us: we often got worse grades than the ones struggling through, because they were focusing on meeting the criteria while we were off messing with something fun.
Most of the (good) coders I've ever worked with have been a mix of the two. They taught themselves to code, then went to uni and learned all sorts of new and interesting stuff about coding, as often from their fellow students as from the courses. I've met good coders without training, but I often (not always!) find that a lack of 'four years in school being spoon fed how to code' leads to all sorts of bad habits, poor practices, and general amateurishness. NOT, by any stretch, always! There are great coders who are entirely self-taught - but I'm gun-shy of them, because I've seen some of the bird-nests these people put together.
Agree, but I have an insight into the 'appearance' of anti-intellectualism. We have no respect for the traditional signs of an intellectual: research papers, degrees, citations, accreditations are meaningless: we just care about how GOOD you are. No degree can tell us that: all it tells us is that you satisfied some panel or series of lecturers who were interested, not in your ability, but in whether you satisfied some set of criteria that probably aren't really all that relevant.
It can easily LOOK like anti-intellectualism, but it's just that we'd prefer to judge for ourselves, thank you very much, and not defer to the opinions of a bunch of people we don't know.
I spent a big part of my life as a Catholic. Fairly early on, I realised that there needed to be "the real me" and "the me I pretended to be to the church".
Wow. Just wow. I NEVER learn anything from
It's worth noting that this benefit relies on your application server not being compromised: if an attacker owns that, they can change the code to move the client-based hash step to the server (changing the client AND server code) and still see passwords. So you're really only protecting against network-sniffing attacks, which are USUALLY prevented by SSL anyway. This actually gives me an idea, but I'll have to think about it. Something along the lines of using an MD5 of the sign-in page itself as a part of the process, so changing the page will break things. That's obviously vulnerable to exactly the same attack, but perhaps there's an extension of this which might work.
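To make the scheme under discussion concrete, here's a rough sketch of the client-side pre-hash idea. All the function names are mine, and it's an illustration of the mechanism rather than a recommended design:

```python
import hashlib
import hmac
import os

# Sketch: the client sends H(password) instead of the password itself;
# the server then salts and stretches that value again before storing.
# A sniffer on the wire only ever sees the client-side hash.


def client_side_hash(password: str) -> str:
    """What the browser-side code would compute and transmit."""
    return hashlib.sha256(password.encode()).hexdigest()


def server_store(client_hash: str, salt: bytes) -> bytes:
    """What the server stores: a salted, stretched hash of the client hash."""
    return hashlib.pbkdf2_hmac("sha256", client_hash.encode(), salt, 100_000)


salt = os.urandom(16)
stored = server_store(client_side_hash("hunter2"), salt)

# Login: recompute from the submitted client hash and compare in constant time.
submitted = client_side_hash("hunter2")
ok = hmac.compare_digest(server_store(submitted, salt), stored)
```

Note the limitation from the post applies exactly here: an attacker who owns the server can just serve altered client code, so this only defeats passive sniffing. The client hash also becomes password-equivalent for replay purposes.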
You have also prevented plain-text reveal in the situation where someone somehow intercepts the post-SSL stream but can't alter the application, which is certainly a possible scenario.
There's a major benefit, with this scheme, if you're using a dedicated ssl server and relying on a secure network behind that (which is not uncommon in higher-load applications) - compromise of the ssl server doesn't lead to compromise of plain-text passwords. The attacker would need to take the next step and own the application servers behind that, and given that this scenario only crops up in high-volume load-balanced systems, there are likely lots of identical systems to deal with, and (hopefully) switched-on administrators and security experts, so adding another step like that could vastly decrease the chance of a complete compromise. The attacker would already own login details to the attacked site (they could replay hashes from the owned SSL appliance), so there's every chance they'll take that and never even try to compromise the application code itself, thus never leading to the plain-text reveal.
You've basically described how it usually works, except that instead of having the client perform a hash, we have the client encrypt the communication over SSL. The claimed advantage, that the password can't accidentally end up in a log file, just means that the hash the client sends ends up in the log file instead of the password. I'm worried that you're adding complexity to your code in order to prevent an avoidable bug; it seems like you'd be better off just ensuring that sensitive information doesn't show up in your logs (which is a crucial step in avoiding security holes; it's specifically addressed in, for example, the PCI security standard).
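By "ensuring sensitive information doesn't show up in your logs" I mean scrubbing at the logging layer. A minimal sketch (the field names and regex are my assumptions, not anything prescribed by PCI):

```python
import logging
import re

# Redact anything that looks like a credential in a query-string-style message.
SENSITIVE = re.compile(r"(password|passwd|pwd)=[^&\s]+", re.IGNORECASE)


class ScrubFilter(logging.Filter):
    """Rewrites log messages so credentials never reach the handlers."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True


logger = logging.getLogger("app")
logger.addHandler(logging.StreamHandler())
logger.addFilter(ScrubFilter())

# The emitted line contains "password=[REDACTED]" instead of the secret.
logger.warning("login failed for query: user=bob&password=hunter2")
```

One filter at the root of your logging config beats auditing every call site for accidental leaks.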
There should be one salt per user, not one per application. This means that the whole effort to generate a rainbow table is only applicable to the one user you're trying to recover the password for; the rainbow table for the next user will be totally different, because there's a new salt. This means that all of your work to hack one account can't be re-used for the next. Salting isn't about preventing this sort of attack; it's about multiplying the effort to compromise n accounts by n. If it takes the attacker 5 days to compute a nice big look-up table, they now have to repeat that per account, instead of having now compromised every account.
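A quick sketch of the per-user-salt point (my own toy example using the stdlib's PBKDF2, not anyone's production scheme):

```python
import hashlib
import os

# One salt PER USER, generated at registration and stored alongside the hash.


def hash_password(password: str, salt: bytes) -> bytes:
    # A deliberately slow KDF; 100k iterations multiplies the attacker's
    # cost per guess, and the salt makes precomputed tables per-user.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


alice_salt = os.urandom(16)
bob_salt = os.urandom(16)

# Even with the identical password, the stored hashes differ, so a
# lookup table built to crack Alice's hash is useless against Bob's.
alice_hash = hash_password("hunter2", alice_salt)
bob_hash = hash_password("hunter2", bob_salt)
```

That's the whole trick: the attacker's table-building effort is spent per account, not once for the entire database.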