PS: this is pretty obvious to anyone who does unit testing, but I'll spell it out to avoid any confusion... the real implementations of SSLHashSHA1.update() and SSLHashSHA1.final() would not be called in this unit test, as that would be outside its scope.
At least Apple's bug could've been caught with basic unit-testing. This is the snippet of code from Apple's bug:
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
    ...
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;    /* <-- the duplicated line: always taken, skips the final() check */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
Just implement a unit test with the following logic:
1. When SSLHashSHA1.update() is called, DO NOT return an error.
2. Expect 2 calls to SSLHashSHA1.update() and check the input parameter on each call.
3. Expect 1 call to SSLHashSHA1.final() and check the input parameters are what you'd expect.
That simple unit test would've caught this issue without duplicating any code.
What nitwit modded you insightful?
Look, I pay money for a hotel room-- that fee goes to an expected level of security. I pay nothing for Instagram's services-- no expectation of security, or of any service at all. Is instagram an online storage business? No. Therefore, the pictures you upload are not there for you to store-- they're for Instagram to use however they want... your pictures are free, as in beer.
Point of fact, since no one is paying you for your pictures, they are literally worth nothing.
Link to an XKCD in case you're still confused as to what storage, business, and free is.
It's all about leverage. And it looks to me like Instagram doesn't have much of it.
Not (necessarily) with branches.
You're right on this. When I wrote that part of the comment I was thinking about publishing changes (committing, merging, pushing, etc) to a public / shared / main branch in the repository. The idea being: don't break a branch that was not specifically created for your change.
If rewrites are too complex, you should split them up into phases. Few developers do this, but it can help you verify that the replacement code you write actually does what it's supposed to do. Testing between phases is necessary - the more you can afford to test the new code, the fewer bugs you'll find later.
As a general rule, leaving commented-out code in commits that you make to the main repository is a bad practice. The general idea is that if you do things right, by the time you commit the code you should be pretty confident it does what it's supposed to do - and I might add that this is the main reason why I think rewrites are NOT for every developer. Attention to detail and thorough testing are a must.
When you commit commented-out code, you confuse other developers and add nothing of value for them. In most cases where I've seen devs do this, it's mainly because they are afraid they might need to roll back due to a lack of testing on their side.
And as a last note: version control is there to offer roll-back support; comments are not. It's about using the right tool for each job.
But I'm Decameron!
And yes, I understand what you're saying. Given how you mentioned that writing clear and readable code is important no matter what coding standards you have, I also suspect we may not even think that differently (even if my initial statement sounded a bit more black vs white than I wanted).
One of the coding standards that I require from my teams (whenever the decision falls on me, of course) is to write self-documenting code, and avoid abbreviations as much as possible (among other requirements, but not that many). Usually just enough rules to guide an inexperienced dev towards writing more readable code - better code's another story of course.
Well my argument to man of mister e and first reply to Decameron are two different arguments.
My argument to man of mister e, was simply to imply that using features in your diff program can help you ignore meaningless differences in styles.
My argument to Decameron was in reply to his blanket statement that "the reason why coding styles exist is that they increase the readability of your code." This is an absolute statement about the nature of coding styles, which is factually incorrect, as I have demonstrated from personal experience as well as well-known failures from The Daily WTF.
I can definitely meet you halfway there - I agree that bad coding standards suck (too precise, too complex, unreadable syntax) and make the code a mess. I'm speaking from experience as well, having worked on all kinds of projects. Some of them had some pretty awful coding standards that didn't improve readability at all. But I was talking about the _purpose_ of standards. I still stand by my statement that the reason they exist is to make code more readable - even if some implementations suck.
Think of it as something along the lines of: "the reason why testing is important is that it helps you find bugs". Bad testers don't invalidate that statement.
Not sure if you are being serious with your point or not due to your case changes, but I will bite.
Just because a style is standardized doesn't mean your code is more readable using that style. In fact a lot of the styles expected of me made my code less clear, and when I chose to ignore them, my code was never touched in code reviews, because everything was clear and intuitive without conforming directly to the style.
If you personally like clear / readable code, then no standard will ever be a replacement for you.
You're missing the point. I am not claiming a particular coding style is superior, I am claiming a standard coding style across the whole code base is good - personal preferences aside.
PS: I'm talking about basic stuff here, such as standards for naming variables and constants, whether to use camel case, whether to require self-documenting code, etc.
VULNERABILITY. ATTACK. Different words. Different meanings.
Exactly! That's what I meant with my question, although I think it went unnoticed by some. You just don't find an attack!
THE reaSON WHy coDiNg standards_exist is thatTheyIncrease THE_REaDABILITY oF YOur cODe.
An attack was found in the filesystem? What's that supposed to mean?
Trying to select a language that makes up for poorly written code is a fool's game. A well written program doesn't NEED an error handler. This is nothing but poor programmers whining because they can't write good code. It's kind of like unemployed people whining because the jobless benefits aren't long enough.
Sorry but that can only come from a lack of experience. No offense intended, but your point of view is extremely naive.
For instance: you should use assertions to detect coding errors within your module / your scope of trust (e.g. the app making a call to another method in the same app), and other, traditional error-handling mechanisms outside that scope, where distrust is needed (the app shouldn't trust the server and vice-versa, the app shouldn't trust user input, a library shouldn't trust the app, config files should never be trusted to be properly formatted or contain valid data, etc). If you blindly trusted every component never to fail, you would be letting an eventual error in one component propagate to all the others. In practice this translates to unhappy clients calling you at night because your app is crashing, and you finding out hours later that someone unplugged the server or put a bad configuration value somewhere.
Now, now. Your assumption of validity is subjective, sorry. It's objectively better, because it's optimal: energy efficient, time efficient, transparent and pragmatic. And makes one look clever rather than dependent. Self-reliance FTW.
Once again, you're assuming others share your viewpoint that less human interaction is better. Google searches don't make you a better person. Being kind with replies does make you a better person.
Nope. Irony, sarcasm, even vitriol is an implicit part of the message. It is intended (rational) rather than inadvertent (emotional).
Still, it doesn't add anything good to the message, unless of course you're trying to transmit aggressiveness.
No, it's objectively the best thing to do.
We differ with some very valid points, so it's not objective, sorry.
Explain to me why it is good for the society to keep enabling, supporting or even tolerating the retarded part thereof?
Because what you're calling retarded is human interaction. You might not enjoy it, but others do, and even more in times when face to face interaction is being replaced by forums, chat rooms, etc.
Also, even assuming someone asks a stupid question, being aggressive or ironic about it is not the right kind of behaviour. The same message can be delivered without those elements.
Stupid questions from literate adults who obviously have Internet (thus Google) access
... they deserve the snide remarks they receive. When you consider he could have Googled "NPC" in less time than it took to ask a stupid question, the remark was actually rather polite.
Interaction with other humans is greatly underrated by intolerant nerds who think we should replace it with Google searches. There's absolutely no reason why you should first look for things in Google instead of asking them in a forum, other than your personal opinion that it's the right thing to do.
Ignoring the question, or replying to it would've been far more tolerant ways to react to the post.
Now then, go ahead and launch your personal attacks and invective. That's what those of your emotionally-governed, offense-driven mentality usually do when the following two conditions have been met: a) they cannot formulate an effective counter-point, and b) they are too haughty to admit when a good point has been made.
You sound like a robot, man. Chill out.