Comment Re:The whole approach is wrong (Score 2) 189

A good coder with skills in secure coding will do fine with C.

I conclude from this and the list of security vulnerabilities in real life that there is no such thing as "a good coder with skills in secure coding."

Or at least no such thing as a project that only employs or accepts contributions from such programmers.

Comment Re:Lobbying aside (Score 1) 423

No. The IRS does not know everything required to do your taxes. See also charitable contribution deductions. I could list dozens of other income, deductions, or credits the IRS is incapable of accurately calculating due to the lack of first hand knowledge

The statement was that it was true "for the vast majority of the people in the US", not for "everyone."

"Vast" is an overstatement, but it is probably true of a majority of filers. Most deductions don't apply, because only about 1/3 of filers itemize their deductions in the first place; the remaining 2/3s then won't be able to take that charitable contribution deduction. There are still credits and stuff that are more likely to apply to people who don't itemize, but I figure it's still a substantial portion.

I briefly tried to find data on what proportion of returns are 1040EZ vs the others, under the assumption that those who file the 1040EZ fall into the "the government has all the info it needs" category, but didn't see any. (Depending on how broadly you interpret things, you could go even broader than the 1040EZ -- e.g. to file for education credits you need the 1040A, but those figures are still reported on a 1098-T.)

Comment Re:Lobbying aside (Score 3, Interesting) 423

It amazes me that people *still* give the government interest free loans. Getting money back via your tax return is bad. I strive to owe the government the maximum amount I can each year without penalty.

This is what I said yesterday about this:

Here are a couple reasons why I don't worry too much about this:

1) Especially right now, that money wouldn't earn much elsewhere, especially if you put it into a safe investment. If you just keep it in a bank account, for most people it's probably barely worth it. (The average refund was about $3000 in 2011, the year I happened to see figures for. Put in an online savings account with 0.95% interest (the highest MMA/savings rate on bankrate.com) and you'd make a whopping $15.48 over the course of the year. I guess that'd buy one person a decent dinner or so.)

If this was in 2007 or something when you could get a 5% account, things would be different. (That'd be $387.)

(I guess that is the federal-only figure. Would be slightly higher with state refunds, though at least for me those have always been much less.)

2) Fewer things to worry about come tax time. There are penalties for under-withholding, at least in some conditions. Overwithholding a little protects you from these.

3) I am not even sure it's legal for me to decrease my withholding further. I've claimed the exemptions the W-4 instructions allow, and I don't know whether it's legal to claim more, or whether there's another way to reduce withholding. I've looked into it a little, but it's not worth my time to dig through the various IRS pubs.

Comment Re:de Raadt (Score 5, Informative) 304

The freelist is not an "exploit mitigation countermeasure",...

He was being somewhat sarcastic, because OpenBSD's allocator stands in contrast to this:

Read overflows don't kill canaries, so you wouldn't detect one except with an unmapped page (a phenomenon that doesn't happen with individual allocations smaller than 128KB in an allocator that uses brk(), like the default allocators on Linux and FreeBSD)

and it does try to separate allocations specifically to mitigate Heartbleed-style vulnerabilities.

In other words, the OpenBSD allocator does have exploit mitigation, and the OpenSSL freelist acts as a countermeasure to those mitigations whether that was intended or not.

The comment even says that it's done because performance on allocators is slow.

It says malloc is slow on "some platforms", yet they used the freelist on all of them and then didn't test the alternative.

But of course everyone knows it's way better to quickly implement a dramatically awful security vulnerability than to do things slowly and correctly.

Comment Re:What about a re-implementation... (Score 1) 304

(NaCl isn't C, I will point out, and is closer to a better Java implementation than it is to compiling and running C.)

I will weaken that statement a little bit: I assert it's closer to a better Java implementation than it is to a standard industrial C implementation. You could make a C implementation more like NaCl's, but depending on how you look at it, (1) it would still make Java and Flash look like Fort Knox, because it doesn't even try to protect against Heartbleed-like vulnerabilities, or (2) it would satisfy the constraints of "a safe language", but no one really uses such implementations, and I don't know of any industrial compilers that provide NaCl-style protections for standalone programs.

Comment Re:What about a re-implementation... (Score 2) 304

In fact, those two are among the most exploited pieces of userspace software on the OS.

Coincidentally, they're also the two applications most exposed to the internet. Oh wait, that's not a coincidence at all. If you put C into that role and let your browser download and run C programs, the result would make Java and Flash look like Fort Knox.

(NaCl isn't C, I will point out, and is closer to a better Java implementation than it is to compiling and running C.)

Comment Re:What about a re-implementation... (Score 1) 304

Language makes no difference. In a "safe" language, the bugs are just harder to find.

I think this is a dumb argument. Let's divide up problems into "memory errors" and "logic errors", where we broadly interpret "memory errors" as "errors your language or runtime won't let you make."

This means that if you program in C, you have to deal with memory errors and whatever logic errors you make in C. If you program in another, safe, language, you no longer have to worry about memory errors and only have to worry about logic errors in that language.

That means that unless you can argue that you'll make more logic errors in your safe language, you've already won.

Furthermore, because in C you have to spend time and effort making sure you're not susceptible to memory errors, that takes time and effort away from looking for other errors. Not only that, but automated tools have a harder time dealing with C than they do with many safe languages, which means you have less tool support.

And that's not even getting into more esoteric languages where you can encode non-trivial proofs into the type system and have the compiler prove correctness with respect to certain properties.

Comment Re:What about a re-implementation... (Score 1) 304

While it might be nice to use a safe(r) language, can't we at least have a compile option in C that adds bounds checking?

That's extremely difficult to do for C. Attempts to do it have produced multiple PhD theses, and still no one has a perfect solution. If you actually want that, then use CCured, which is probably as close as it gets.

And while you're at it, how about making it impossible to execute code that isn't in the code segment and write protecting the code segment.

I'm pretty sure that's how things are now, though I could be wrong. Non-writable code has been around for ages, and non-executable data was the whole point of NX/DEP a decade ago. I think that's pretty ubiquitous now. (I guess I've almost always heard of NX protecting the stack, but I assume you'd mark heap & static data pages NX too.)

Comment Re:What about a re-implementation... (Score 2) 304

First: Many languages are largely or even entirely self-hosted in terms of compiler and/or runtime. This means that if they provide, say, better type safety than C, those benefits carry over to the portions of the language that are self-hosted.

Second: the directness of the problem. It's easy for a C program to allow a very direct exploit, e.g. Heartbleed. I'm not saying easy to find, or that you'll necessarily get what you want to see every time, but the bug itself is about as simple as you can possibly get. If your language runtime has a bug instead, it's much more likely to be a very indirect one, because now not only do you likely have to cause a specific behavior in the program itself, but that behavior has to trip up the runtime in a way that causes the bug to lead to something bad. It isn't really fair to say this, but consider the Heartbleed vulnerability: to have the same thing happen in a safe language, not only would the program have to have the potential for a bug (unchecked input), but you'd also have to trick the runtime into dropping its bounds check.

Sure, it's not guaranteed to cure all ills, and runtimes can have bugs. But at the same time... it dramatically raises the bar.

Comment Re:Refunds indicate bad tax planning (Score 1) 632

Arrange your source deductions and installment payments so that you don't get a refund.

Here are a couple reasons why I don't worry too much about this:

1) Especially right now, that money wouldn't earn much elsewhere, especially if you put it into a safe investment. If you just keep it in a bank account, for most people it's probably barely worth it. (The average refund was about $3000 in 2011, the year I happened to see figures for. Put in an online savings account with 0.95% interest (the highest MMA/savings rate on bankrate.com) and you'd make a whopping $15.48 over the course of the year. I guess that'd buy one person a decent dinner or so.)

If this was in 2007 or something when you could get a 5% account, things would be different. (That'd be $387.)

(I guess that is the federal-only figure. Would be slightly higher with state refunds, though at least for me those have always been much less.)

2) Fewer things to worry about come tax time. There are penalties for under-withholding, at least in some conditions. Overwithholding a little protects you from these.

3) I am not even sure it's legal for me to decrease my withholding further. I've claimed the exemptions the W-4 instructions allow, and I don't know whether it's legal to claim more, or whether there's another way to reduce withholding. I've looked into it a little, but it's not worth my time to dig through the various IRS pubs.

Comment Re:never understood (Score 1) 371

Big enough to fit a week or more's load of groceries in the trunk, but I guess "Smart" Car owners don't cook anyway.

1) Based entirely on stereotypes I'd guess Smart car owners would be more likely to cook than the general population, but that's beside the point.

2) Lots of people are shopping for one. When I go to the grocery store, my groceries almost always go on the front passenger seat. I can't even remember the last grocery trip where I bought enough stuff that it wouldn't have fit in the trunk.

3) But OK, suppose you have a family large enough that you can't put a grocery load in the trunk. Why does it all have to go into the trunk? You can spill over to the passenger seat as well. Do you really usually shop with someone else? (Maybe you do. My impression is that most of the time only one person goes on grocery trips.)

4) Most of the country owns more than one car. So even if you do have an actual need for a larger car, that still doesn't mean that a Smart car is a bad purchase as long as you don't get two.

I'm not saying the Smart car is good or worth the money. I haven't driven one. But the objections based on size largely boggle my mind. How often do you see people go "I don't see why anyone would buy a motorcycle; there's so little cargo room!"

Comment Re:Summary. (Score 1) 301

You're not listening.

From my perspective, I'm not the one not listening.

My point was, and is, that the "Limited damage" case of Heartbleed is that there's a 99% chance that your keys weren't stolen. Of course, you can't verify that.

So your position is that because you can't be 100% absolutely positive that your keys weren't compromised, you should regenerate them? Because that's what I understand you to be saying.

And if it is, you'd better go regenerate your keys again. After all, you can't be 100% positive that someone didn't 0-day your system while you were reading my post. Hmm, you need new keys again. How can you be positive that the ones you just made weren't compromised too? Better make some new keys a third time.

Obviously I'm being facetious here, but there's more to it than that: if you're really, really paranoid about security, it actually makes sense to change keys occasionally even if you don't change the algorithm or implementation. Why? Contingencies: if your key is compromised, it limits exposure. Even if you assume that once an attacker can capture your key they'll be able to capture all future keys, it still limits how far back in time the attacker can retroactively reach.

Better make another new key.

Because you're not: you've done a cost-benefit analysis and concluded that generating new keys each time I tell you to isn't worth it. Just as most people hopefully do the cost-benefit analysis for the heartbeat vulnerability and determine that it is.

The "benefit" is really more of a "anti-loss" here, but it's related to the risk you assess if you stick with the old keys. If you judge that your risk of exposure is higher, than the benefit of changing keys is higher, and so you're more weighted to change keys.

Suppose you're a relatively low-importance site, and based on the importance (or lack thereof) of the information protected, you decide that if you're 95% sure or better that you weren't compromised, you're not going to bother to change keys, because the cost-benefit ratio isn't in your favor. (Remember, if you needed to be positive you're not compromised, you'd be continuously making keys. Have you made a new key yet?) Now if you have an OpenSSL deployment without a hardened allocator, you say "I'm 90% sure that I wasn't compromised." That's below your threshold, so you make new keys. But if you have a deployment with a hardened allocator, maybe you say "I'm 99% sure that I wasn't compromised." That's above your threshold, so you don't.

I made those numbers up of course, but that's the general idea. A hardened allocator provides a potential benefit. As I said in another reply, I don't see any cogent argument against that. The only objection I see is if you think that people are entirely incapable of making the kind of 90%/99% guesses I said, and thus removing the option for them to use a tighter threshold will artificially push people toward the failsafe option. (Actually that's not a bad argument, to be honest.)

Comment Re:Summary. (Score 1) 301

It would be caught when someone figured out about the exploit. It would be caught when someone decided to test the TLS Heartbeat extension for invalid behavior when given a Payload value larger than the actual payload.

There are two reasons why I stand by my point.

1) Black hats
2) Fuzz testing or accidentally-ill-formed heartbeat requests

Had either of these hit an OpenSSL deployment that used a hardened allocator, it would have created an opportunity for the bug to be caught before the recent disclosure by white hats. Had either hit an OpenSSL deployment with OpenSSL's bad wrapper, there would have been much less opportunity for discovery.

There's of course no guarantee that it would have happened, for a number of reasons, but I never said there was; I'm just making arguments about probabilities. And I honestly fail to see why you think a hardened allocator isn't valuable.

Comment Re:Summary. (Score 1) 301

Limited damage in the case of heartbleed are "you still don't know if your keys have been stolen, BUT it's less likely that they have." Schrodinger's cat with a longer radioactive half life is still unknown if alive or dead.

So? I'll take a 90% survival cancer over a 10% one.

Everything in security is about cost/benefit. If it were absolute, you'd be continuously issuing new keys. "Oops, maybe there's an unknown 0-day and our keys are compromised. Better issue new ones. Oops, maybe there's an unknown 0-day and our keys are compromised. Better issue new ones. Oops, maybe there's an unknown 0-day and our keys are compromised. Better issue new ones."

The fact that you're not doing that means you have, at least implicitly, evaluated the cost of continually issuing new keys to be too high given the minimal benefit of doing that. If you choose to reissue new keys in the face of the heartbeat vulnerability you have, at least implicitly, evaluated the benefit of issuing new keys as being worth the cost.

Lowering the risk of compromise because "we run OpenBSD and haven't seen any unexplained OpenSSL crashes (that would be possible indications of heartbeat exploit attempts that were stopped because of the hardened allocator)" means that the benefit of issuing new keys is reduced. Maybe you still decide that the benefit outweighs the cost; in this case, you're in the same situation. But maybe you decide that, given OpenBSD's allocator, the benefit is now below the cost because you judge your risk to be low enough, so you don't reissue keys. In this case, at least in your estimation, you have improved your situation over the first case: you have saved the difference between your evaluation of the benefit and your evaluation of the cost.

In other words, the hardened allocator never makes things worse and it can make things better. That's an improvement.
