
Comment Re:Are you kidding (Score 1) 818

The Greens putting up the Chancellor isn't even necessary. Hell, it's not even necessary for them to be in government to have an influence. That's the beauty (and in some other cases the shame) of it: the mere threat that voters could move from your party to $party because they have $topic on their agenda already does a lot.

In the 80s, the Greens did not really play a role in European parliaments, at least not in a way that suggested they would reach any kind of government position any time soon. Still, their positions (ecology and sustainability) were embraced by the established parties quite quickly once those parties saw people leaving and voting Green precisely because of those positions. The Greens still held no governmental role in the 90s, but their positions and demands were already being co-opted by those in power, who feared that more voters would move away if they didn't adopt them.

It's a common misconception that your party needs to rule, or at least be part of the government, for you to realize your ideas. All it really takes is for those in government to fear the loss of votes if they don't pick up your ideas.

Comment Re:I think you're working from a few false assumpt (Score 1) 235

Of course there is the risk that changes in a system will introduce new bugs, but those bugs are not under your control. And whether or not your underlying system changes is not entirely up to the system's maker; it also depends on whether you deploy the new version.

Also, it is very, very rare that a change in an underlying system rips open a critical security hole, at least one you wouldn't have noticed from the change log. Looking back, I can't remember an instance where that happened to us. We had quite a few compatibility issues, which, because they forced code changes, carried the potential to introduce new security holes of our own making, but I don't remember a single security issue in existing code caused purely by a version change.

Comment Re:So ... (Score 4, Insightful) 93

"Wearable" isn't something bad by definition. It's just that the approach they take to it could not be worse.

Everything that passes for "wearable" today is basically a reskinned, retooled and reshaped smartphone. That's not what wearable computing can or should be. A wristwatch that is essentially a smartphone has nothing to do with wearable computing; it's just a smartphone in a different form factor. Where is the "wearable" benefit?

If you want to create a wearable, create something where we actually benefit from wearing it rather than sticking it in a pocket. The least I'd expect from a wearable is that it keeps my hands free and offers either an HMD or some output interface that doesn't require me to take my eyes off whatever I'm busy with. Otherwise there is exactly zero need to "wear" the gadget; I might as well hold it in my hand.

Comment Re:False sense of security (Score 2) 188

What I really don't like about the whole statement is the implied assumption that closed source ever offered any kind of better protection.

You know the main difference between an OSS audit and a closed source one? With closed source, when I find something, I can't go to you and say "hey, psst, take a look at $code. Maybe you see something interesting...", because someone in a badly fitting suit tells me to shut up about it.

Comment I think you're working from a few false assumption (Score 5, Insightful) 235

First, the bugs in a given program are not infinite in number. By definition: the code itself is finite, and finite code cannot contain infinitely many bugs. Also, due to the nature of code and how it is created, patching one bug usually takes care of many others as well. If you have a buffer overflow in your input routine, you need to patch it only once, in that routine, not everywhere the routine is called.
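
To make that concrete, here's a minimal C sketch (the routine and variable names are hypothetical): the overflow lives in the one input routine, so a single fix there covers every call site.

    #include <stdio.h>
    #include <string.h>

    /* The one input routine. Imagine the old version used gets(buf),
     * which cannot limit input length: a classic buffer overflow.
     * Fixing it here fixes every caller at once. */
    static void read_line(char *buf, size_t len)
    {
        /* fgets() reads at most len-1 characters, so buf cannot overflow */
        if (fgets(buf, (int)len, stdin) == NULL) {
            buf[0] = '\0';
            return;
        }
        buf[strcspn(buf, "\n")] = '\0'; /* strip the trailing newline */
    }

    /* Two of many call sites; neither needs to change after the fix */
    static void greet(void)
    {
        char name[32];
        read_line(name, sizeof name);
        printf("Hello, %s\n", name);
    }

    static void login(void)
    {
        char user[32];
        read_line(user, sizeof user);
        printf("Logging in %s\n", user);
    }

    int main(void)
    {
        greet();
        login();
        return 0;
    }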

I have spent a few years (closer to decades now) in IT security with a strong focus on code security. In my experience, the effort needed to find bugs is not linear: unless the code changes, bug hunting becomes increasingly time consuming. It would be interesting to analyze this in depth, but gut feeling says the curve is closer to logarithmic. You find a lot of security issues early in development (plenty of quick wins), issues that can be caught even by static analysis (the mentioned overflow bugs, unsanitized SQL input and the like), whereas it takes ever more time to hunt down the elusive bugs that depend on timing or race conditions, especially in interaction with specific other software.
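
As an example of that "quick win" class, here's a hedged C sketch using SQLite (the users table and column names are made up for illustration): the string-concatenation variant shown in the comment is exactly what a static analyzer flags, and the prepared statement below it is the fix.

    #include <stdio.h>
    #include <sqlite3.h>

    /* BAD, shown only for contrast: building SQL by string concatenation.
     * A name like  x' OR '1'='1  changes the query's meaning. This is
     * the kind of unsanitized input even static analysis catches.
     *
     *   snprintf(sql, sizeof sql,
     *            "SELECT id FROM users WHERE name = '%s'", name);
     */

    /* GOOD: a parameterized query; the driver keeps data and SQL apart */
    static int find_user(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt;
        int id = -1;

        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);

        sqlite3_finalize(stmt);
        return id;
    }

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db, "CREATE TABLE users(id INTEGER, name TEXT);"
                         "INSERT INTO users VALUES (7, 'alice');",
                     NULL, NULL, NULL);
        printf("alice -> %d\n", find_user(db, "alice"));
        printf("injection attempt -> %d\n", find_user(db, "x' OR '1'='1"));
        sqlite3_close(db);
        return 0;
    }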

Following this, I cannot agree that you can't "buy away" your bug problems. A sensible approach (OK, I call it sensible because it's mine) is to deal with the static/easy bugs in house (good devs can and will avoid them altogether), then hire a security analyst or two, and only THEN offer bug bounty rewards. You will usually only get a few reports to deal with before things go quiet.

Exploiting bugs follows the same rules as the rest of the market: finding the bug and developing an exploit for it has to be cheaper than what you hope to reap from using it. If you offer a reward on a level with the expected gain (adjusted for things like the legality of reporting versus using it, and the fact that the reporter needn't actually develop the exploit), you will find someone to squeal. There's one more thing working in your favor: only the first one to squeal gets the money, and unless you know about a bug that I don't, chances are I'll have a patch rolled out before you get your exploit deployed. Your interest in telling me is proportional to how quickly I react once I know: the smaller I can make the window in which the bug is usable, the smaller your window for making money with the exploit, and the more attractive my offer to pay you for the report becomes.
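
That break-even logic fits in a few lines of C; every number below is invented purely to show the shape of the decision, not real market data.

    #include <stdio.h>

    /* Hypothetical figures. The point: once the vendor's reward
     * approaches the attacker's *expected* gain, discounted by
     * exploit development cost, legal risk and the shrinking
     * exploitation window, reporting wins. */
    int main(void)
    {
        double exploit_gain   = 50000.0; /* hoped-for proceeds from using the bug */
        double dev_cost       = 15000.0; /* turning the bug into a working exploit */
        double window_factor  = 0.4;     /* fast patching shrinks the usable window */
        double legal_discount = 0.5;     /* using it is a crime; reporting is not */
        double bounty         = 20000.0; /* the vendor's reward for reporting */

        double expected_use = (exploit_gain * window_factor - dev_cost) * legal_discount;

        printf("expected value of exploiting: %.0f\n", expected_use);
        printf("value of reporting:           %.0f\n", bounty);
        printf("rational move: %s\n", bounty >= expected_use ? "report" : "exploit");
        return 0;
    }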

Comment Re:Not that good (Score 2) 188

Sorry, but no. Just because it produces revenue doesn't mean they have an incentive to do it properly. They have an incentive to do it just well enough that people buy it. That does not necessarily mean the software is of high quality.

What is necessary to that end is that the software appeals to decision makers, and they are rarely, if ever, people qualified to assess the technical quality of code.

For reference, see SAP.

Comment Re:Not that good (Score 3, Insightful) 188

Would you bet your life on closed source software not having bugs that we simply don't know about, because it's closed source and hence can NOT be sensibly reviewed?

Closed source and open source share one problem: both can and will have bugs. Open source merely has the advantage that its bugs will be found and published. With closed source, NDAs usually keep you from publishing anything you might come across, ensuring that knowledge of those bugs stays within certain groups that have a special interest in not just knowing about them but abusing them.
