Comment Re:Redistribution (Score 1) 739

More people means more risk, more risk means more cost, that cost is distributed among the group by taking more of their income.

More people does not necessarily mean more risk. In fact, more people often means less risk, because a larger pool spreads the risk more evenly.

Ergo, more income is being redistributed. So although you are technically correct in your statement about causality, in the context of this scenario your statement is wrong.

Income redistribution plans apply to everyone. Obamacare applies only to those who get health insurance. Therefore, it's not an income redistribution plan.

Comment Re:Non-system Admin Here (Score 1) 863

That aside, will people please stop this constant masturbation about startup times? There are way, way, way more important things to deal with than edging out a few more seconds. Systemd provides me with no perceivable gains.

Since you argue about systemd from a user's perspective, your own argument implies that you shouldn't care at all what sort of init system is in place, so why be against systemd? Given your view, it's purely a distribution's choice what init system to use, because it's largely invisible to the user.

Comment Re: Snowden (Score 1) 221

For those of us living in the US (as in most democracies), I think most of the time they coincide reasonably well.

I doubt that very much. Most likely many of the ridiculous laws on the books are simply not enforced (like laws on sexual positions), but that's not the same as what's illegal being roughly synonymous with what's wrong. It's a much-repeated claim that pretty much everyone in the US breaks some law at least once a day.

Comment Re:Since these people still don't get it.... (Score 1) 79

Good luck starting a security company with the slogan "We provide 90% security!"

I don't know what you're talking about. If anything, that would be "90% fewer security vulnerabilities", which sounds like perfectly good marketing.

I do use Haskell myself for certain things, and I can tell you it's no problem creating insecure applications in Haskell.

I never said Haskell was the perfect language, just that it provides good examples of achieving the needed safety properties, in that it can be extended to verify many properties that may be of interest. I didn't define "safe" in my original post, as the required "safety" properties are domain-specific. Memory safety is the minimum needed, and it would automatically handle one of the most common vulnerabilities in single programs (buffer overflows). A language that can be used to specify and check the required properties is a "safe" language for a given domain. Many languages fit most problems; few languages may be safe for all problems (although those may be undesirable for other reasons).

If all we had were Haskell's DoS vulnerabilities, we would be in a much better place.

Most exploits are due to human errors they could have done in any language

Not a chance. Here's a list of the top 25 exploits from 2011. From that list, numbers 3 and 20 would have been eliminated outright by using any memory-safe language. Most memory-safe languages also implement overflow checking, so that takes number 24 off the list too.

Languages featuring parametric polymorphism can tag unsafe values received as user inputs, so you can easily solve vulnerabilities 1, 2, 10, 14, 22, and 23 (all you really need is parametric polymorphism -- I've even done this in C#).
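
Here's a rough sketch of the idea in Haskell (the module, names, and the toy sanitization policy are all made up for illustration): a phantom type parameter records whether a string came straight from the user, and the sinks only accept values carrying the Sanitized tag, so forgetting the sanitizing step is a compile error.

    {-# LANGUAGE EmptyDataDecls #-}
    module Tainted where

    data Untrusted      -- phantom tags; they have no runtime values
    data Sanitized

    newtype Tagged tag a = Tagged a

    -- Everything read from the user enters the program as Untrusted.
    fromUser :: String -> Tagged Untrusted String
    fromUser = Tagged

    -- The only way to obtain a Sanitized value is to go through this function.
    sanitize :: Tagged Untrusted String -> Tagged Sanitized String
    sanitize (Tagged s) = Tagged (filter (`notElem` "'\";") s)   -- placeholder policy

    -- Sinks (HTML output, shell commands, query builders) demand Sanitized
    -- input, so untrusted data can't reach them without passing the gate above.
    render :: Tagged Sanitized String -> String
    render (Tagged s) = s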

The crypto entries can be handled with session types that expect encrypted packets, not plaintext. Even the selection of an appropriate crypto algorithm can be constrained by various parameters and checked at compile time, e.g. a Haskell type class constraint could specify a whitelist of unbroken crypto algorithms for unrestricted use, versus those which are only good in restricted scenarios.
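
As a rough sketch (the class, the cipher names, and the stubbed encrypt body are hypothetical), the whitelist can literally be a type class: only the instances you declare are usable, so reaching for RC4 is a type error rather than an audit finding.

    {-# LANGUAGE EmptyDataDecls #-}
    module CipherWhitelist where

    import Data.Proxy (Proxy (..))
    import Data.Word  (Word8)

    data AES256GCM
    data ChaCha20Poly1305
    data RC4                          -- broken: deliberately given no instance

    -- The class is the whitelist; membership is checked by the compiler.
    class UnbrokenCipher c where
      cipherName :: Proxy c -> String

    instance UnbrokenCipher AES256GCM        where cipherName _ = "AES-256-GCM"
    instance UnbrokenCipher ChaCha20Poly1305 where cipherName _ = "ChaCha20-Poly1305"

    newtype Encrypted c = Encrypted [Word8]  -- ciphertext tagged with its cipher

    -- Building a packet requires a whitelisted cipher.
    encrypt :: UnbrokenCipher c => Proxy c -> [Word8] -> Encrypted c
    encrypt _ plaintext = Encrypted plaintext   -- placeholder; a real version runs the cipher

    ok :: Encrypted AES256GCM
    ok = encrypt Proxy [1, 2, 3]

    -- bad :: Encrypted RC4
    -- bad = encrypt Proxy [1, 2, 3]   -- rejected: no UnbrokenCipher instance for RC4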

Design by contract can handle precondition violations, i.e. number 18, and such contracts can be statically checked these days in Haskell, C#, and Ada.
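
For example, assuming LiquidHaskell's refinement annotations (the {-@ ... @-} comments are checked by the separate liquid verifier, not by GHC itself), a non-zero precondition on a divisor looks roughly like this:

    module Contracts where

    -- The refinement says the second argument must be non-zero; a call site
    -- that can't be proven to satisfy it is rejected at verification time.
    {-@ divide :: Int -> {d:Int | d /= 0} -> Int @-}
    divide :: Int -> Int -> Int
    divide n d = n `div` d

    good :: Int
    good = divide 10 2

    -- bad = divide 10 0   -- the verifier flags this: precondition d /= 0 violated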

A capability-secure language would handle the rest (what remains is mainly the "porous defenses" category). Few languages implement full capability security, and they remain vulnerable to the extent that they violate those principles.
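
The capability idea is simple to sketch in module form (this API is made up): an unforgeable value stands for the authority to use a resource, and code that was never handed that value has no way to reach the resource at all.

    module LogCap (LogCap, withLogCap, writeLog) where

    -- The constructor is not exported, so a LogCap can't be forged; it can
    -- only be obtained from whoever owns the underlying resource (e.g. main).
    newtype LogCap = LogCap FilePath

    -- The resource owner decides which code gets a capability.
    withLogCap :: FilePath -> (LogCap -> IO a) -> IO a
    withLogCap path body = body (LogCap path)

    -- The only way to touch the log file is through a capability value.
    writeLog :: LogCap -> String -> IO ()
    writeLog (LogCap path) msg = appendFile path (msg ++ "\n")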

The point is that the safety properties needed to address the most common security vulnerabilities have been known for decades. Capability security was invented in the 1960s, and memory safety has been available since the first Lisp. Unfortunately, many programmers aren't interested in safety properties: they're focused on raw speed but don't want to spend the verification effort needed to use that speed safely (Frama-C or Ada), or they want to avoid all verification effort, period (dynamically typed languages).

Comment Re:Since these people still don't get it.... (Score 1) 79

My point is that you can't depend on the language to protect you. I'm not saying you should ignore good technology choices because you know better than those crazy compiler people. But I do not believe that it is possible to create something that is completely unhackable.

With a theorem prover like Coq, you can statically check any property you want. So you'll have to more precisely define "unhackable" before "it is impossible to create something that is completely unhackable" can have a truth value.

If a theorem prover is used extensively, the only bugs you can introduce are specification bugs, i.e. we implemented X but actually needed Y. This can certainly introduce exploits in the sense that a customer is surprised when, say, some private information is revealed that they didn't expect to be revealed, but I'm not sure I would call this a hack. The device is working exactly as specified; it's the expectations themselves that were wrong.

Comment Re:Since these people still don't get it.... (Score 2) 79

Don't be naive... security is a deep and subtle problem, full of nasty surprises. There is no magic bullet solution... your "safe programming language" has thousands of bugs in its standard API and run-time

I think you should update your knowledge of this field. Then you would also realize that over 90% of the security vulnerabilities in programs written in unsafe languages wouldn't have occurred in safe languages. And of the vulnerabilities remaining in safe languages, 90% wouldn't have occurred if those languages had been designed to be capability-secure (which is just another safety property most languages ignore).

it won't prevent devs from concatenating SQL with user input

You can't do this in, say, Haskell, unless you write your own SQL interface library that builds solely on strings.
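
A toy sketch of what I mean (the types are made up, not a real library): the SQL text and the user-supplied values live in separate fields, so there is no concatenation step where input could turn into SQL syntax.

    module TypedQuery where

    -- Parameters stay data; they are never spliced into the SQL text.
    data Param = PInt Int | PText String

    data Query = Query
      { queryText   :: String    -- trusted, programmer-written SQL only
      , queryParams :: [Param]   -- untrusted values, bound at execution time
      }

    userByName :: String -> Query
    userByName name = Query "SELECT * FROM users WHERE name = ?" [PText name]

    -- A real driver would hand queryText and queryParams to the database
    -- separately (a prepared statement); nothing here ever concatenates them.
    describe :: Query -> String
    describe q = queryText q ++ "  -- with "
              ++ show (length (queryParams q)) ++ " bound parameter(s)"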

misusing threading primitives

You can't do this in languages with safe concurrency primitives, like Concurrent ML, Rust, and Haskell.
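
Haskell's STM (from the standard stm package) is the obvious example: shared state lives in TVars, the only way to touch a TVar is inside atomically, and the transfer below either commits both updates or neither, so there are no locks to grab in the wrong order.

    module Transfer where

    import Control.Concurrent.STM
      (TVar, atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

    -- Move money between two accounts in one atomic transaction.
    transfer :: TVar Int -> TVar Int -> Int -> IO ()
    transfer from to amount = atomically $ do
      a <- readTVar from
      b <- readTVar to
      writeTVar from (a - amount)
      writeTVar to   (b + amount)

    main :: IO ()
    main = do
      alice <- newTVarIO 100
      bob   <- newTVarIO 0
      transfer alice bob 25
      balances <- (,) <$> readTVarIO alice <*> readTVarIO bob
      print balances   -- (75,25): both updates happened, or neither would have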

bungling up an authentication protocol

Session types, which Haskell can verify too. Of course, all of these safety properties are encodable in even more powerful systems, like Agda or Coq.
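
Here's a very rough sketch of the session-type idea in Haskell (the protocol, names, and the placeholder password check are made up): the connection's type records the protocol state, so a handler that tries to send application data before the handshake simply doesn't type-check.

    {-# LANGUAGE DataKinds, KindSignatures #-}
    module AuthSession where

    data Stage = Unauthenticated | Authenticated

    -- The phantom index records where we are in the protocol.
    newtype Conn (s :: Stage) = Conn Int   -- the Int stands in for a real socket

    connect :: IO (Conn 'Unauthenticated)
    connect = pure (Conn 0)

    -- The only way to obtain an Authenticated connection is to run the handshake.
    authenticate :: String -> Conn 'Unauthenticated -> IO (Maybe (Conn 'Authenticated))
    authenticate password (Conn fd)
      | password == "letmein" = pure (Just (Conn fd))   -- placeholder check
      | otherwise             = pure Nothing

    -- Sending data requires the Authenticated state; skipping the handshake is a type error.
    sendData :: Conn 'Authenticated -> String -> IO ()
    sendData (Conn fd) payload = putStrLn ("fd " ++ show fd ++ ": " ++ payload)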

you must at minimum use an approach where (1) security is a primary design concern thru the entire product lifecycle, (2) security solutions are deployed in a structured/layered approach using (3) actual expertise, and (4) security is an ongoing program with both proactive and reactive elements.

So basically, safety properties are as important as domain requirements, and must be subject to the same rigour that domain features get, i.e. testing, verification, etc. And the safer the language, in the sense that more properties can be assured at compile time, the more features and safety properties you can verify, and the fewer security vulnerabilities you ship.

Comment Re:Since these people still don't get it.... (Score 2) 79

Last I checked, programming languages are designed and implemented by human beings. Even if a programming language can decrease your attack surface, there could still be an exploit associated with the interpreter/compiler or a mistake in implementation of the language.

That's what theorem provers are for. The seL4 microkernel was just formally verified as correct, we have verified C compilers, we have C verification tools (Frama-C, for instance), and we have higher-level, safer languages even at the systems level (Ada and SPARK/Ada). This isn't an open theoretical CS question anymore: these technologies can be, and have been, used very successfully to produce formally verified software, but the inertia behind outdated technologies and the hubris of developers who think they know better will continue to result in exploitable software.

The idea that there's a non-zero probability that your compiler, the theorem prover used to certify it, and the theorem prover used to certify that theorem prover may all have a bug that coincidentally permits an exploit is about as meaningful as the argument that, hypothetically, QM implies there's a non-zero probability that you could spontaneously be transported to the surface of the sun.

Comment Re:The $50,000 question... more energy out than in (Score 1) 315

Costs are a big issue, but the problem with fusion is getting more energy than is put in... and keeping that reaction sustained indefinitely.

I think the real problem is that we've fixated on only one or two fusion reactor designs for decades. Plasmas are hard to control, which is why real fusion power is taking so long to materialize. They've pursued the tokamak too long, I think, but they keep going after it because they're already so heavily invested. Time for some fresh thinking.
