Comment Re:Writing On The Wall Folks (Score 1) 167

What won't be happening within 10 years is a cute GUI that a technically unskilled business guy can use to *specify* what he wants. The pointy-haired boss will still need to speak a computer's language, or at least be able to respond intelligently to disambiguation questions from the computer. What is already happening, and will continue, is the extremely rapid improvement of tooling. We are reaching the point where a hundred cowboys writing in C will not be able to keep up with a compiler tool chain that produces binaries which meet a specification checked for logical consistency and which have (locally) optimal performance. In short, we will start producing reliable software. There are some amazing things going on with Coq these days, and they most definitely require an extremely skilled person to get the specification written. The huge improvement would be making these specifications simple enough for a mortal programmer to write (rather than a PhD mathematician).

Comment Re:Yes (Score 1) 1067

The real problem is that the arguments to Divide are *not* of the exact same type. Assuming the numerator and denominator are both multiplied by -1 to normalize the sign, the numerator is an integer, while the denominator is a positive, non-zero number. In code, you should not be able to invoke a/b unless you are inside code that proves that b != 0 (ie: inside an if (b != 0), or because b's type is a non-zero integer). In functional languages, there would usually be a match or switch statement for this.
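A minimal sketch of that idea in Rust, using the standard library's NonZeroI32 as the "non-zero integer" type; the divide function and values here are just for illustration:

```rust
// The denominator's type carries the proof that it is non-zero, so a
// "divide by zero" can never reach the division itself.
use std::num::NonZeroI32;

fn divide(numerator: i32, denominator: NonZeroI32) -> i32 {
    numerator / denominator.get() // safe: the type rules out zero
}

fn main() {
    let raw = 4;
    // The match/switch mentioned above: constructing the proof from raw input.
    match NonZeroI32::new(raw) {
        Some(d) => println!("{}", divide(12, d)),
        None => println!("refusing to divide by zero"),
    }
}
```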

Comment Re:Bah! Media! (Score 5, Interesting) 173

SF86 data is extraordinarily sensitive. What they mean is that the attackers made off with a database of the financial problems, drug habits, family problems, hidden crimes, and sex fetishes of anybody who is working on anything sensitive. This data will determine who comes home to a hooker in his bed with requests for information and a crowbar in one hand and a bag of illegal drugs in the other. I'd say that the information is so sensitive that it may actually weaken security to continue this practice of having all of these confessions written down. I mean... if you can approach your boss and say "hey, I need to take a few weeks off to go to jail!" and he responds "ok, you have plenty of leave!", then you are far less open to coercion than if you go into a panic over being found out by your boss for adultery. ("gah! I'll lose my clearance and never work again!")

Comment Re:Oversimplified (Score 1) 74

Exactly. Encryption hides the conversation from external observation, but it does nothing to prevent one party from sending malicious data to the other. In fact, it weakens security in the sense that visibility into these kinds of problems is lost. This is why, in a corporate setting, you may be asked to consent to surveillance of your network connections for legitimate security reasons.

Comment Re:Oversimplified (Score 1) 74

We have been trying to handle security by wrapping various "condoms" around software that doesn't defend itself from bad input. That allows the software to be used without fixing it. But this whole strategy is about to break with the widespread use of encryption. We currently protect traffic by inspecting it to spot abuse of the recipient of a message; and yes, that is functionally identical to surveillance in how it works. Ultimately, we need to do something like what LANGSEC suggests and require very strong input handling that only accepts "in the language" inputs. It's an admission that Postel's Law needs an update: be extremely conservative in what we accept, and presume that all out-of-spec inputs are designed to put us into an illegal state.
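A minimal sketch of that "only accept what is in the language" idea in Rust; the four-digit length-prefix wire format here is made up purely for illustration:

```rust
// The handler only ever sees messages that are provably in the expected
// language; everything else is rejected before any processing happens.
struct ValidMessage { payload: Vec<u8> }

fn recognize(input: &[u8]) -> Option<ValidMessage> {
    if input.len() < 4 { return None; }
    let len_str = std::str::from_utf8(&input[..4]).ok()?;
    let len: usize = len_str.parse().ok()?;
    // Be conservative: the declared length must match the bytes actually sent.
    if input.len() != 4 + len { return None; }
    Some(ValidMessage { payload: input[4..].to_vec() })
}

fn handle(msg: ValidMessage) {
    // By construction, the payload length matches what the sender declared.
    println!("handling {} bytes", msg.payload.len());
}

fn main() {
    match recognize(b"0005hello") {
        Some(msg) => handle(msg),
        None => eprintln!("rejected out-of-spec input"),
    }
}
```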

Comment Re:Big Data != toolset (Score 1) 100

Actually, the biggest problem with RDBMS and similar tools is that you are expected to mutate data in place, and to mash it into a structure optimized for in-place updates. Most of the zoo of new tools are about supporting a world in which incoming writes are "facts" (ie: append-only, uncleaned, unprocessed, and never deleted), while all reads are transient "views" (built from combinations of batch jobs and real-time event processing) that can be automatically recomputed, like database indexes.
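A minimal sketch of that split in Rust, assuming a toy account-balance domain: facts go into an append-only log, and the read side is a disposable view recomputed by folding over it:

```rust
// Writes are immutable facts appended to a log; nothing is updated in place.
struct Fact { account: String, amount: i64 }

// The read side: a view recomputed from the facts, like a database index.
fn balance_view(log: &[Fact]) -> std::collections::HashMap<String, i64> {
    let mut view = std::collections::HashMap::new();
    for fact in log {
        *view.entry(fact.account.clone()).or_insert(0) += fact.amount;
    }
    view
}

fn main() {
    let mut log = Vec::new();
    log.push(Fact { account: "alice".into(), amount: 100 });
    log.push(Fact { account: "alice".into(), amount: -30 });
    // The view is disposable: drop it and recompute from the facts at any time.
    println!("{:?}", balance_view(&log));
}
```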

Comment Re:Big Data != toolset (Score 1) 100

Except, if you are talking about a centralized database tool, you already know that the default design of "everybody writes into the centralized SQL database" is a problem. Therefore, people talk about alternative tools, which are generally designed around a particular set of data structures and algorithms as the default cases. A lot of streaming-based applications (ie: log aggregation) are a reasonable fit for relational databases except for the one gigantic table that is effectively a huge (replicated, distributed) circular queue: it eventually gets full and must insert and delete data at the same rate. Or the initial design already rules out anything resembling a relational schema, etc.
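A minimal sketch of that circular-queue behaviour in Rust, using a small bounded in-memory buffer as a stand-in for the giant log table:

```rust
// Once the buffer is full, every insert evicts the oldest row, so inserts
// and deletes proceed at the same rate.
use std::collections::VecDeque;

struct BoundedLog<T> { capacity: usize, rows: VecDeque<T> }

impl<T> BoundedLog<T> {
    fn new(capacity: usize) -> Self {
        BoundedLog { capacity, rows: VecDeque::with_capacity(capacity) }
    }
    fn insert(&mut self, row: T) {
        if self.rows.len() == self.capacity {
            self.rows.pop_front(); // delete at the same rate we insert
        }
        self.rows.push_back(row);
    }
}

fn main() {
    let mut log = BoundedLog::new(3);
    for i in 0..5 { log.insert(i); }
    println!("{:?}", log.rows); // the two oldest rows have been evicted
}
```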

Comment Re:let's be real for a second (Score 5, Informative) 429

That's a pretty ridiculous statement. My actual experience says just the opposite. I work at a security company that is largely made up of guys who just got out of Israeli SIGINT (their mandatory service). The older guys write kernel code, know what C compiles to, and see the vulnerabilities intuitively. The newer ones have quite a bit more experience in high-level languages, while being almost oblivious to the abstraction breakage that leads to security holes. At best, I'd say that the older developers get stuck dealing with older code bases (the ones making the money) and older tools (because the newer developers can't deal with them anyway). But on security... prior to the mid 1990s, everybody in the world seemed to be working on a compiler of some kind. That deep compiler knowledge is the most important part of designing and implementing security against hostile input; ie: LANGSEC.

Comment Re:Well done! (Score 1) 540

Perhaps not directly. But the difference between public schools and private schools is impossible to overstate, and it is strongly correlated with households that have one full-time working parent and one part-time or flex-schedule parent. The tuition (almost regardless of how much it is) immediately filters out financially overwhelmed and uninvolved parents. Even among the parents who can afford it, some schools also have involvement quotas that will cause a pair of full-time parents to drop out. Morale and motivation in private schools are extremely high, akin to those of people working in good jobs, which counts for about two or three grade levels. The end result is that you have a kid surrounded by children who know nothing other than a 7-day week of school, getting up at 5am to wrap up missed studies, music lessons, and sports. Even if they do spend a bit of time goofing on iPads and watching TV, it is nothing like what happens with parents who can only show up long enough to sleep and go back to work. Even people who are poor will try to move their kids into the better school districts. A few will even break the law to do it, with few regrets when they get caught. (You can get sued by the county for doing this.)

Comment Re:It depends (Score 1) 486

The silliness of the paper is that there is no reason at all to keep previously submitted chunks in memory; it's as if somebody discovered that naive string appends are quadratic in memory allocation. On day 2 of everybody's first job, they learn to just append strings to a list and either flatten them into the one big string you need at the end, or evict the head of the list out somewhere (disk?) once a reasonable chunk size (optimize for block size) or amount of time (optimize for latency) has passed. I would imagine that in this case, you should simply queue up writes in memory in a constant-sized, pre-allocated buffer, and flush to disk as soon as it reaches the size of a disk block.
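A minimal sketch of that constant-space approach in Rust, using the standard library's BufWriter as the pre-allocated buffer; the file name and record format are made up:

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("output.log")?;
    // 4 KiB buffer as a stand-in for "one disk block".
    let mut writer = BufWriter::with_capacity(4096, file);
    for i in 0..1_000_000 {
        // Each record is appended to the buffer; a flush to disk happens
        // automatically whenever the buffer fills, so the whole string is
        // never held in memory.
        writeln!(writer, "record {}", i)?;
    }
    writer.flush()?; // push out whatever is left in the final partial block
    Ok(())
}
```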

Comment What an Idiotic Paper (Score 1) 486

Holding the string in memory serves no purpose at all if you are just appending to it. Frankly, this += of strings issue is the most common "Smart but Green Self-Taught" versus "Computer Sci grad" problem you will see with new hires. Appending strings can be O(n^2) when the strings are immutable, and that applies to most high-level environments. Even Metasploit had this issue at one point, and it was written by some very smart people. So everybody learns to keep appending to a list and then flatten it into a string at the end. But the tie-in with disk just makes the paper totally dumb: if you won't be reading the queue of string chunks, just flush them to disk immediately so that the code runs in constant space, relieving the memory allocator.
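A minimal sketch of the append-to-a-list-then-flatten pattern in Rust (the chunk contents are just for illustration). Repeated s = s + chunk on immutable strings recopies everything and goes quadratic; collecting the chunks and flattening once stays linear:

```rust
fn main() {
    let mut chunks: Vec<String> = Vec::new();
    for i in 0..10_000 {
        chunks.push(format!("chunk-{};", i)); // amortized O(1) per append
    }
    let flattened = chunks.concat(); // single linear pass at the end
    println!("total length: {}", flattened.len());
}
```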

Comment Re:Number 5 (Score 1) 51

By what technical means do you prevent data leakage, though? You need to specify what the system (and its users) will NOT do. Defending against bad input (and bad state transitions) is the foundation for everything else, because there is no technical means of enforcing any other security property without it. The attacker's game is to reach and exploit states that are undefined or explicitly forbidden. Think of the Heartbleed bug as an example of a failure in 5 mooting 6: bad input causes a web server to cough up arbitrary secrets, which are then used to violate every other security constraint. For 5 mooting everything, including data leakage protections: SQL injections can be used to extract sensitive data out of web sites (ie: SilkRoad user lists presented back to the administrator with ransom demands). I work on a data leakage protection system, and it's based on earlier intrusion detection and prevention systems for a reason. I regard Intrusion Detection and Intrusion Prevention systems as essentially trying to force a fix of number 5 over a zoo of applications that didn't get it right; it amounts to taking action on a connection that doesn't look like it's safely following the protocol.

Comment Number 5 (Score 1) 51

Number 5 is the most important. It is about defending against bad input. When an object (some collection of functions and mutable state) has a method invoked, the preconditions must be met, including message validation and the current state. A lot of code has no well-defined interfaces (just global state). Some code has state isolated behind functions, but no documented (let alone enforced) preconditions. The recommendation implies a common practice in strongly typed languages: stop passing around raw ints and strings. Consume input to construct types whose existence proves that they passed validation (ex: a type "@NotNull PositiveEvenInteger" as an argument to a function). DependentTypes (types that depend on values) and DesignByContract are related concepts. With strong enough preconditions, illegal calling sequences can be rejected by the compiler and the runtime as well. If secure code is ever going to be produced on a large scale, people have to get serious about using languages that can express and enforce logical consistency.
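A minimal sketch of that validate-at-the-boundary pattern in Rust, using the PositiveEvenInteger example above (the type and functions are hypothetical, named only for illustration):

```rust
// The only way to obtain the type is through the validating constructor,
// so its existence proves the input already passed the check.
#[derive(Debug, Clone, Copy)]
struct PositiveEvenInteger(i64);

impl PositiveEvenInteger {
    fn new(n: i64) -> Option<Self> {
        if n > 0 && n % 2 == 0 { Some(PositiveEvenInteger(n)) } else { None }
    }
    fn get(self) -> i64 { self.0 }
}

// Any function taking this type never has to re-validate its argument.
fn halve(n: PositiveEvenInteger) -> i64 {
    n.get() / 2
}

fn main() {
    match PositiveEvenInteger::new(42) {
        Some(n) => println!("{}", halve(n)),
        None => println!("input rejected at the boundary"),
    }
}
```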

Comment Re:Whoa 1.3x (Score 1) 636

Bad algorithms are the major difference between a totally self-taught programmer and a programmer who has learned some actual computer science. Yes, "1000x speedup" is not ridiculous at all. Use the wrong algorithm, and you can make this speedup number as large as you want by feeding it a larger data set.
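A minimal sketch in Rust of why the factor scales with the data: a quadratic duplicate check versus a linear one over the same (made-up) input. Double the input and the gap roughly doubles, so the speedup can be made as large as you like:

```rust
use std::collections::HashSet;
use std::time::Instant;

// O(n^2): compare every pair.
fn has_duplicate_quadratic(data: &[u64]) -> bool {
    for i in 0..data.len() {
        for j in (i + 1)..data.len() {
            if data[i] == data[j] { return true; }
        }
    }
    false
}

// O(n): remember what has been seen in a hash set.
fn has_duplicate_linear(data: &[u64]) -> bool {
    let mut seen = HashSet::new();
    data.iter().any(|x| !seen.insert(x))
}

fn main() {
    let data: Vec<u64> = (0..20_000).collect();

    let t = Instant::now();
    let slow = has_duplicate_quadratic(&data);
    println!("quadratic: {} in {:?}", slow, t.elapsed());

    let t = Instant::now();
    let fast = has_duplicate_linear(&data);
    println!("linear:    {} in {:?}", fast, t.elapsed());
}
```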

Comment Or deal with pointer arithmetic properly (Score 1) 125

This is only an issue because of unchecked pointer arithmetic. With garbage-collected and range-checked objects, you can't take advantage of the co-location of data. In a JVM, if you try to cast an address to a reference to a Foo, it will throw an exception at the VM level. Indexing arrays? Push the index and the array on the stack, and it throws an exception if the index isn't in range when the indexing instruction runs. In these cases, pointer arithmetic isn't used. In some contexts, you MUST use pointer arithmetic. But if the pointer type system is rich enough (see Rust), the compiler will have no trouble rejecting wrong references, and even avoiding races involving them.

In C, an "int*" is not a pointer to an int. It is really a UNION of three things until the compiler proves otherwise: "ptr|null|junk". If the compiler is satisfied that it can't be "junk", its type is then a union of "ptr|null". You can't dereference this type directly; you must have a switch that matches it to one case or the other. The benefit is that you can never actually dereference a null pointer, and exceptions show up at the point where the non-null assumption began, rather than deep inside some code at a random usage of that nullable variable.

As for arrays, if an array "int[] y" is claimed, then that means y[n] points to an int in the same array as y[0]. Attempts to dereference should be proven by the compiler or rejected, even if that means that, like the nullable dereference, you end up having to explicitly handle the branch where the assumption doesn't hold.

You can't prove the correctness of anything in the presence of unlimited pointer arithmetic. You can set a local variable to true, and it can be false on the next line before you ever mention it, because some thread scribbled over it. Pointers are ok. Pointer arithmetic is not ok, except in the limited cases where the compiler can prove that it's correct. If the compiler can't prove it, then you should rewrite your code; or, in the worst case, annotate that area of code with an assumption that doubles as an assert, so that you can find that place right away when your program crashes.
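A minimal sketch of the "ptr|null" union with a forced match, in Rust (the values here are just for illustration):

```rust
fn main() {
    let values = [10, 20, 30];

    // A possibly-absent reference: the compiler rejects any dereference
    // that has not first matched away the None ("null") case.
    let maybe: Option<&i32> = values.first();
    match maybe {
        Some(v) => println!("first = {}", v),
        None => println!("no first element"),
    }

    // Checked indexing: an out-of-range access is an explicit branch,
    // not a scribble over unrelated memory.
    match values.get(7) {
        Some(v) => println!("values[7] = {}", v),
        None => println!("index 7 is out of bounds"),
    }
}
```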
