Comment Re:The libraries we choose (Score 4, Insightful) 239

Uh, isn't that true of any native library? Most C and C++ libraries don't do anything you couldn't do yourself

You've selectively quoted him. The full complaint was:

many people use jQuery (which is a large CPU-heavy library) to do things that can be done in fewer lines of straight javascript

Why use a library and 10 lines of library calls to do something that you could do in 5 lines of code? You should use libraries when the cost of reimplementing the functionality is higher than the cost of using the library.

Comment Re:More on this please! (Score 1) 495

There's a lot of ongoing work. Some of it is in formal models of programming languages: we had a paper in PLDI last year on a model for C (which has to be parameterised, because it turns out the C standards committee, C programmers, and C compiler writers have almost totally disjoint ideas about what C means). We're looking at combining this with formal models of CPUs to allow black-box testing of compilers (if we can prove that the output corresponds to the input then we don't need to verify a few million lines of compiler). There's white-box stuff, such as the Alive project from Utah and MSR looking at proving correctness of a number of the peephole optimisations in LLVM (although, unfortunately, it's so far managed to prove incorrectness of a depressing number of them).

Beyond that, there's a lot of ongoing work improving the proof tools. Coq, Isabelle, and HOL4 are still in active development, and Agda has gained an increasingly large following in recent years. The more you can automate the proofs, the more feasible this kind of thing becomes; in particular, the easier you make it to write modular proofs, the easier it is to maintain the code. Just after the Pentium FDIV bug, the CEO of Intel started pouring money into verification and made quite an insightful comment in this regard: he didn't care how much it cost to verify something the first time (it had to be cheaper than doing a full recall), but he needed it to be cheap to re-verify after changes. Centaur (formerly VIA's x86 chip division) incorporates a lot of verification into their CI system: they're a long way from full verification, but their verification tools catch and pinpoint a lot of things that would otherwise (probably) show up in simulator testing as 'something is broken in the design'.

There's also a growing push towards proof-carrying code. The F* language (from MSR and INRIA) is probably the best example in this space. It's a variant of F# that lets you write proofs in the same language as the code and only compiles when the code matches the proofs. Unfortunately, F* still has a long way to go before simple changes are easy. For example, they have a talk that walks through verifying a quicksort implementation. If you want to change the quicksort to a heapsort, you have to throw away most of the proof (and the proof is more complex than the implementation, though at least the input/output specifications are simple, so you can be confident the proof proves the right thing if it compiles). They're also using an SMT solver (Z3) on the back end, so there's no guarantee of termination: you get either a counterexample, a success message, or a timeout in the solver, and a timeout doesn't tell you whether the proof is correct or incorrect, just that it's too complex to machine check.

Comment Re:Does formal verification really prove correctne (Score 1) 495

Formal verification proves that the program corresponds to the specification. It doesn't prove that the specification is correct. It's really hard not to run into Gödel's incompleteness theorem and discover that the specification is actually more complex (and therefore more error-prone) than the program.
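For illustration, here's a minimal (purely hypothetical) example of an incomplete specification: suppose the spec for a sort routine only requires that the output be sorted, without requiring it to be a permutation of the input. This C function satisfies that spec perfectly while being useless:

    #include <stddef.h>

    /* "Verified" against an incomplete specification: the output is always
     * sorted, but nothing in the spec says it must be a permutation of the
     * input, so discarding the data is formally correct. */
    void sort(int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = 0;   /* sorted, and wrong */
    }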

Comment Re:An error? (Score 1) 142

I can think of two ways. Way one: you contract out the development to a company that has a paid version and an ad-supported version, and they accidentally give you the ad-supported APK to push to your customers instead of the paid one. Way two: you have an internal project to see whether you can push down the up-front cost of a phone by showing ads in various places (as the Kindle Fire does on the lock screen) and using that to subsidise the cost. You have the same people working on it as on the normal system, and one of them accidentally pushes to the wrong repo.

Comment Re:Whatya gonna do? (Score 2) 142

The difference is a lot lower at the higher end. My partner has just replaced her ageing Nokia 1020 (really nice phone and the hardware is still pretty solid, but Windows Phone 8.1 doesn't get security updates and has almost no third-party apps) with a second-hand Samsung. A similar iPhone is only 20-30% more expensive, but the killer is the lack of third-party OS support. When an iPhone stops getting security updates, it's basically dead. When a moderately popular Android device stops getting security updates, you can install LineageOS on it and it remains usable (my first-gen Moto G is older than her Nokia and a lot slower, but it's still getting fortnightly updates from LineageOS and I expect to keep using it until it physically dies).

Comment Re:well, he's not wrong (Score 1) 495

What if the pointer points beyond the end of some chunk of malloc'd memory?

Then you will get a trap. The bounds are set at malloc() time and cannot be extended (except by the code in malloc, which holds a pointer to a larger area which it can subset).

Merely checking array bounds is insufficient for any non-trivial C program.

Checking the array bounds is sufficient. Checking the bounds of the static type is not.
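As a minimal sketch of that distinction (the struct and function here are purely illustrative), consider the classic variable-length-struct idiom: the real bounds of the object are those of the allocation, fixed at malloc() time, not those of the static type.

    #include <stdlib.h>
    #include <string.h>

    struct message {
        int  length;
        char body[1];   /* old-style flexible array: the allocation extends past the static type */
    };

    struct message *make_message(const char *text)
    {
        size_t len = strlen(text);
        /* The malloc() call, not the declared type, defines the object's bounds. */
        struct message *m = malloc(sizeof(struct message) + len);
        if (m == NULL)
            return NULL;
        m->length = (int)len;
        /* Within the malloc'd bounds, but far beyond sizeof(m->body) == 1:
         * checking against the static type would reject this valid C,
         * checking against the allocation's bounds accepts it. */
        memcpy(m->body, text, len + 1);
        return m;
    }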

Comment Re:Idiots everywhere... (Score 1) 325

Forcing the companies to provide the means to unencrypt all the data passing through its services provides very little benefit

It provides no benefit, because the bad people will not use the backdoored encryption; they will use something else (if they buy a second-hand copy of Applied Cryptography, they can just type in the code listings for some secure algorithms and use their own version). On the other hand, the existence of a backdoor intrinsically makes a system insecure, so everyone else suffers because it becomes easier for criminals to gain access to their messages.

Comment Re:Cheap bounds-checking is possible: Cheri CPU (Score 1) 495

I am the author of the CHERI C/C++ implementation (and a bunch of the instructions and key parts of the abstract model). A few corrections:

does pointer bounds and 'const' qualifier checking and some protection of C++ private member variables in hardware

We don't enforce const in hardware, because it broke too much code. For example, the C standard library function strstr takes a const char* argument and returns a char* derived from it. Our hardware doesn't permit you to add permissions to a pointer (or increase its bounds), so if the const argument were stripped of store permission, the returned pointer would also lack it and perfectly valid code would trap. Instead, we add __input and __output qualifiers that allow this to work: if you cast a pointer to an __input-qualified one, the compiler removes store permissions and no pointer derived from that pointer can be used to modify an object.
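A short sketch of why the strstr case breaks hardware-enforced const (this is plain standard C; the function name is illustrative): the returned pointer aliases the caller's writable buffer, so stripping store permission at the const parameter would make this perfectly legal store trap.

    #include <string.h>

    /* strstr is declared as: char *strstr(const char *haystack, const char *needle);
     * The const on the parameter does not make the underlying buffer read-only. */
    void capitalise_foo(char *buf)
    {
        char *p = strstr(buf, "foo");   /* p points back into buf */
        if (p != NULL)
            p[0] = 'F';                 /* valid C: buf was never const */
    }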

sufficient to replace the security properties of virtual memory (with the caveat you can't unmap a page without killing the process)

CHERI doesn't require that you throw away the MMU, and some things work a lot better when you compose the two. The MMU is good for coarse-grained isolation; CHERI is good for fine-grained sharing. You can use the MMU to revoke objects (unmap the pages) without having to find and invalidate all pointers (which we can do: a couple of my students and I have added accurate garbage collection to C).
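A rough sketch of that coarse-grained, MMU-based revocation (hypothetical helper names, assuming a POSIX mmap): give the object its own mapping, and revoking it is a single munmap(), after which any use of a stale pointer faults without the allocator ever having to find those pointers.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Allocate an object in its own mapping so it can be revoked wholesale. */
    void *alloc_revocable(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }

    /* Revoke the object: every outstanding pointer into it now traps on use. */
    void revoke(void *obj, size_t size)
    {
        munmap(obj, size);
    }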

It's implemented in FPGA and only requires a few gates over a standard MIPS and is working with comparable performance without any exotic kinds of cache memory.

Note that we do rely on tagged memory, though we are able to implement this efficiently in commodity DRAM via a tag cache (some of my colleagues have done some great work on improving the efficiency here). We need one tag bit per 128 bits of memory (the tag bit tells you whether an aligned 128-bit value contains a pointer or ordinary data), which works out to 256 tag bits per 4 KiB page. You can read 256 bits in a single DRAM read, so anything with vaguely good cache locality rarely needs to pull the tags from DRAM.
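For concreteness, the arithmetic behind that figure (assuming the usual 4 KiB page; the constants below are just the numbers from the paragraph above):

    #define PAGE_SIZE_BITS    (4096u * 8u)                         /* bits in a 4 KiB page          */
    #define TAG_GRANULE_BITS  128u                                 /* one tag bit per 128-bit word  */
    #define TAGS_PER_PAGE     (PAGE_SIZE_BITS / TAG_GRANULE_BITS)  /* = 256 tag bits, i.e. 32 bytes */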

It's only "approaching" because I don't think it can solve use-after-free

We can implement an accurate garbage collector in C with CHERI. Some of my current work is attempting to push the performance of this up to where C programmers will be willing to just leave it on, because they won't be able to measure a slowdown vs malloc / free. For the Java interop work that I published at ASPLOS this year, we did very coarse-grained revocation, allowing the JVM to invalidate pointers that the native code held for longer than it claimed that it would, thus preventing any spatial or temporal memory safety violations in C code from affecting the Java heap in any way.

Comment Re:well, he's not wrong (Score 2, Interesting) 495

It's just way way way too easy to write buffer overflows in C

Technically: it's too easy to write buffer overflows in most implementations of C. The C specification makes accessing beyond the end of a buffer undefined behaviour, which means an implementation is allowed to do whatever it wants (and doesn't have to do the same thing consistently). Most implementations turn it into 'trample an arbitrary memory location', but it's completely valid to turn it into a recoverable trap. My research group has proposed a small set of CPU extensions (which we're talking to vendors about adopting) that enables an implementation of C providing exactly these guarantees; I've created that implementation, and we're able to build complex C programs with it. Any out-of-bounds array access for us is a CPU trap, which the OS turns into a signal. You can then kill the process, unwind the stack and kill the thread, or recover in some other way.
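A minimal sketch of what that recovery path can look like from the programmer's side. The overrun below is undefined behaviour in ISO C; a conventional implementation will usually just corrupt adjacent memory, and the specific signal delivered by a bounds-checking implementation is platform-dependent, shown here as SIGSEGV purely for illustration.

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Handler for the signal the OS raises when the hardware bounds check traps. */
    static void on_bounds_trap(int sig)
    {
        (void)sig;
        static const char msg[] = "out-of-bounds access trapped\n";
        write(STDERR_FILENO, msg, sizeof(msg) - 1);   /* async-signal-safe */
        _exit(1);                                     /* recover by exiting cleanly */
    }

    int main(void)
    {
        signal(SIGSEGV, on_bounds_trap);

        char *buf = malloc(16);
        if (buf == NULL)
            return 1;
        buf[16] = 'x';   /* one byte past the end: undefined behaviour in ISO C */
        free(buf);
        return 0;
    }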

Comment Re:Ada (Score 1) 495

Image processing routines are exactly the sort of code where compilers are good at eliding bounds checks. The input size is fixed, the filter size is fixed, and scalar evolution lets you calculate the maximum index up front using well-known analysis techniques. It's then easy to hoist the checks out of the loop and leave a single 'will this loop ever overflow' check, with no bounds checks during the calculation.
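A minimal C sketch of the shape of that transformation (the function and names are illustrative; in Ada the compiler does this for you): the per-element checks inside the loop nest collapse into one up-front check on the largest index the loops can ever generate.

    #include <assert.h>
    #include <stddef.h>

    /* 1-D box filter over a row of pixels.  Every index the loops generate is
     * x + k with x <= width - filter and k <= filter - 1, so the largest
     * possible index is width - 1: one check before the loops replaces a
     * bounds check on every access. */
    void box_filter_row(const float *in, float *out, size_t width, size_t filter)
    {
        assert(filter >= 1 && filter <= width);      /* hoisted bounds check */

        for (size_t x = 0; x + filter <= width; x++) {
            float acc = 0.0f;
            for (size_t k = 0; k < filter; k++)
                acc += in[x + k];                    /* no per-access check needed */
            out[x] = acc / (float)filter;
        }
    }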
