Comment Re:An error? (Score 1) 142

I can think of two ways. Way one: you contract out the development to a company that has both a paid version and an ad-supported version, and they accidentally give you the ad-supported APK to push to your customers instead of the paid one. Way two: you have an internal project to see whether you can push down the up-front cost of a phone by showing ads in various places (as the Kindle Fire does on the lock screen) and using that to subsidise the cost. The same people work on it as on the normal system, and one of them accidentally pushes to the wrong repo.

Comment Re:Whatya gonna do? (Score 2) 142

The difference is a lot lower at the higher end. My partner has just replaced her ageing Nokia 1020 (really nice phone and the hardware is still pretty solid, but Windows Phone 8.1 doesn't get security updates and has almost no third-party apps) with a second-hand Samsung. A similar iPhone is only 20-30% more expensive, but the killer is the lack of third-party OS support. When an iPhone stops getting security updates, it's basically dead. When a moderately popular Android device stops getting security updates, you can install LineageOS on it and it remains usable (my first-gen Moto G is older than her Nokia and a lot slower, but it's still getting fortnightly updates from LineageOS and I expect to keep using it until it physically dies).

Comment Re:well, he's not wrong (Score 1) 487

What if the pointer points beyond the end of some chunk of malloc'd memory?

Then you will get a trap. The bounds are set at malloc() time and cannot be extended (except by the code inside malloc, which holds a pointer to a larger region that it can subset).

Merely checking array bounds is insufficient for any non-trivial C program.

Checking the array bounds is sufficient. Checking the bounds of the static type is not.
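To make the malloc-time-bounds idea concrete, here's a rough software sketch (all names invented for illustration; the real mechanism is an unforgeable hardware capability that carries base and length, not a struct you could forge):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical fat pointer standing in for a CHERI capability:
 * base and length are recorded at allocation time and can never
 * be widened afterwards. */
typedef struct {
    char *base;
    size_t length;
} bounded_ptr;

bounded_ptr bounded_malloc(size_t n) {
    bounded_ptr p = { malloc(n), n };
    return p;
}

/* Each access is checked against the bounds fixed at allocation time;
 * going past the end becomes a trap (modelled here as abort) rather
 * than a silent write into whatever happens to be adjacent. */
char checked_load(bounded_ptr p, size_t i) {
    if (i >= p.length) {
        fprintf(stderr, "bounds trap: index %zu >= length %zu\n",
                i, p.length);
        abort();
    }
    return p.base[i];
}
```

In hardware the check is free in the common case; the point of the sketch is only that the bound being checked is the allocation's, not the static type's.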

Comment Re:Idiots everywhere... (Score 1) 323

Forcing the companies to provide the means to unencrypt all the data passing through it's services provides very little benefit

It provides no benefit, because the bad people will not use the backdoored encryption; they will use something else (if they buy a second-hand copy of Applied Cryptography, they can just type in the code listings for some secure algorithms and use their own implementation). On the other hand, the existence of a backdoor makes a system intrinsically insecure, so everyone else suffers because it becomes easier for criminals to gain access to their messages.

Comment Re:Cheap bounds-checking is possible: Cheri CPU (Score 1) 487

I am the author of the CHERI C/C++ implementation (and a bunch of the instructions and key parts of the abstract model). A few corrections:

does pointer bounds and 'const' qualifier checking and some protection of C++ private member variables in hardware

We don't enforce const in hardware, because it broke too much code. For example, the C standard library function strstr takes a const char* argument and returns a char* derived from it. Our hardware doesn't permit you to add permissions to a pointer (or increase its bounds), and so the returned pointer also lacks the store permissions. Instead, we add __input and __output qualifiers that allow this to work: if you cast a pointer to an __input-qualified one then the compiler removes store permissions and no pointer derived from that pointer can be used to modify an object.
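The strstr problem is easy to demonstrate in ordinary C (function name below is mine, not from the post):

```c
#include <assert.h>
#include <string.h>

/* Why hardware-enforced const broke code: strstr's standard prototype,
 *     char *strstr(const char *haystack, const char *needle);
 * returns a writable pointer derived from a const-qualified argument.
 * If hardware never lets a derived pointer regain a permission, the
 * returned pointer would lack store permission and the write below
 * would trap, even though it is legal C today. */
char strstr_launders_const(void) {
    char buf[] = "find the needle here";
    const char *view = buf;              /* read-only view of writable data */
    char *hit = strstr(view, "needle");  /* const silently dropped */
    hit[0] = 'N';                        /* fine today: buf itself is writable */
    return buf[9];
}
```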

sufficient to replace the security properties of virtual memory (with the caveat you can't unmap a page without killing the process)

CHERI doesn't require that you throw away the MMU and some things work a lot better when you compose the two. The MMU is good for coarse-grained isolation, CHERI is good for fine-grained sharing. You can use the MMU to revoke objects (unmap the pages), without having to find and invalidate all pointers (which we can do - a couple of my students and I have added accurate garbage collection to C).

It's implemented in FPGA and only requires a few gates over a standard MIPS and is working with comparable performance without any exotic kinds of cache memory.

Note that we do rely on tagged memory, though we are able to efficiently implement this in commodity DRAM via a tag cache (some of my colleagues have done some great work on improving the efficiency here). We need one tag bit per 128 bits of memory (the tag bit tells you whether an aligned 128-bit value contains a pointer or normal data), so 256 bits per page. You can read 256 bits from a single DRAM read, so anything with vaguely good cache locality rarely needs to pull the tags from DRAM.
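The overhead arithmetic, spelled out (constants below assume a conventional 4 KiB page, as in the post):

```c
#include <assert.h>
#include <stddef.h>

/* One tag bit per 128-bit (16-byte) granule of memory, so a 4 KiB
 * page needs 4096 / 16 = 256 tag bits -- readable in one DRAM burst. */
enum {
    PAGE_BYTES        = 4096,
    GRANULE_BYTES     = 128 / 8,                   /* 128 bits of data per tag */
    TAG_BITS_PER_PAGE = PAGE_BYTES / GRANULE_BYTES /* = 256 */
};

size_t tag_bits_per_page(void) {
    return TAG_BITS_PER_PAGE;
}
```

That works out to under 0.8% storage overhead before the tag cache absorbs most of the traffic.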

It's only "approaching" because I don't think it can solve use-after-free

We can implement an accurate garbage collector in C with CHERI. Some of my current work is attempting to push the performance of this up to where C programmers will be willing to just leave it on, because they won't be able to measure a slowdown vs malloc / free. For the Java interop work that I published at ASPLOS this year, we did very coarse-grained revocation, allowing the JVM to invalidate pointers that the native code held for longer than it claimed that it would, thus preventing any spatial or temporal memory safety violations in C code from affecting the Java heap in any way.

Comment Re:well, he's not wrong (Score 2, Interesting) 487

It's just way way way too easy to write buffer overflows in C

Technically: it's too easy to write buffer overflows in most implementations of C. The C specification makes accessing beyond the end of a buffer undefined behaviour, which means an implementation may do whatever it wants (and doesn't have to do the same thing consistently). Most implementations implement this as 'trample an arbitrary memory location', but it's completely valid to turn it into a recoverable trap. My research group has proposed a small set of CPU extensions, which we're discussing with vendors, that allow an implementation of C (one I've created, and with which we can build complex C programs) to provide exactly these guarantees. For us, any out-of-bounds array access is a CPU trap, which the OS turns into a signal. You can then kill the process, unwind the stack and kill the thread, or recover in some other way.

Comment Re:Ada (Score 1) 487

Image processing routines are exactly the sort of code where compilers are good at eliding bounds checks. The input size is fixed, the filter size is fixed, and scalar evolution lets you calculate the maximum index up front using well-known analysis techniques. It's then easy to hoist the checks out of the loop, leaving a single 'will this loop ever overflow?' check and no bounds checks during the calculation.

Comment Re:Yes, go ahead! (Score 4, Informative) 487

Except that Rust is kinda a bit like C except with a formal verifier built in so you can prove you don't have memory errors

Please stop repeating this. Rust has a type checker; most languages have a type checker. The type system in Rust is stricter than C's (though you can implement the same thing as a library in C++), but it is not a formally verified type system, and the implementation of the type checker (which is not a formal verifier) is also unverified (and can't be until the type system itself is). If you want a language with a formally verified type system, look at Pony. If you want a language that integrates formal verification, look at F*. If you want to use Rust, that's fine, but stop claiming it has features that it doesn't.

Comment Re: Yes, go ahead! (Score 4, Informative) 487

I work with a group that does formal verification, and you also seem to be talking from a position of ignorance. Currently, the record for low-cost formally verified software is held by the NICTA team behind seL4: their figure is around 30 times the cost of developing normal software using best practices. A few caveats for this number:

  • The baseline assumes the effort involved in creating a detailed specification and a test suite with full coverage of it. Few projects actually have this level of QA, so add another factor of 2-4 to the cost relative to most projects.
  • They're employing mathematicians on fairly low (academic) salaries for the verification. In an industrial setting, with those skills in such demand, you'd find it difficult to pay them less than double that.
  • Their entire software stack is around 10,000 lines of code and they've not yet shown that their verification effort scales linearly with the complexity of the software.
  • The cost of refactoring is close to the cost of initial development, as the proofs are often not reusable after modifications to the code.
  • It was a whole 6 hours between the first public release of their formally verified microkernel and someone finding the first exploitable security vulnerability.

There's a lot of ongoing research in this area (I quite like F*, though it has some significant issues with proof reuse and usability of its error messages), but the tools for formal verification are currently as appropriate for large-scale modern software development as punch cards.

Comment Re:Yes, go ahead! (Score 1) 487

you instead use a language with some form of god damn formal verification

I think you're confusing Rust with Pony (which does have formal verification of its type system). Rust's type system has not been formally verified (let alone any of the implementation, though someone did verify its binary search implementation).
