
Comment Re: Weirdly specific statement (Score 1) 55

What is the limiting factor? Buildup of CO2?

People need a certain amount of oxygen for their metabolism, so you have to carry that much. CO2 affects blood pH: too little and the body is too alkaline, too much and it's too acidic. So you need to maintain a precise amount of CO2 and remove the rest. The scrubbers in the Space Shuttle could regenerate the CO2-absorbent material after use, so they consumed power but not material.

Beyond this, you need to control temperature and humidity. The remaining requirements for crew survival, beyond atmosphere, are watering, feeding, and sheltering the crew, maintaining orientation, and maintaining a G-force envelope that doesn't injure the crew.
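
To put rough numbers on that, here's a back-of-the-envelope C++ sketch. The per-person rates are approximate NASA design figures (treat them as illustrative, not authoritative), and all the names are mine:

#include <cstdio>

// Rough life-support consumables budget for a crew. Approximate figures:
// ~0.84 kg of O2 consumed and ~1.0 kg of CO2 produced per person per day.
int main() {
    const double o2_kg_per_person_day = 0.84;
    const double co2_kg_per_person_day = 1.0;
    const int crew = 4;
    const int days = 10;

    // O2 must be carried; CO2 must be scrubbed (regenerable scrubbers
    // trade material mass for power, as on the Shuttle).
    double o2_to_carry = o2_kg_per_person_day * crew * days;
    double co2_to_scrub = co2_kg_per_person_day * crew * days;

    std::printf("O2 to carry:  %.1f kg\n", o2_to_carry);   // 33.6 kg
    std::printf("CO2 to scrub: %.1f kg\n", co2_to_scrub);  // 40.0 kg
    return 0;
}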

Submission + - September 19th SpaceX Launch will be visible across California, Nevada. (reddit.com)

Bruce Perens writes: The nighttime launch of a SpaceX Falcon 9 carrying Iridium satellites at 9:49 PM PST Monday, September 19th, from Vandenberg AFB SLC-4 is likely to be visible across California and in some Nevada locations. Although Vandenberg has a landing pad for the Falcon under construction, this will probably be a drone-ship landing, and some California observers might see two of the landing burns.

Comment Re:Next up O'Google (Score 1) 190

The problem is that they don't bring many jobs, and the ones they do bring are low-skill and low-pay. For example, Apple runs a call centre and a distribution centre in Ireland. Call-centre employees are just reading through a script; the distribution centre is moving boxes around. They're not bringing the engineering and R&D jobs that come with high salaries, which translate into higher income-tax revenues and knock-on benefits for local economies from increased spending.

Comment Re:Moronic Subject for an Article (Score 2) 190

Java isn't a bad language. It's a constrained language, but in general it's constrained in a good way. It may make it difficult to write the best solution, but it makes it impossible to write the ten worst solutions and makes a not-too-bad solution to any given problem the easiest one to write. It also strongly encourages modularity and provides tools for reducing the privilege of parts of a program, so that you don't need to trust all programmers in your address space equally. It's certainly not the best tool for all jobs, but if you have a complex business application that you want to support for a long time with relatively high programmer turnover, it's far from the worst tool.

Comment Re:It's not a popularity contest (Score 1) 190

That's a good reason for providing a C interface, but there's no reason not to use C++ (or Objective-C) inside your library. That said, if you provide a C++ interface that uses smart pointers and conveys explicit ownership semantics, then it's much easier to machine-generate interfaces for other languages (even for C) that care about memory management.
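
Here's a minimal sketch of the pattern, with made-up names: the implementation is C++, and the exported surface is plain C with explicit create/destroy ownership, which is exactly what a binding generator needs to know:

// widget.cpp - hypothetical library: C++ inside, C interface outside.
#include <memory>
#include <string>

// Internal C++ implementation; never visible to C callers.
class Widget {
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    const std::string &name() const { return name_; }
private:
    std::string name_;
};

// C interface: an opaque pointer plus create/destroy functions. The
// ownership rule is explicit: the caller must call widget_destroy.
extern "C" {
    struct widget;  // opaque to C callers

    widget *widget_create(const char *name) {
        return reinterpret_cast<widget *>(new Widget(name));
    }
    const char *widget_name(const widget *w) {
        return reinterpret_cast<const Widget *>(w)->name().c_str();
    }
    void widget_destroy(widget *w) {
        delete reinterpret_cast<Widget *>(w);
    }
}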

Comment Re:problems, lol (Score 4, Informative) 190

The real problem with C is that WG14 sat on its hands between 1999 and 2011. C11 gave us:

_Generic - useful for a few things, mostly tgmath.h, which I've rarely seen used in real code because C's type-promotion rules make it very dangerous. Still, it was quite embarrassing that, for 12 years, the C standard mandated a header that could not be implemented in standard C. Existing compilers had all provided a mechanism for doing the same thing (they had to, or they couldn't have implemented tgmath.h), but it was rarely used in real code. Oh, and the lack of type promotion in _Generic makes it annoyingly verbose: int won't be silently converted to const int, for example, so if you want to handle both then you need to provide both int and const int cases, even though it's always safe to use a const int case where an int is given as the argument.

_Static_assert - useful, but most people had already implemented a similar macro along the lines of:

#define ASSERT_CAT_(a, b) a##b
#define ASSERT_CAT(a, b) ASSERT_CAT_(a, b)
#define _Static_assert(x) static int ASSERT_CAT(_assert_failed_, __COUNTER__)[(x) ? 1 : -1]

This declares an array with either 1 or -1 elements, depending on whether x is true. If x is true, the array is optimised away; if x is false, you get a compile-time failure. (The two-level concatenation helper is needed so that __COUNTER__ expands before ## pastes the tokens; pasting __COUNTER__ directly would produce the literal token instead of a number.) _Static_assert in the compiler gives better error diagnostics, but doesn't actually increase the power of the language.

And then we get to the big contributions: threads and atomics. The threading APIs were bogged down in politics: Microsoft wanted a thin wrapper over what Win32 provided, everyone else a thin wrapper over what pthreads provided. Instead, we got an API based on a library from a small company that no one had ever heard of, and it contains a clusterfuck of bad design. For example, the timeouts assume that the real-time clock is monotonic. Other threading libraries fixed this in the '90s and provide timeouts expressed relative to a monotonic clock.
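
For contrast, here's roughly what a monotonic-clock timeout looks like in C++11 (a minimal sketch; the function and names are mine):

#include <chrono>
#include <condition_variable>
#include <mutex>

// Waiting against std::chrono::steady_clock: a wall-clock step (say, an
// NTP adjustment) can't turn a 100ms wait into an hour-long one.
bool wait_for_ready(std::condition_variable &cv, std::mutex &m, bool &ready) {
    using namespace std::chrono;
    std::unique_lock<std::mutex> lock(m);
    auto deadline = steady_clock::now() + milliseconds(100);
    // The predicate form also handles spurious wakeups for you.
    return cv.wait_until(lock, deadline, [&] { return ready; });
}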

The atomics were lifted from a draft version of the C++11 spec (and, amusingly, meant that C11 had to issue errata for things that were fixed in the final version of C++11). They were also not very well thought through. For example, it's completely permitted in C11 to write _Atomic(struct foo) x for any size of struct foo, but the performance characteristics will be wildly different depending on that size. It's also possible to write _Atomic(double) x, and any operation on x must save and restore the floating-point environment (something that no compiler actually does, because hardly anyone fully implements the Fortran-envy parts of even C99).

In contrast, let's look at what WG21 gave us over the same period:

Lambdas. C with the blocks extension (from Apple, supported by clang on all the platforms that clang supports) actually gives us more powerful closures, and even the part of blocks that doesn't require a runtime library (purely downward funargs) would have been a useful addition to C. Closures are really just a little syntactic sugar on a struct with a function pointer as a field, if you ignore the memory-management issues (which C++ did, requiring you to use smart pointers if you want a closure to persist longer than the function in which it's created). C++14 made them even nicer by allowing auto as a parameter type, so you can use a generic lambda, called from within the function, to replace small copied-and-pasted fragments.
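
A trivial illustration of both points (C++14; the names are mine):

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> xs = {3, 1, 4, 1, 5};

    // A closure is sugar for a struct with a call operator: `limit` is
    // captured as a field of that struct.
    int limit = 3;
    auto big = std::count_if(xs.begin(), xs.end(),
                             [limit](int x) { return x > limit; });

    // C++14 generic lambda: `auto` parameters let one local helper
    // replace small copied-and-pasted fragments.
    auto print = [](const auto &v) { std::printf("%d\n", (int)v); };
    print(big);        // 2
    print(xs.size());  // 5
    return 0;
}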

Atomics, which were provided by the library and not the language in C++11. Efficient implementations use compiler builtins, but it's entirely possible to implement them with inline assembly (or out-of-line assembly), and they can be implemented entirely in terms of a one-bit lock primitive if required for microcontroller applications, all within the library. They scale down to small targets a lot better than the C versions (which require invasive changes to the compiler if you want to do anything different from conventional implementations).
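
A minimal sketch of what the library-based design looks like from the outside:

#include <atomic>

std::atomic<int> counter{0};

void hit() {
    // A single instruction on mainstream targets, but the same library
    // interface could sit on a one-bit lock on a microcontroller.
    counter.fetch_add(1, std::memory_order_relaxed);
}

struct Pair { int a; int b; };
std::atomic<Pair> p;  // legal for any trivially copyable struct

bool got_hardware_atomics() {
    // Tells you whether the implementation fell back to a lock.
    return p.is_lock_free();
}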

Threads: Unlike the C11 mess, C++11 threads provide useful high-level abstractions. Threads that can be started from a closure (with the thread library being responsible for copying arguments to the heap, so you don't have the dance of passing a pointer to your own stack and then waiting for the callee to tell you that it's copied the data to its own stack). Futures and promises. Locks that are tied to scopes, so that you don't accidentally forget to unlock (even if you use exceptions).
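
A small sketch of all three (the names and numbers are mine):

#include <future>
#include <mutex>
#include <thread>

std::mutex m;
int shared_total = 0;

int sum_to(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++) total += i;
    return total;
}

int main() {
    // The thread library copies the argument (1000) for you: no
    // hand-rolled "wait until the callee has copied my stack data" dance.
    std::thread t([](int n) {
        int s = sum_to(n);
        std::lock_guard<std::mutex> lock(m);  // unlocks at end of scope,
        shared_total += s;                    // even on an exception
    }, 1000);

    // Futures: get a result back without sharing any state at all.
    std::future<int> f = std::async(std::launch::async, sum_to, 500);
    int answer = f.get();

    t.join();
    return answer == 125250 ? 0 : 1;
}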

Smart pointers. C++11 has unique_ptr and shared_ptr, for exclusive and shared ownership semantics. unique_ptr has zero run-time overhead (it compiles away entirely), but enforces unique ownership and turns a whole bunch of difficult-to-debug use-after-free bugs into simple null-pointer-dereferences. shared_ptr is thread safe (ownership in the presence of multithreading is very hard!) and also allows weak references.
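
A minimal illustration (names mine):

#include <memory>

struct Node { int value; };

int main() {
    // unique_ptr: exclusive ownership, zero run-time overhead; a
    // use-after-move is a null dereference, not a silent use-after-free.
    std::unique_ptr<Node> a(new Node{42});
    std::unique_ptr<Node> b = std::move(a);  // a is now null

    // shared_ptr: thread-safe shared ownership, plus weak references
    // that observe the object without keeping it alive.
    std::shared_ptr<Node> s = std::make_shared<Node>(Node{7});
    std::weak_ptr<Node> w = s;
    if (auto locked = w.lock()) {
        return locked->value;  // object is still alive here
    }
    return 0;
}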

C++14 and C++17 both made things even better. I've already mentioned generic lambdas in C++14; C++17 adds structured bindings (so you can return a structure from a function and, in the caller, decompose it into multiple separate return values). It also adds optional (a maybe type), any (a generic value type), and variant (a type-safe union) to the standard library. Variant is going to make a lot of people happy.
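
For example (C++17; names mine):

#include <cstdio>
#include <optional>
#include <variant>

// Structured bindings: return a struct, decompose it at the call site.
struct DivResult { int quotient; int remainder; };
DivResult divmod(int a, int b) { return {a / b, a % b}; }

// variant: a type-safe union; you can't read the wrong member.
std::variant<int, const char *> v = "hello";

int main() {
    auto [q, r] = divmod(17, 5);  // q = 3, r = 2
    std::printf("%d %d\n", q, r);

    if (auto *s = std::get_if<const char *>(&v))
        std::printf("%s\n", *s);  // only runs if v holds a string

    std::optional<int> maybe;     // a "maybe" type: currently empty
    std::printf("%d\n", maybe.value_or(-1));
    return 0;
}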

With C++11, the language moved from being one I hated and avoided where possible to my go-to language for new projects. With a rich regular-expression library, threads, smart pointers, and lambdas, it's now usable for things I'd traditionally have used a scripting language for (and an order of magnitude faster when crunching a load of data). In contrast, C has barely changed since the '80s. It still has no way of doing safe and efficient generic data structures (you either use macros and lose type safety, or you use void* and lose both type safety and performance). It still has no way of expressing ownership semantics and reasoning about memory management in multithreaded programs. And the standard library still doesn't provide any useful data structure more complex than an array (not even a linked list), whereas C++ provides maps and sets (ordered and unordered), resizable and fixed-size arrays, lists, stacks, queues, and so on.

C11 didn't really address parallelism, and it definitely didn't address reliability or security. Microsoft Research's Checked C provides some very nice features, but they initially prototyped them in C++, where they could implement them all purely as library features.

Comment Re:Is he going for irony, here? (Score 4, Informative) 204

In terms of Linux, it's not classical security through obscurity, it's security through diversity. One of the reasons Slammer was so painful a decade ago was that most institutions had a Windows monoculture: the time between one machine on your network being infected and every machine on your network being infected was about 10 minutes (a fresh Windows install on the network was compromised before it finished running Windows Update for the first time). If you'd had a network that was 50% Windows and 50% something else, it would only have infected half of your infrastructure, and you'd have been able to pull the plug on the Windows machines and start recovery. It's possible to write cross-platform malware, but it's a lot harder (though there's some fun stuff out of one of the recent DARPA programs: exploit code that is valid x86 and ARM code, relying on encodings that are nops on one architecture and valid instructions on the other, interspersed with the converse). Writing malware that can attack half a dozen combinations of OS and application software is difficult.

This is why Verisign's root DNS runs 50% Linux and 50% FreeBSD, and on top of those they run two or three different userland DNS servers, so an attack on a particular OS or a particular DNS server will only take out (at most) half of the machines. Even an attack on an OS combined with an independent attack on a DNS server would still leave them about a quarter functional, which would mean a bit more latency for Internet users but leave the system working.

Comment Re:AV only helps if you are bad (Score 5, Interesting) 204

You got lucky. There are two problems with most antivirus software:

Most of them still use system call interposition. That makes them vulnerable to a whole raft of time-of-check-to-time-of-use errors (a sketch of this class of race follows below), so the only part that actually catches things is the binary signature checking, and that requires you to install updates more frequently than malware authors release new versions - it's a losing battle.

They run some quite buggy code in high privilege. In the last year, all of the major AV vendors have had security vulnerabilities. My favourite one was Norton, which had a buffer overflow in their kernel-mode scanner. Providing crafted data to it allowed an attacker to get kernel privilege (higher than administrator privilege on Windows). You could send someone an email containing an image attachment and compromise their system as long as their mail client downloaded the image, even if they didn't open it. It's hard to argue that software that allows that makes your computer more secure.
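
To illustrate the first problem, here's the general shape of a time-of-check-to-time-of-use race (a generic POSIX sketch, not any particular product's code; looks_clean is a stand-in for a real scanner):

#include <fcntl.h>
#include <unistd.h>

// Stand-in for a signature-based scan; a real scanner would read the file.
static bool looks_clean(const char * /*path*/) { return true; }

// Classic TOCTOU shape: the path is checked, then separately used.
// Anything that swaps the file in between (a rename is enough) means the
// file that gets opened is not the file that was scanned.
int open_if_clean(const char *path) {
    if (!looks_clean(path))           // time of check
        return -1;
    // <-- attacker renames a malicious file into place here
    return open(path, O_RDONLY);      // time of use: may be a different file
}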

Comment Re: It's research... (Score 1) 145

Tee hee! Back in the day, one of the points I made to the old farts was that I had passed the 20 WPM exam and had my K6BP call to show for it, but refused to use the code on the air until the requirement was gone. Nobody spat at me or punched me out; the worst that ever happened was a badly behaved slim using my call, and a postcard from the ARRL observer who thought it was me.
