Comment Re:Inadequate Buffer (Score 1) 88

100 feet of buffer is inadequate. How the hell do you measure your AGL when you're flying? You either use a radar altimeter ($25K installed on an airplane worth $20K) or you use the baro altimeter, which has an acceptable calibration error, plus the local altimeter setting (atmospheric pressure) which has an error band, and there's error because you're not right over the reporting station.

Well, if you had a good GPS receiver and sufficiently detailed topographic maps on board you could also guesstimate AGL that way--but I agree that it's still a dubious and non-robust approach. And your radar altimeter doesn't have to run $25K if it only needs to work up to a few hundred feet and only be "hobbyist" or "drone" rated.
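The GPS-plus-terrain guesstimate can be sketched in a few lines. This is a hypothetical illustration, not avionics code: the altitudes and error figures are made up, and the only real assumption is that independent error sources combine in quadrature.

```python
import math

def estimate_agl(gps_alt_msl_ft, gps_vert_err_ft, terrain_elev_ft, terrain_err_ft):
    """Estimate height above ground level (AGL) from a GPS altitude (MSL)
    and a terrain-database elevation, with a combined error band.

    The two error sources are assumed independent, so they add in quadrature.
    """
    agl = gps_alt_msl_ft - terrain_elev_ft
    err = math.sqrt(gps_vert_err_ft**2 + terrain_err_ft**2)
    return agl, err

# A 500 ft MSL GPS fix (+/-30 ft vertical) over terrain the database
# puts at 150 ft (+/-20 ft):
agl, err = estimate_agl(500, 30, 150, 20)
```

Even with these optimistic numbers the combined uncertainty is around 36 ft, so measurement error alone eats a third of a 100-foot buffer.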

But really, forget measurement--that's probably not even the biggest problem. I suspect that it would be very technically challenging for these craft to physically maintain their permitted altitude. A good gust, an up- or down-draft, and your plus-or-minus 100 feet goes by in no time.

Comment Re:Slashdot summary, as usual, misses the point (Score 1) 103

OTOH, my fascist firewall blocks blog posts such as Callaway's, so I really appreciate the hop through an unblocked source. I take it from context that the article covers some stuff that isn't in the blog post as well.

You're thinking of this as an either-or situation, when it really isn't. Hyperlinks are cheap. There's no reason for the summary not to clearly say, e.g. "Here is the original blog post in its entirety, and here is an article which discusses some points from the blog post along with some other stuff." If they can't even manage that, then the link should at least clearly indicate that it isn't to the content described in the summary.

Instead, the Slashdot summary fails to link to the original blog post and implies misleadingly that the link in the summary actually does do so.

Comment Re:Correct link to TRA (Score 1) 103

An alarming number of those hold for Chromium and they all stem from one core issue: Google developers do not understand how to design APIs. A lot of the bundled projects could be entirely separate repositories and shipped as shared libraries if they did, but gratuitous API churn means that they have to keep copies of things like v8 and Skia for Chrome and build the whole thing at once. It's fine to do the aggregate build thing if you want LTO, but it should be a performance optimisation, not a requirement of the software engineering workflow.

Comment Re:I disagree with some of these points (Score 2) 103

It depends a lot on the codebase. Codebases tend to accumulate cruft. Having people refactor them because their requirements are different to yours can help, as can having a project developed without key product ship dates as the driving force. The bigger barrier is culture though. It's really hard to have a group of developers that have been working on a project for 10 years in private move to developing in public. In the list, he actually gives different numbers of fail points, more for projects that were proprietary for longer than they were open, which makes a lot more sense than the summary in the 'article'.

The one that I disagree with is 'Your source builds using something that isn't GNU Make [ +10 points of FAIL ]'. I disagree for two reasons. The first is that it implies using GNU make features, which likely means that you're conflating building and build configuration (which should gain some fail points). The projects that I most enjoy hacking on use CMake and Ninja for building by default (CMake can also emit POSIX Makefiles that GNU Make can use, but I take his point to mean that gmake is the only command you need to build, so the CMake dependency would be a problem). LLVM still more or less maintains two build systems, though the autoconf + gmake one is slowly being removed in favour of the CMake one. If I make a small change, it takes Ninja less time to rebuild it than it takes gmake to work out that it has nothing to do if I don't make any changes.

I'd also disagree with 'Your code doesn't have a changelog' - this is a GNU requirement, but one that dates back to before CVS was widely deployed. The revision control logs now serve the same purpose, though you should still have something documenting large user-visible changes.
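To make the point concrete, a GNU-style ChangeLog is mechanically derivable from revision-control metadata. A minimal sketch, assuming commit records have already been pulled out of something like `git log` into plain dicts (the sample commit is entirely made up):

```python
def format_changelog(commits):
    """Render commit records into GNU-style ChangeLog entries: a
    date/author header line followed by blank line and tab-indented,
    asterisk-bulleted change descriptions."""
    lines = []
    for c in commits:
        lines.append(f"{c['date']}  {c['author']}  <{c['email']}>")
        lines.append("")
        for change in c["changes"]:
            lines.append(f"\t* {change}")
        lines.append("")
    return "\n".join(lines)

# Hypothetical commit record, as you might extract from `git log`:
sample = [{
    "date": "2009-07-14",
    "author": "Jane Hacker",
    "email": "jane@example.org",
    "changes": [
        "configure.ac: Bump version to 1.2.",
        "src/main.c (main): Fix off-by-one in argument parsing.",
    ],
}]
print(format_changelog(sample))
```

The point being: if the VCS history is disciplined, the ChangeLog file is a build artifact, not something to maintain by hand.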

Comment Re:No kidding. (Score 1) 248

As for "web page", AJAX apps do exactly this

AJAX provides a mechanism for delivering the XML. How many popular web apps can you name that completely separate the back end and the front end and provide documentation for users to talk directly to the back end and substitute their own UI or amalgamate the data with that from other services? Of those, how many provide the data in a self-documenting form?
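For what "self-documenting" could mean here, consider a hypothetical API response that embeds a schema block describing each data field, so a third party could substitute its own UI or combine the data with other services without out-of-band documentation. The response shape below is invented purely for illustration:

```python
import json

# A hypothetical self-documenting response: every field in "data" is
# described in an accompanying "schema" block (meaning and units), so a
# client can interpret the payload without reading separate docs.
response = json.loads("""
{
  "schema": {
    "temp_c":  {"description": "Outside temperature", "unit": "celsius"},
    "updated": {"description": "Last refresh time", "unit": "unix epoch seconds"}
  },
  "data": {"temp_c": 21.5, "updated": 1246406400}
}
""")

# An alternative front end can render the data generically:
for field, value in response["data"].items():
    meta = response["schema"][field]
    print(f"{meta['description']}: {value} ({meta['unit']})")
```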

Comment Slashdot summary, as usual, misses the point (Score 5, Informative) 103

If we're going to talk about Callaway's Points of Fail, and create a link in the Slashdot summary that *looks* like it takes you to that list, then perhaps there should actually be a link to the list.

Callaway's original Points of Fail blog post.

You know, instead of the usual Slashdot way of pointing to an article wrapper that talks briefly about some of the points and then eventually links to the real list.

Comment Re:How soon until x86 is dropped? (Score 1) 146

There's no problem with the decoder. The A8 is an older chip. The A7 is an updated version of the A8: smaller, more power efficient due to various tweaks, and extended to support a newer version of the instruction set so that it can be used in big.LITTLE configurations with the A15 (oh, and with SMP support, which the A8 lacked, though the A9 had it). The A8 is not faster than the A7.

Comment Re:Not the best summary... (Score 1) 190

As I was saying: If your kids are immunocompromised, they have a lot more to worry about than measles. That is, there are many other diseases they have to worry about besides the few we can vaccinate against.

Why do you keep talking about immunocompromised people? The measles vaccine, for example, only works in about 95% of cases; the other 5% are not immunised. They have no other immune problems and, unless exposed to the measles virus, will have no issues.

Almost everybody in "the entire population" who is vaccinated is protected by the vaccine and hence not "vulnerable". So "the entire population" doesn't become more vulnerable.

If immunity drops below about 93% for measles, then the population no longer benefits from herd immunity. This means that anyone who is not immune (including those 5% who were vaccinated but didn't receive the benefit) is at a much higher risk of being infected. It also means more infections, which increases the probability of the disease mutating, which affects everyone. People who are infected then have compromised immune systems and so are likely to suffer from other infections, which can then spread to the rest of the population.
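The ~93% figure falls out of simple arithmetic on the basic reproduction number. A back-of-envelope sketch, with R0 = 15 taken as an assumption (measles estimates are commonly quoted in the 12-18 range):

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune so that each case
    infects fewer than one other person on average: 1 - 1/R0."""
    return 1 - 1 / r0

def immune_fraction(coverage, efficacy):
    """Actually-immune fraction given vaccination coverage and vaccine
    efficacy (both as fractions)."""
    return coverage * efficacy

# Assume measles R0 of 15:
threshold = herd_immunity_threshold(15)   # ~0.933, i.e. ~93%

# Even 98% coverage with a 95%-effective vaccine yields:
immune = immune_fraction(0.98, 0.95)      # ~0.931 -- already below threshold
```

Which is why a few percent of opt-outs matter so much: the margin between full coverage and the threshold is nearly zero to begin with.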

Comment Re:Not the best summary... (Score 4, Insightful) 190

Most vaccines are not 100% effective. You need a certain percentage of the population to be immune for herd immunity to mean that they have little chance of contracting the disease (and, if they do, a good chance of being an isolated statistic rather than the centre of an outbreak). It only takes a few percent opting out of the vaccine to eliminate the herd immunity and make the entire population more vulnerable.

Comment Re:Wow, end of an era. (Score 1) 146

When people talk about an n-bit CPU, they're conflating a lot of things:
  • Register size (address and data register size on archs that have separate ones)
  • Largest ALU op size
  • Virtual address size
  • Physical address size
  • Bus data lane size
  • Bus address lane size

It's very rare to find a processor where all of these are the same. Intel tried marketing the Pentium as a 64-bit chip for a while because it had 64-bit ALU ops. Most '64-bit' processors actually have something like a 48-bit virtual and 40-bit physical address space, but 64-bit registers and ALU ops (and some have 128-bit and 256-bit vector registers and ALU ops). The Pentium Pro with PAE had a 36-bit physical but 32-bit virtual address space, so you only got 4GB of address space per process, but multiple processes could use more than 4GB between them. This is the opposite of what you want for an OS, where you want to be able to map all of physical memory into the kernel's virtual address space; it's one of the reasons that PAE kernels came with a performance hit.
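The sizes quoted above are just powers of two, which is worth making explicit because the physical/virtual distinction is where the confusion usually starts:

```python
GiB = 2**30

def space_bytes(bits):
    """Size in bytes of an address space with the given number of address bits."""
    return 2**bits

# Plain 32-bit virtual addressing: 4 GiB per process.
per_process = space_bytes(32) // GiB    # 4

# PAE's 36-bit physical addressing: 64 GiB of RAM addressable in total,
# even though each process still sees only a 4 GiB virtual space.
total_phys = space_bytes(36) // GiB     # 64

# A typical '64-bit' x86 part: 48-bit virtual, 40-bit physical.
virt_tib = space_bytes(48) // 2**40     # 256 (TiB of virtual space)
phys_gib = space_bytes(40) // GiB       # 1024 (GiB, i.e. 1 TiB of physical)
```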
