
Comment Re:I don't get it (Score 1) 132

Why don't publishers put the ads in a section of the page that can allow the rest of the page to load and render before the ad loads and renders?

Because you could stop the loading once the content you wanted was rendered, thus skipping the ad.

So the pages are set up so that the ad loads and renders first.

Comment Re:Correct link to TRA (Score 1) 99

An alarming number of those hold for Chromium and they all stem from one core issue: Google developers do not understand how to design APIs. A lot of the bundled projects could be entirely separate repositories and shipped as shared libraries if they did, but gratuitous API churn means that they have to keep copies of things like v8 and Skia for Chrome and build the whole thing at once. It's fine to do the aggregate build thing if you want LTO, but it should be a performance optimisation, not a requirement of the software engineering workflow.

Comment Re:I disagree with some of these points (Score 2) 99

It depends a lot on the codebase. Codebases tend to accumulate cruft. Having people refactor them because their requirements are different to yours can help, as can having a project developed without key product ship dates as the driving force. The bigger barrier is culture though. It's really hard to have a group of developers that have been working on a project for 10 years in private move to developing in public. In the list, he actually gives different numbers of fail points, more for projects that were proprietary for longer than they were open, which makes a lot more sense than the summary in the 'article'.

The one that I disagree with is 'Your source builds using something that isn't GNU Make [ +10 points of FAIL ]'. I disagree for two reasons. The first is that it implies using GNU Make features, which likely means that you're conflating building and build configuration (which should earn some fail points of its own). The projects that I most enjoy hacking on use CMake and Ninja for building by default (CMake can also emit POSIX Makefiles that GNU Make can consume, but I take his point to mean that gmake should be the only command you need to build, so the CMake dependency would be a problem). LLVM still more or less maintains two build systems, though the autoconf + gmake one is slowly being removed in favour of the CMake one. The second is performance: if I make a small change, Ninja rebuilds it in less time than it takes gmake to work out that it has nothing to do when I make no changes at all.

I'd also disagree with 'Your code doesn't have a changelog' - this is a GNU requirement, but one that dates back to before CVS was widely deployed. The revision control logs now fulfil the same role, though you should still have something documenting large user-visible changes.

Comment Re:No kidding. (Score 1) 248

As for "web page", AJAX apps do exactly this

AJAX provides a mechanism for delivering the XML. How many popular web apps can you name that completely separate the back end and the front end and provide documentation for users to talk directly to the back end and substitute their own UI or amalgamate the data with that from other services? Of those, how many provide the data in a self-documenting form?

Comment Re:How soon until x86 is dropped? (Score 1) 146

There's no problem with the decoder. The A8 is an older chip. The A7 is an updated version of the A8: smaller, more power efficient due to various tweaks, and extended to support a newer version of the instruction set so that it can be used in big.LITTLE configurations with the A15. Oh, and it has SMP support, which the A8 lacked (though the A9 had it). The A8 is not faster than the A7.

Comment Re:Not the best summary... (Score 1) 189

As I was saying: If your kids are immunocompromised, they have a lot more to worry about than measles. That is, there are many other diseases they have to worry about besides the few we can vaccinate against.

Why do you keep talking about immunocompromised people? The measles vaccine, for example, only works in about 95% of cases; the other 5% are not immunised by it. They are not otherwise immunocompromised and, unless exposed to the measles virus, will have no problems.

Almost everybody in "the entire population" who is vaccinated is protected by the vaccine and hence not "vulnerable". So "the entire population" doesn't become more vulnerable.

If immunity drops below about 93% for measles, then the population no longer benefits from herd immunity. This means that anyone who is not immune (including those 5% who were vaccinated but didn't receive the benefit) is at a much higher risk of being infected. It also means more infections, which increases the probability of the disease mutating, which affects everyone. People who are infected then have compromised immune systems and so are likely to suffer from other infections, which can then spread to the rest of the population.
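The ~93% figure follows from the standard herd-immunity threshold formula, 1 − 1/R0. A quick sketch in C, using commonly cited R0 estimates for measles (the 12–18 range is my assumption, not from the comment):

```c
/* Herd-immunity threshold: the fraction of the population that must be
 * immune so that each infection causes, on average, fewer than one new
 * infection.  r0 is the disease's basic reproduction number. */
double herd_immunity_threshold(double r0)
{
    return 1.0 - 1.0 / r0;
}

/* Commonly cited R0 estimates for measles are roughly 12-18:
 *   R0 = 12 -> ~91.7% immune needed
 *   R0 = 15 -> ~93.3% immune needed
 *   R0 = 18 -> ~94.4% immune needed
 * which is where the ~93% figure comes from. */
```

Note that the threshold rises steeply with R0, which is why a highly contagious disease like measles leaves so little room for opt-outs.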

Comment Re:Not the best summary... (Score 4, Insightful) 189

Most vaccines are not 100% effective. You need a certain percentage of the population to be immune for herd immunity to mean that they have little chance of contracting the disease (and, if they do, a good chance of being an isolated statistic rather than the centre of an outbreak). It only takes a few percent opting out of the vaccine to eliminate the herd immunity and make the entire population more vulnerable.

Comment Re:Wow, end of an era. (Score 1) 146

When people talk about an n-bit CPU, they're conflating a lot of things:
  • Register size (address and data register size on archs that have separate ones).
  • Largest ALU op size
  • Virtual address size
  • Physical address size
  • Bus data lane size
  • Bus address lane size

It's very rare to find a processor where all of these are the same. Intel tried marketing the Pentium as a 64-bit chip for a while because it had 64-bit ALU ops. Most '64-bit' processors actually have something like a 48-bit virtual and 40-bit physical address space, but 64-bit registers and ALU ops (and some have 128-bit and 256-bit vector registers and ALU ops). The Pentium Pro with PAE had a 36-bit physical but a 32-bit virtual address space, so you only got 4GB of address space per process, but multiple processes could use more than 4GB between them. This is the opposite of what you want for an OS, where you want to be able to map all of physical memory into the kernel's virtual address space, and it is one of the reasons that PAE kernels came with a performance hit.
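To put concrete numbers on those widths, the arithmetic is just 2^n bytes for n address bits:

```c
#include <stdint.h>

/* Bytes addressable with n address bits: 2^n. */
uint64_t addr_space_bytes(unsigned bits)
{
    return (uint64_t)1 << bits;
}

/* Plugging in the widths mentioned above:
 *   48-bit virtual   -> 256 TiB per process
 *   40-bit physical  -> 1 TiB of RAM
 *   36-bit (PAE)     -> 64 GiB of RAM in total
 *   32-bit virtual   -> 4 GiB per process (the PAE limitation) */
```

The 36-vs-32 gap is exactly the PAE situation: 16 times more physical memory than any single process can map.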

Comment Re:How soon until x86 is dropped? (Score 1) 146

Videogame programmer here. It wasn't really a compiler optimization issue. There's no compiler on the planet that can perform high-level optimizations like that.

Compiler engineer here. The vectorisation for the Cell wasn't the hard part; the data management was. Autovectorisation and even autoparallelisation are done by some compilers (the Sun compiler suite was doing both before the Cell was introduced), and can be aided by OpenMP or similar annotations. If the Cell SPUs had been cache-coherent and had direct access to DRAM, then there's a good chance that a bit of investment in the compiler would have given a big speedup. The problem of deciding when to DMA data to and from the SPUs, and where you need to add explicit synchronisation on the PPU, was much, much harder. I've worked on a related problem in the context of automatic offload to GPUs, and it turns out to be non-computable in most nontrivial cases (it depends heavily on accurate alias analysis).
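The annotation-aided approach looks roughly like this minimal sketch (the function and parameter names are invented for illustration; it assumes a C compiler with OpenMP support, e.g. built with -fopenmp):

```c
#include <stddef.h>

/* With the OpenMP pragma, the programmer asserts that the loop
 * iterations are independent, so the compiler can vectorise (and
 * parallelise) without having to prove via alias analysis that the
 * arrays never overlap.  Without -fopenmp the pragma is simply
 * ignored and the loop runs serially, with identical results. */
void saxpy(float *restrict out, const float *restrict a,
           const float *restrict b, float alpha, size_t n)
{
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        out[i] = alpha * a[i] + b[i];
}
```

This is exactly the easy case: all the data is in coherent shared memory. On the Cell, the compiler would additionally have had to decide which slices of the arrays to DMA into each SPU's local store and when, which is the hard part described above.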

Comment Re:How soon until x86 is dropped? (Score 1) 146

MIPS and PowerPC are still huge in embedded. MIPS is used on a huge number of cheap routers and a lot of these are in dire need of a better OS than they ship with (and many of them ship with a hacked-up Linux). PowerPC is mostly big in automotive, but IBM still sells machines and is willing to keep funding a lot of the software support. The same goes for S/390: a big part of IBM's sales pitch there is that you can spin up Linux VMs on it easily and run the OS that you're used to. SPARC these days basically means Oracle appliances. You don't buy a SPARC machine if you want to run Linux, you buy one if you want to do the vertical integration thing with Oracle (i.e. Oracle arranges you vertically with your head downwards and shakes until all of the money is integrated with their wallet).

Comment Re:ran debian on sparc for over 10 years (Score 1) 146

Someone needs to develop the software. The difference between open source and proprietary software is that open source software is developed by and for the people who want to use it, while proprietary software is developed by people who want to sell it. Successful projects are ones where the people who want to use the software want it enough to fund its development.

You are in a maze of little twisting passages, all different.