
Comment: Re:We get cancer because we have linear DNA (Score 1) 173

by jd (#47726213) Attached to: New Research Suggests Cancer May Be an Intrinsic Property of Cells

That's easy to fix. If a cell had not just its existing error correction mechanisms but digital codes as well, then damage from mutagenic substances (of which there are a lot) and telomere shortening could be fixed. Well, once we've figured out how to modify DNA in situ. Nanotech should have that sorted soonish.

The existing error correction is neither very good nor very reliable. This is a good thing, because it allows evolution. You don't want good error correction between generations; you just want it within a single person over their lifespan, and you want it restricted so that it doesn't clash with retrotransposons and other similar mechanisms. So, basically, one whole inter-gene gap or one whole gene protected by one code. Doable. You still need cell death, though - intercept the signal and use a guaranteed method.
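For flavour, here's what "one gene protected by one digital code" could look like in the abstract - a toy Python sketch using a Hamming(7,4) code over a bit-encoding of the bases (the encoding and block size are illustrative; real genomic error correction would be nothing this simple):

```python
# Toy illustration of "one gene protected by one digital code":
# map bases to bits and protect 4-bit blocks with a Hamming(7,4) code,
# which can correct any single flipped bit per block.

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_BASE = {v: k for k, v in BASE_BITS.items()}

def hamming_encode(nibble):
    """Encode 4 data bits d1..d4 into 7 bits: p1 p2 d1 p3 d2 d3 d4."""
    d = [int(b) for b in nibble]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(code):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1
    return "".join(str(b) for b in (c[2], c[4], c[5], c[6]))

def encode_gene(seq):
    bits = "".join(BASE_BITS[b] for b in seq)
    return [hamming_encode(bits[i:i + 4]) for i in range(0, len(bits), 4)]

def decode_gene(blocks):
    bits = "".join(hamming_decode(b) for b in blocks)
    return "".join(BITS_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

gene = "ACGTTGCA"            # even length, so every block is a full nibble
blocks = encode_gene(gene)
blocks[1][3] ^= 1            # simulate a "mutation": flip one stored bit
assert decode_gene(blocks) == gene   # the code corrects it
```

One code word per block rather than per gene, but the principle scales: pick the block size to cover whatever unit you want protected.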

Comment: Exploit that which you cannot defeat (Score 1) 173

by jd (#47726171) Attached to: New Research Suggests Cancer May Be an Intrinsic Property of Cells

Here, in the year Lemon Meringue, we decided to solve the problem once and for all.

Instead of trying to kill cancer, we hijack its techniques. We start by placing nanocomputers in the vacuoles of each brain cell. These keep a continuous backup copy of the state of the brain up to death. Cancers disable the hard limit on cell duplication that cannot otherwise be bypassed. Using the techniques of cell-devouring macrophages, the cancer "consumes" the old cells and replaces them with new ones. It can't spread anywhere else, because replacement is the only way this cancer is designed to spread. Once the body has been fully replaced, the cancer is disabled. The brain is then reprogrammed from the nanocomputers' backup and the remaining cells are specialized by means of chemical signals.

This does result in oddly-shaped livers and three-handed software developers, but so far this has boosted productivity.

Comment: Re:It's not a kernel problem (Score 1) 688

by jd (#47726057) Attached to: Linus Torvalds: 'I Still Want the Desktop'

The free market didn't provide alternatives. The free market created Microsoft and the other monopolies. Adam Smith himself warned against an unregulated free market.

The majority do not create alternatives, either. The majority prefer things not to change. The familiar will always beat the superior in the marketplace.

Alternatives are created by small groups of people who are disreputable, commercially unproductive and at total odds with the consumer. These alternatives typically take 7-14 years to develop, and adoption typically peaks after another 7-14. By the 30th year after first concept, the idea will be "obvious" and its destiny an "inevitable consequence" of how things are done.

In reality, it takes exceptional courage and a total disregard for "how things are done". 7-14 years with guaranteed losses is not how the marketplace works. Even thinking along those lines is often met with derision and calls of "Socialism!" by the market. No, real inventors are the enemy of the free market.

If you want a Linux desktop, you must forgo all dreams of wealth. You must subject yourself to the abject poverty that is the lot of an inventor in a market economy, or move to somewhere that supports the real achievers.

Comment: The problem isn't X. (Score 1) 688

by jd (#47725933) Attached to: Linus Torvalds: 'I Still Want the Desktop'

The problem is corruption. OSDL were working on a Linux desktop environment, but a key (financial) figure in the organization worked hard to kill off success and left around the time the unit went bankrupt. Several organizations they've been linked to have either gone belly up or have suffered catastrophic failure.

I won't name names; there's no point. What matters is that such people exist in the Linux community at all - parasites who destroy good engineering and good work for some personal benefit of their own.

X is not great, but it's just a specification. People have built PostScript-based GUIs on top of it. It's merely an API that you can implement as you like (someone ported it to Java) and extend as you like (Sun did that all the time). The reference implementation is just that: a reference. Interoperability with just the set of functions used by GLib/GTK and Qt would give you almost all the key software.

Alternatively, write a GUI that has a port of those three libraries. You could use Berlin as a starting point, or build off Linux framebuffers, or perhaps use SDL, or write something unique. If it supports software needing those libraries, then almost everything in actual use will be usable and almost everything written around X in the future will also be usable. If what you write is better than X, people will switch.

Comment: Re:Nobody else seems to want it (Score 1) 688

by jd (#47725801) Attached to: Linus Torvalds: 'I Still Want the Desktop'

Binary drivers exist and are loadable so long as they are properly versioned.

Block drivers can always use FUSE.

Automatic builders can recompile a shim layer against each new kernel (or even the git tree), and automatic test harnesses or a repurposed Linux Test Project can validate the shim. You don't need to validate the driver for every kernel; if it's totally isolated from the OS and worked before, it'll keep working.

Automated distributors can then place the binaries in a corporate yum/apt repository.

What has a stable ABI got to do with it? It only gets in the way of writing clean code.

Comment: Why? (Score 1) 688

by jd (#47725719) Attached to: Linus Torvalds: 'I Still Want the Desktop'

The commands to the bus don't change.
The commands sent to the hardware don't change.
The internal logic won't change.

That leaves the specific hooks to the OS and the externally visible structures.

Nobody is insane enough to use kernel globals directly, and structures are subject to change without notice, so the externally-facing code will already be isolated.

If the hardware is available for any two of HyperTransport, PCI Express 2.x, VME/VXI or one of the low-power buses used on mobile hand-warmers, err, smart devices, then the actual calls to the bus hardware will be compartmentalized or routed through an OS-based abstraction layer.

So 95% of a well-written driver is OS-agnostic, and the remaining 5% is already isolated.

So either drivers are very badly written (which is a crime against sanity), or the hardware vendor could place the OS-dependent code in its own DLL at bugger-all cost to them. Since the OS-dependent code contains nothing trade-secret, they can publish the source for the shim at no risk. Since the shim isn't the driver, there's no implication of support for OSes they don't know or understand. It's not their problem what the shim is used for.

Everyone's happy. Well, happier. The companies don't get harassed, the Linux users get their drivers, Microsoft gets fewer complaints about badly-written drivers killing their software. It's not open, it's not supported, but it's good enough.
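As a structural sketch of that 95/5 split (in Python purely to show the layering - a real driver would be C with the shim in its own DLL/.so, and every name here is illustrative):

```python
# Sketch of the proposed split: the bulk of the driver is OS-agnostic
# and only touches the OS through a tiny, publishable shim interface.

from abc import ABC, abstractmethod

class OsShim(ABC):
    """The ~5% that is OS-dependent: nothing trade-secret in here."""
    @abstractmethod
    def map_registers(self, bus_address: int) -> bytearray: ...
    @abstractmethod
    def log(self, message: str) -> None: ...

class LinuxShim(OsShim):
    """One per OS; published as source, rebuilt per kernel."""
    def map_registers(self, bus_address):
        return bytearray(64)          # stand-in for an ioremap()-style call
    def log(self, message):
        print(f"[kern] {message}")    # stand-in for a printk()-style call

class DriverCore:
    """The ~95% that never changes: bus commands, hardware commands, logic."""
    def __init__(self, shim: OsShim, bus_address: int):
        self.shim = shim
        self.regs = shim.map_registers(bus_address)
    def reset(self):
        self.regs[0] = 0x01           # same command word on every OS
        self.shim.log("device reset")

drv = DriverCore(LinuxShim(), bus_address=0xF000_0000)
drv.reset()
```

Porting to a new OS then means implementing `OsShim` for it, never touching `DriverCore`.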

Comment: Re:100+F or 38+C typical annual high (Score 0) 62

by jd (#47698809) Attached to: The Data Dome: A Server Farm In a Geodesic Dome

Portland is cool, yes. But that's mostly down to the bookshops and tea shops. Temperature-wise, it doesn't get "hot" per se, but it does get very humid. And the air is horribly polluted. I spent the move up there reading about dangerously high levels of mercury in the air, the deadly pollutants in the river, the partially dismantled nuclear reactor and highly toxic soil (the reactor has since gone; the soil remains drenched in contaminants), the extremely high levels of acid rain due to excessive cars (which are driven by maniacs), and the lethal toxins flowing through the rivers that have been built over to level out the ground.

In short, I landed there a nervous wreck.

Things didn't improve. I saw more dead bodies (yes, dead bodies) in Portland and heard more gunfire in my five years there than I heard in the suburbs of Manchester, England, in 27 years. You will find, if the archives let you get back that far, that I was almost normal before that time.

Comment: Re:Souinds like the data center of the future, cir (Score 3, Interesting) 62

by jd (#47698749) Attached to: The Data Dome: A Server Farm In a Geodesic Dome

1955. The Manchester Computing Centre was designed to be one gigantic heat sink for their computers in the basement, using simple convection currents, ultra-large corridors and strategically-placed doors to regulate the temperature. It worked ok. Not great, but well enough. The computers generated enormous heat all year round, reducing the need for heating in winter. (Manchester winters can be bitingly cold, as the Romans discovered. Less so, now that Global Warming has screwed the weather systems up.)

The design that Oregon is using is several steps up, yes, but is basically designed on the same principles and uses essentially the same set of tools to achieve the results. Nobody quite knows the thermal properties of the laboratory where the Manchester Baby was built; it was demolished a long time ago. Bastards. However, we know where its successors were housed, because that's the location of the MCC/NCC. A very unpleasant building, ugly as hell, but "functional" for the purpose for which it was designed. Nobody is saying the building never got hot - it did - but the computers didn't generally burst into flames, which they would have done if there had been no cooling at all.

Comment: Re:Seems simple enough (Score 1) 168

by jd (#47690387) Attached to: Processors and the Limits of Physics

Let's start with basics. Message-passing is not master-slave, because it can be instigated in any direction. If you look at PCI Express 2.1, you see a very clear design - nodes at the top are masters, nodes at the bottom are slaves, masters cannot talk to masters, slaves cannot talk to slaves, and only devices with bus master support can be masters. Very simple, totally useless.

Ok, what specifically do I mean by message passing? I mean, very specifically, a non-blocking, asynchronous routable protocol that contains an operation and a data block as an operand (think: microkernels, MPI-3). If you're clever, the operand is self-describing (think: CDF) because that lets you have overloaded functions.
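A toy model of that kind of message - non-blocking, routable, an operation plus a self-describing operand - might look like this (pure illustration; none of these names come from MPI-3 or CDF):

```python
# Toy message-passing fabric: any node can address any other node,
# enqueues never block, and the operand's declared type lets the
# receiver overload handlers on it.

import queue
from dataclasses import dataclass

@dataclass
class Message:
    dest: str        # routable: any node can address any other node
    op: str          # the operation to perform
    dtype: str       # self-describing operand type, CDF-style
    payload: object

class Node:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.handlers = {}   # (op, dtype) -> function: overloading

    def on(self, op, dtype, fn):
        self.handlers[(op, dtype)] = fn

    def deliver(self):
        results = []
        while not self.inbox.empty():        # drain without blocking
            m = self.inbox.get_nowait()
            results.append(self.handlers[(m.op, m.dtype)](m.payload))
        return results

def send(nodes, msg):
    nodes[msg.dest].inbox.put_nowait(msg)    # non-blocking enqueue

nodes = {"gpu": Node("gpu")}
nodes["gpu"].on("sum", "f64[]", lambda xs: sum(xs))
nodes["gpu"].on("sum", "str[]", lambda xs: "".join(xs))  # overloaded
send(nodes, Message("gpu", "sum", "f64[]", [1.0, 2.0]))
send(nodes, Message("gpu", "sum", "str[]", ["a", "b"]))
print(nodes["gpu"].deliver())   # [3.0, 'ab']
```

The same `sum` operation does different work depending on what the operand says it is - which is the whole point of making operands self-describing.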

The CPU is a bit naff, really. At least some operations can be pushed into a Processor-in-Memory, and you have a fancy maths coprocessor that you're repeatedly (and expensively) calling to build the functions that already exist, in limited form, in FFTW, BLAS and LAPACK. Put all three, in optimized form, along with your basic maths operations onto a larger piece of silicon. Voila, massive speed boost.

But now let's totally eliminate the barrier between graphics, sound and all other processors. Instead of limited communications channels and local memory, have distributed shared memory (DSM) and totally free communication between everything. Thus, memory can open a connection to the GPU, the GPU can talk to the disk, and Ethernet cards can write directly to buffers rather than going via software (the RDMA and OpenSockets concepts, just generalized).

You now have a totally open network, closer to Ethernet than PCI or HyperTransport in architecture, but closer to C++ or Java in protocol, since the data type determines the operation.

What room, in such a design, for a CPU? Everything can be outsourced.

Now, move on to Wafer Scale Integration. We can certainly build single wafers that can take this entire design. Memory and compute elements, instead of being segregated, are mixed. Add some pipelining and you have an arrangement that could blow most computer designs out of the water.

Extrapolate this further. Instead of large chunks of silicon talking to each other, since the protocol is entirely routable, get as close to individual compute elements as you can. Have the router elements take care of heat and congestion issues, rather than compilers. Since packet headers can contain whatever label information you want, you have a notion of processes with independent storage.

It doesn't (or shouldn't) take long to figure out that a true network, rather than a bus, architecture will let you move chunks of the operating system (which is just a virtual machine, anyway) into the physical computer, eliminating the need to run an expensive bit of simulation.

And this is marketspeak? Marketspeak for what? Name me a market that wants to eliminate complexity and abandon planned obsolescence in favour of a schizophrenic cross between a parallel Turing machine, a vector computer and a Beowulf cluster.

Comment: Re:Seems simple enough (Score 1) 168

by jd (#47687241) Attached to: Processors and the Limits of Physics

OpenCL is highly specific in application. Likewise, RDMA and Ethernet Offloading are highly specific for networking, SCSI is highly specific for disks, and so on.

But it's all utterly absurd. As soon as you stop thinking in terms of hierarchies and start thinking in terms of heterogeneous networks of specialized nodes, you soon realize that each node probably wants a highly specialized environment tailored to what it does best, but that for the rest, it's just message passing. You don't need masters, you don't need slaves. You need bus switches with a bit more oomph (they'd need to be bidirectional, support windowing and handle multipath routing where the shortest route may be congested).

Above all, you need message passing that is wholly target-independent since you've no friggin' clue what the target will actually be in a heterogeneous environment.
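That switch behaviour - multipath routing where the shortest route may be congested - can be sketched as a shortest-path search over congestion-weighted hops (topology and numbers invented for illustration):

```python
# Congestion-aware routing sketch: a longer but idle path beats a
# shorter but congested one, because each hop's cost is inflated by
# its current congestion factor.

import heapq

def route(links, src, dst):
    """links: {node: [(neighbor, hop_cost, congestion_factor), ...]}"""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for nbr, cost, congestion in links.get(node, []):
            nd = d + cost * congestion   # congestion inflates the hop
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst              # walk back from destination
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

links = {
    "cpu":    [("switch", 1, 5.0), ("memory", 1, 1.0)],  # direct link congested
    "memory": [("switch", 1, 1.0)],
    "switch": [("gpu", 1, 1.0)],
}
print(route(links, "cpu", "switch"))   # ['cpu', 'memory', 'switch']
```

The one-hop route exists but costs 5.0 under load, so the router detours through memory at cost 2.0 - exactly what you want the silicon, not the compiler, to decide.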

Comment: Re:Can you fit that in a laptop? (Score 1) 168

by jd (#47687191) Attached to: Processors and the Limits of Physics

Hemp turns out to make a superb battery. Far better than graphene and Li-Ion. I see no problem with developing batteries capable of supporting sub-zero computing needs.

Besides, why shouldn't public transport support mains? There's plenty of space outside for solar panels, plenty of interior room to tap off power from the engine. It's very... antiquarian... to assume something the size of a bus or train couldn't handle 240V at 13 amps (the levels required in civilized countries).

Comment: Re:Yes, no, maybe, potato salad (Score 1) 291

by jd (#47685909) Attached to: The Technologies Changing What It Means To Be a Programmer

Very true, but without it, we're doomed to reinventing wheels, redoing research and coming up with suboptimal solutions that are harder to program, harder to maintain and bloated with helper functions that would have come as standard otherwise.

Such a table could be written once and then updated every five years. Reading it simply amounts to feeding what you know you'll need into a parametric search routine, which then gives you a shortlist of languages ideal for the task.
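A minimal sketch of such a parametric search, with a made-up feature table (the entries, capabilities and scores are all invented for illustration):

```python
# Parametric language search: filter a feature table by the
# capabilities you know you need, get back a shortlist.

LANGUAGES = {
    "Ada":     {"realtime": True,  "gc": False, "formal_spec": True},
    "Erlang":  {"realtime": False, "gc": True,  "hot_reload": True},
    "C":       {"realtime": True,  "gc": False, "formal_spec": False},
    "Haskell": {"realtime": False, "gc": True,  "formal_spec": True},
}

def shortlist(requirements):
    """Return every language whose table row satisfies all requirements."""
    return sorted(
        name for name, feats in LANGUAGES.items()
        if all(feats.get(k, False) == v for k, v in requirements.items())
    )

print(shortlist({"realtime": True, "gc": False}))   # ['Ada', 'C']
```

The hard part isn't the query, it's keeping the table honest - which is really what the two questions below are asking.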

Now, it comes down to two simple questions: are your requirements ever stable enough or clear enough for such a shortlist to be useful? Do you risk overoptimizing on a set of criteria that may have no resemblance to the reality of the problem or the reality of any solution the customer will sign off on?

If the answers are "yes" and "no" respectively, I'll start the list today.

Comment: Seems simple enough (Score 1) 168

by jd (#47685853) Attached to: Processors and the Limits of Physics

You need single isotope silicon. Silicon-28 seems best. That will reduce the number of defects, thus increasing the chip size you can use, thus eliminating chip-to-chip communication, which is always a bugbear. That gives you effective performance increase.

You need better interconnects. Copper isn't the best conductor available: silver beats it outright, and gold, though slightly less conductive than copper, doesn't corrode. The quantities involved are insignificant, so price isn't an issue. Gold is already used to bond the chip to outlying pins, so metal softness isn't an issue either. Silver is trickier, but probably solvable.

People still talk about silicon-on-insulator and stressed silicon as new techniques. After ten bloody years? Get the F on with it! These are the people who are breaking Moore's Law, not physics. Drop 'em in the ocean for a Shark Week special or something. Whatever it takes to get people to do some work!

SOI wafers, since insulators don't conduct heat well either, can be made back-to-back, with interconnects running through the insulator. This would shorten distances to compute elements and thus effectively increase density.

More can be done off-CPU. There are plenty of OS functions that could be shifted into silicon, yet the specialist chips for them have barely changed in years, if not decades. If you halve the number of transistors required on the CPU for a given task, you have doubled the effective number of transistors from the perspective of the old approach.

Finally, if we dump the cpu-centric view of computers that became obsolete the day the 8087 arrived (if not before), we can restructure the entire PC architecture to something rational. That will redistribute demand for capacity, to the point where we can actually beat Moore's Law on aggregate for maybe another 20 years.

By then, hemp capacitors and memristors will be more widely available.

(Heat is only a problem for those still running computers above zero Celsius.)

After all is said and done, a hell of a lot more is said than done.