Comment Re:100+F or 38+C typical annual high (Score 0) 62

Portland is cool, yes. But that's mostly down to the bookshops and tea shops. Temperature-wise it doesn't get "hot" per se, but it does get very humid. And the air is horribly polluted. I spent the time moving up there reading about dangerously high levels of mercury in the air, the deadly pollutants in the river, the partially dismantled nuclear reactor and its highly toxic soil (the reactor has since gone; the soil remains drenched in contaminants), the extremely high levels of acid rain from excessive car traffic (the cars being driven by maniacs) and the lethal toxins flowing through the rivers that have been built over to level out the ground.

In short, I landed there a nervous wreck.

Things didn't improve. I saw more dead bodies (yes, dead bodies) in Portland and heard more gunfire in my five years there than I heard in the suburbs of Manchester, England, in 27 years. You will find, if the archives let you get back that far, that I was almost normal before that time.

Comment Re:Sounds like the data center of the future, cir (Score 3, Interesting) 62

1955. The Manchester Computing Centre was designed to be one gigantic heat sink for their computers in the basement, using simple convection currents, ultra-large corridors and strategically-placed doors to regulate the temperature. It worked ok. Not great, but well enough. The computers generated enormous heat all year round, reducing the need for heating in winter. (Manchester winters can be bitingly cold, as the Romans discovered. Less so, now that Global Warming has screwed the weather systems up.)

The design that Oregon is using is several steps up, yes, but it rests on the same principles and uses essentially the same set of tools to achieve the results. Nobody quite knows the thermal properties of the laboratory where the Manchester Baby was built (and where Turing later worked), because it was demolished a long time ago. Bastards. However, we know where his successors worked, because that's the location of the MCC/NCC. A very unpleasant building, ugly as hell, but "functional" for the purpose for which it was designed. Nobody is saying the building never got hot - it did - but the computers didn't generally burst into flames, which they would have done if there had been no cooling at all.

Comment Re:Seems simple enough (Score 1) 168

Let's start with basics. Message-passing is not master-slave, because a message can be initiated in either direction. If you look at PCI Express 2.1, you see a very clear design - nodes at the top are masters, nodes at the bottom are slaves, masters cannot talk to masters, slaves cannot talk to slaves, and only devices with bus-master support can be masters. Very simple, totally useless.

Ok, what specifically do I mean by message passing? I mean, very specifically, a non-blocking, asynchronous routable protocol that contains an operation and a data block as an operand (think: microkernels, MPI-3). If you're clever, the operand is self-describing (think: CDF) because that lets you have overloaded functions.
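To make that concrete, here's a toy sketch in Python of the kind of thing I mean. The Message layout, the handler table and the method names are all invented for illustration; this is not any real fabric's API.

# Hypothetical sketch: a non-blocking, routable message whose operand
# describes its own type, so the receiver can overload on it (CDF-style).
from dataclasses import dataclass
from queue import SimpleQueue
from typing import Any, Callable

@dataclass
class Message:
    source: str       # routable address of the sender
    target: str       # routable address of the receiver
    operation: str    # what to do, e.g. "fft" or "write_block"
    dtype: str        # self-describing operand type, e.g. "f32[1024]"
    operand: Any      # the data block itself

class Node:
    """Any compute element: CPU, GPU, PIM block, NIC, disk controller."""
    def __init__(self, name: str, fabric: dict):
        self.name = name
        self.inbox: SimpleQueue = SimpleQueue()
        self.handlers: dict[tuple[str, str], Callable[[Message], None]] = {}
        self.fabric = fabric
        fabric[name] = self

    def send(self, msg: Message) -> None:
        self.fabric[msg.target].inbox.put(msg)   # fire and forget, never blocks

    def poll(self) -> None:
        while not self.inbox.empty():
            msg = self.inbox.get()
            # Overloading: the (operation, operand type) pair picks the handler.
            handler = self.handlers.get((msg.operation, msg.dtype))
            if handler:
                handler(msg)

Any node can originate a message, so there's no master/slave asymmetry baked in: a PIM block can send to the GPU exactly as the CPU can.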

The CPU is a bit naff, really. At least some operations can be pushed into a Processor In Memory, and you already have a fancy maths coprocessor that you call repeatedly (and expensively) to build up the functions that exist, in limited form, in FFTW, BLAS and LAPACK. Put all three, in optimized form, along with your basic maths operations, into a larger piece of silicon. Voila, massive speed boost.

But now let's totally eliminate the barrier between graphics, sound and all other processors. Instead of limited communications channels and local memory, have distributed shared memory (DSM) and totally free communication between everything. Thus, memory can open a connection to the GPU, the GPU can talk to the disk, and Ethernet cards can write directly to buffers rather than going via software (RDMA and OpenSockets concepts, just generalized).
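As a toy model of that free-for-all (again Python, again with invented names), here's a shared region that a NIC, a GPU and a disk controller all touch directly, with no CPU anywhere in the data path - RDMA generalized to everything:

# Hypothetical sketch only: SharedRegion, Nic, Gpu and Disk are made-up names.
class SharedRegion:
    """A window of distributed shared memory any node may map."""
    def __init__(self, size: int):
        self.mem = bytearray(size)
    def write(self, offset: int, data: bytes) -> None:
        self.mem[offset:offset + len(data)] = data
    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.mem[offset:offset + length])

class Nic:
    """Network card dropping received frames straight into DSM."""
    def receive(self, region: SharedRegion, offset: int, frame: bytes) -> None:
        region.write(offset, frame)              # no kernel, no CPU copy

class Gpu:
    """GPU pulling its input directly out of the same region."""
    def consume(self, region: SharedRegion, offset: int, length: int) -> bytes:
        return region.read(offset, length)

class Disk:
    """Disk controller persisting a region window on its own initiative."""
    def __init__(self):
        self.blocks: dict[int, bytes] = {}
    def flush(self, region: SharedRegion, offset: int, length: int, lba: int) -> None:
        self.blocks[lba] = region.read(offset, length)

region = SharedRegion(4096)
Nic().receive(region, 0, b"frame payload")       # NIC writes a buffer
Disk().flush(region, 0, 13, lba=42)              # disk snapshots it
print(Gpu().consume(region, 0, 13))              # GPU reads it; the CPU never appears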

You now have a totally open network, closer to Ethernet than PCI or HyperTransport in architecture, but closer to C++ or Java in protocol, since the data type determines the operation.

What room, in such a design, for a CPU? Everything can be outsourced.

Now, move on to Wafer Scale Integration. We can certainly build single wafers that can take this entire design. Memory and compute elements, instead of being segregated, are mixed. Add some pipelining and you have an arrangement that could blow most computer designs out of the water.

Extrapolate this further. Instead of large chunks of silicon talking to each other, since the protocol is entirely routable, get as close to individual compute elements as you can. Have the router elements take care of heat and congestion issues, rather than compilers. Since packet headers can contain whatever label information you want, you have a notion of processes with independent storage.

It doesn't (or shouldn't) take long to figure out that a true network architecture, rather than a bus, will let you move chunks of the operating system (which is just a virtual machine, anyway) into the physical computer, eliminating the need to run an expensive bit of simulation.

And this is marketspeak? Marketspeak for what? Name me a market that wants to eliminate complexity and abandon planned obsolescence in favour of a schizophrenic cross between a parallel Turing machine, a vector computer and a Beowulf cluster.

Comment Re:Seems simple enough (Score 1) 168

OpenCL is highly specific in application. Likewise, RDMA and Ethernet Offloading are highly specific for networking, SCSI is highly specific for disks, and so on.

But it's all utterly absurd. As soon as you stop thinking in terms of hierarchies and start thinking in terms of heterogeneous networks of specialized nodes, you soon realize that each node probably wants a highly specialized environment tailored to what it does best, but that for the rest, it's just message passing. You don't need masters, you don't need slaves. You need bus switches with a bit more oomph (they'd need to be bidirectional, support windowing and handle multipath routing where the shortest route may be congested).
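For the multipath part, the switch logic needn't even be clever - something like this sketch (topology and queue depths invented purely for the example) already does the job:

# Sketch of a switch picking between candidate routes by congestion.
# Paths are lists of link ids; queue depths come from the switches themselves.
PATHS = {
    ("gpu", "mem0"): [["l1", "l2"], ["l1", "l5", "l6"]],   # short path vs. detour
}
congestion = {"l1": 2, "l2": 9, "l5": 1, "l6": 1}          # current queue depths

def pick_route(src: str, dst: str) -> list:
    """Prefer the shortest path, but route around congestion."""
    def cost(path: list) -> int:
        # hop count plus the worst queue depth along the path
        return len(path) + max(congestion[link] for link in path)
    return min(PATHS[(src, dst)], key=cost)

print(pick_route("gpu", "mem0"))    # the longer but uncongested detour wins here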

Above all, you need message passing that is wholly target-independent since you've no friggin' clue what the target will actually be in a heterogeneous environment.

Comment Re:Can you fit that in a laptop? (Score 1) 168

Hemp turns out to make a superb battery. Far better than graphene and Li-Ion. I see no problem with developing batteries capable of supporting sub-zero computing needs.

Besides, why shouldn't public transport support mains? There's plenty of space outside for solar panels, plenty of interior room to tap off power from the engine. It's very... antiquarian... to assume something the size of a bus or train couldn't handle 240V at 13 amps (the levels required in civilized countries).

Comment Re:Yes, no, maybe, potato salad (Score 1) 294

Very true, but without it, we're doomed to reinventing wheels, redoing research and coming up with suboptimal solutions that are harder to program, harder to maintain and bloated with helper functions that would have come as standard otherwise.

Such a table could be written once and then updated every five years. Reading it simply amounts to feeding what you know you will need to be able to do into a parametric search routine; you then get a shortlist of languages well suited to the task.
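As a sketch of what I mean by a parametric search (the table entries below are placeholders, not a serious assessment of any language):

# Toy parametric search over a language/feature table.
TABLE = {
    "Ada":     {"strong_typing": True,  "hard_realtime": True,  "gc": False},
    "Haskell": {"strong_typing": True,  "hard_realtime": False, "gc": True},
    "C":       {"strong_typing": False, "hard_realtime": True,  "gc": False},
}

def shortlist(requirements: dict) -> list:
    """Return every language whose row satisfies all stated requirements."""
    return [lang for lang, feats in TABLE.items()
            if all(feats.get(key) == want for key, want in requirements.items())]

# "I know I need strong typing and hard real-time guarantees":
print(shortlist({"strong_typing": True, "hard_realtime": True}))    # ['Ada']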

Now it comes down to two simple questions: are your requirements ever stable enough, or clear enough, for such a shortlist to be useful? And do you risk overoptimizing on a set of criteria that may bear no resemblance to the reality of the problem, or to any solution the customer will sign off on?

If the answers are "yes" and "no" respectively, I'll start the list today.

Comment Seems simple enough (Score 1) 168

You need single-isotope silicon; silicon-28 seems best. That will reduce the number of defects, thus increasing the chip size you can use, thus eliminating chip-to-chip communication, which is always a bugbear. That gives you an effective performance increase.

You need better interconnects. Copper isn't the last word in conductivity: silver conducts better, and gold, though slightly worse, doesn't corrode. The quantities are insignificant, so price isn't an issue. Gold is already used to connect the chip to the outlying pins, so metal softness isn't an issue either. Silver is trickier, but probably solvable.

People still talk about silicon-on-insulator and stressed silicon as new techniques. After ten bloody years? Get the F on with it! These are the people who are breaking Moore's Law, not physics. Drop 'em in the ocean for a Shark Week special or something. Whatever it takes to get people to do some work!

SOI, since insulators don't conduct heat either, can be made back-to-back, with interconnects running through the insulator. That would shorten the distances to compute elements and thus effectively increase density.

More can be done off-CPU. There are plenty of OS functions that can be shifted into silicon, yet the specialist chips that would host them have barely changed in years, if not decades. If you halve the number of transistors required on the CPU for a given task, you have doubled the effective number of transistors from the perspective of the old approach.

Finally, if we dump the CPU-centric view of computers that became obsolete the day the 8087 arrived (if not before), we can restructure the entire PC architecture into something rational. That will redistribute demand for capacity, to the point where we can actually beat Moore's Law in aggregate for maybe another 20 years.

By then, hemp capacitors and memristors will be more widely available.

(Heat is only a problem for those still running computers above zero Celsius.)

Comment Less power?? (Score 1) 96

Dynamic power is governed by the number of state changes per second: it scales linearly with the switching frequency but with the square of the supply voltage (P ≈ αCV²f). There's only so much saving to be had from reducing the voltage, too, as you run into noise-margin and electron-tunnelling problems.
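Back-of-envelope, using that same P = alpha * C * V^2 * f relation (the alpha, capacitance and frequencies below are purely illustrative numbers, not measurements of any real chip):

# Dynamic CMOS power: linear in frequency, quadratic in supply voltage.
def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    return alpha * c_farads * v_volts ** 2 * f_hz

baseline    = dynamic_power(0.1, 1e-9, 1.0, 3e9)   # 0.30 W for this toy block
undervolted = dynamic_power(0.1, 1e-9, 0.8, 3e9)   # 0.19 W - the V^2 term helps...
overclocked = dynamic_power(0.1, 1e-9, 1.0, 6e9)   # 0.60 W - ...but 2x clock is 2x power
print(baseline, undervolted, overclocked)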

You are much, much better off saying "bugger that for a lark", exploiting tunnelling to the limit, switching to a lower-resistance interconnect, cooling the silicon below 0°C and ramping up clock speeds. And switching to 128-bit logic and implementing BLAS and FFT in silicon.

True, your tablet will now look like a cross between Chernobyl, a fridge-freezer and the entire engineering section of the Enterprise NCC-1701-D, but it will now actually have the power to play those 4K movies without lag, freezes or loss of resolution.

Comment Yes, no, maybe, potato salad (Score 2) 294

Modern programming languages are a fusion of older programming languages, with chunks taken out. Often, it's the useful chunks.

There is no table, that I know of, that lists all the significant features ("significant" depends on the problem, and who cares about solved problems?) versus all the paradigms versus all the languages. (Almost nothing is pure in terms of paradigm, so you'd need a 3D spreadsheet.)

Without that, you cannot know to what extent the programming language has affected things, although it will have done.

Nor is there anything similar for programming methodology, core skills, operating systems or computer hardware.

Without these tables, all conclusions are idle guesses. There's no data to work with, nothing substantial to base a conclusion on, nothing to derive a hypothesis or experiments from.

However, I can give you my worthless judgement on this matter:

1) Modern methodologies, with the exception of tandem/test first, are crap.
2) Weakly-typed languages are crap.
3) Programmers who can't do maths or basic research (or, indeed, program) are crap.
4) Managers who fire the rest of the staff then hire their girlfriends are... ethically subnormal.
5) Managers who fire hardware engineers for engineering hardware are crap.
6) Managers who sabotage projects that might expose incompetence are normal but still crap.
7) If you can't write it in assembly, you don't understand the problem.
8) An ounce of comprehension has greater value than a tonne of program listing.
9) Never trust an engineer who violates contracts they don't like.

Comment Dark matter and dark energy (Score 3, Interesting) 225

These theories have their own problems. As noted on Slashdot previously, neither appears to exist around dwarf globular clusters or in the local region of the Milky Way. It is not altogether impossible that our models of gravity are flawed at supermassive scales and relativistic velocities, and that corrections are needed which would produce the same effect currently attributed to this new kind of matter and energy.

Remembering that one should never multiply entities unnecessarily, one correction factor seems preferable to two exotic phenomena that cannot be directly observed by definition.

But only if such a correction factor is theoretically justified AND explains all related observations AND is actually simpler.

There is just as much evidence these criteria are true as there is for dark stuff - currently none.
