
Comment Re:Torvalds is half right (Score 1) 449

Note well that historically, MOST parallel computers have profited the MOST from parallelizing totally linear tasks -- not by parallelizing the tasks themselves, but by running embarrassingly parallel workloads: many instances of completely independent code, or code so coarse grained that almost all of it runs as linear code with only infrequent communications with a "central" controller. One classic example is plain old multitasking by the operating system with code that doesn't make heavy use of bottlenecked resources (the reason most users see some small benefit from e.g. quad core vs single core processors: there is often enough work being done to keep 3-4 cores busy at least some of the time without much blocking, and this keeps the processor itself from thrashing while providing the illusion of parallelism through time-sliced multitasking). It works best if the cores have independent caches and contexts and if there is sufficient task affinity. Another is classic "master-slave" parallel computing, where e.g. a Monte Carlo computation might spawn N slaves, each one with its own random number generator seed, and run N "independent" samplings of some process that are only infrequently aggregated back to the master. Again, the characteristic is lots of nearly independent serial computation with only short, infrequent, non-blocking, asynchronous communications back to some collection point. Two programs that were often used to demonstrate the awesome advantages of scaling at the limits of Amdahl's law were parallel povray (rendering can be broken up into nearly independent subtasks in master-slave fashion) and a parallel Mandelbrot set generator/displayer (each point has to be tested independently, so whole subsets of the relevant parts of the complex plane could be distributed to different processors and independently computed, with the master collecting and displaying the results).
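A master-slave Monte Carlo of the kind described above is only a few lines of Python. Here is a minimal sketch (function names and sample counts are mine, purely illustrative) that estimates pi with N slaves, each carrying its own independently seeded RNG and reporting back to the master exactly once:

```python
# Master-slave Monte Carlo: N independent workers, each with its own
# seeded RNG, estimate pi by dart-throwing; the master only aggregates
# the hit counts at the very end.
import random
from multiprocessing import Pool

def sample_pi(args):
    seed, n = args
    rng = random.Random(seed)          # independent RNG per slave
    # count darts landing inside the unit quarter circle
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(n_workers=4, samples_each=100_000):
    with Pool(n_workers) as pool:
        hits = pool.map(sample_pi, [(seed, samples_each)
                                    for seed in range(n_workers)])
    return 4.0 * sum(hits) / (n_workers * samples_each)

if __name__ == "__main__":
    print(parallel_pi())  # ~3.14
```

The slaves never talk to each other, which is exactly why this scales: the only communication is one integer per worker at the end.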

Sadly (well, not really:-) modern processors are so damn fast you can get to the accessible bottom of the Mandelbrot set with almost no perceptible delay from rubber banding even with a single core, so the latter isn't so dramatic, but the point remains -- quite a lot of the work that can be done with multiple cores (arguably MOST of the work that can efficiently and easily be done with multiple cores) is trivial parallelism, not parallel programming. Instance 1 is the richest source of advantage for a parallel system, and tasks that will scale out to 1000 cores are almost certainly ONLY going to be trivially/embarrassingly parallel tasks, because Amdahl's law and the complexity of unblocking communications between subtasks is a royal bitch at 1000 processors no matter how you architect things. SETI at home, maybe. Solving a system of partial differential equations on a volume with long range interactions, not so much.

The fundamental problem with 2 and 3 is that they have to be hand coded. Really, pretty much period. Sure, you can get some advantage from using e.g. a parallel linear algebra library as a link step in a program that otherwise runs on serial resources, but typically the gains are limited and will not scale well, certainly not to anywhere near 1000 cores, even for case 2. To use 1000 cores for a tightly coupled parallel computation where every core talks to every other core per step of the computation -- well, that just isn't going to happen without an incredible (literally) boost in interprocessor communication speed, a reduction in communication latency, and the elimination of resource blocking at both the hardware and kernel levels. The problem at some point becomes NP-complete (I suspect, pending of course the issue of whether P = NP, etc.), and simply working out ways for the communications to proceed in a self-avoiding pattern to eliminate collisions or delays due to asynchronicity is itself a "hard problem", forget the problem you're actually trying to solve.
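Amdahl's law itself is one line, and plugging in numbers shows why 1000 cores is so unforgiving. A quick sketch (the serial fractions are illustrative):

```python
# Amdahl's law: speedup S(N) = 1 / (s + (1 - s)/N), where s is the
# fraction of the work that must run serially on N cores.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.0, 0.01, 0.1):
    # e.g. s = 0.01 on 1000 cores gives only ~91x, not 1000x
    print(s, round(amdahl_speedup(s, 1000), 1))
```

Even a 1% serial fraction caps a 1000-core machine at roughly 91x, and 10% caps it near 10x, no matter how the rest is architected.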

So I'm largely with Linus on this one. Advantages to parallelism at the OPERATING SYSTEM level probably saturate in almost all contexts long before one can put 1000 cores to use. Even if you provision every core with its own L1, its own L2, its own L3, even if you have unprecedented mastery at the hardware level of locking in parallel pathways to main memory, you have some serious hardware limitations and economic tradeoffs to consider. Here is a nice little article outlining some of the tradeoffs between comparatively expensive L1 and less expensive L2:

http://www.extremetech.com/ext...

I can believe that 64 cores can still be manipulated in ways that are beneficial, in server environments where there are likely to be 64 semi-independent threads (that only rarely seek to access shared blocking resources) that can be kept on-chip and in both chip context and cache. At 128 cores I start to get very skeptical. By 1024 cores, there just isn't going to be MUCH benefit except for a tiny fraction of the work people might buy CPUs to do, and the effort of writing sufficiently efficient code to put all those cores to work is itself a daunting, expensive thing to consider.

rgb

Comment Re:Morons that cannot do math.... (Score 3, Interesting) 363

Agreed that "greenies" aren't the only ones making billions off of CO2 hysteria -- see the Koch brothers in the article below:

http://www.huffingtonpost.com/...

but there are lots of people seeking to make money in the carbon and carbon trading game, and IIRC Gore is indeed one of them. A description of the billions at play already can be found here:

http://en.wikipedia.org/wiki/C...

where the number given is "60 billion dollars" which certainly counts as "billions" in any marketplace where people make a margin on all trades. The bulk of the people making money of of CO2 hysteria are, however, not Greenpeace volunteers or the like -- they are the same extremely wealthy individuals and companies who both "run civilization" and incidentally own the big energy companies worldwide. If you looked at where directly invested money intended to combat CO_2 goes (e.g. research money) a substantial fraction goes directly to the energy industry in the form of research grants, another substantial fraction goes to the energy industry in the form of subsidies. But the real payoff for the big carbon-based energy companies is, paradoxically, in the artificial inflation of carbon based energy costs to the consumer. Again, power companies make marginal profits, generally at what amounts to a fixed (publicly regulated) margin. The only way for them to increase profits at fixed production is to raise prices. The only way to raise prices in a world where coal is plentiful and cheap is to create an artificial scarcity, which has the added benefit of stretching out the lifetime of profitability of the resource to the owner. I would argue -- although it is difficult to put specific numbers to this since it is difficult to see just what fraction of the cost of a kilowatt-hour is directly attributable to the global warming hysteria, and because the media is strangely reluctant to follow the money (perhaps because they are predominantly owned by the same wealthy people, perhaps because they profit from things that rouse strong feelings, like an impending global catastrophe) -- that the increased marginal profits to the global energy industry due to catastrophe-driven price increases dwarfs all other money being made in association with the hysteria and is the great invisible elephant in the debate.

As Br'er Rabbit once said, "Don't throw me into that briar patch, oh please no no no..."

I am, however, curious as to why you'd ask for citations and then refer to the billions being made off of "denying" climate change by (specifically) two large oil companies. Surely you understand that oil companies are nearly irrelevant to global warming, accounting for a small fraction (around 13%) of greenhouse emissions relative to coal-fired electrical plants, industrial energy production, etc:

http://en.wikipedia.org/wiki/L...

and

http://www.epa.gov/climatechan...

The oil companies are perfectly happy to skim billions off of the artificial renewables industry that has been created by the hysteria, and until this year have been both investing and making billions from it. But the bottom has apparently fallen out of this:

http://www.eenews.net/stories/...

very likely driven by the increased supply of oil and gasoline that is reflected in oil prices dropping by nearly 1/3 this year. They are suffering far more from a SURPLUS of oil that leads to low prices and hence a serious hit on their profits than they ever suffered from global warming hysteria, in a world where demand is nearly completely inelastic and generally growing. It also appears that the profitability of sustainables is taking a (in my opinion) well-deserved hit as it becomes clearer that a number of the technologies are simply not ready for prime time and can be broadly implemented only with a substantial subsidy.

Following the money is a very good idea, actually -- quite independent of what you think about global warming and whether or not a total climate sensitivity under (possibly well under) 2 C is likely to lead to catastrophe, especially a futures catastrophe relative to the ongoing right-now catastrophe of misdirected economic resources in a world where 1/3 of the population live in a state of 18th century abject energy poverty around the globe.

Personally, after "Follow the money" I echo most of the "Do the math" sentiments put down above. Planting trees is not the answer. However, the EPA does rank "Land Use Change and Forestry" at 17% and "Agriculture" at 14%, both higher than transportation at 13%, so perhaps there are greater absolute reductions in CO_2 production to be reaped in altering the ways we interact with the biosphere than there are in picking on Exxon, much as it is the company everybody loves to hate except when they are filling their car.

In a few years, solar technology, energy storage technology, and possibly energy transportation technology will have all improved to the point where trying to armtwist their adoption is no longer necessary. The key parameters are the amortization schedule on the investment (which is still too long in most locations); the fact that without cheap, energy-dense storage OR a globe-spanning energy transportation network that can reach from the sunlit tropics to the sun-poor arctic circle, solar is never going to replace fuel-based energy production (where I'm deliberately non-specific about the fuel -- nuclear fission, nuclear fusion (if Lockheed-Martin is correct in their assertion that they've got fusion licked and will go commercial within five years), or coal); and reliability (no point in investing if the cells are poorly made and fail at a rate that eats ROI post-amortization). Trying to force adoption of an immature technology by fiat is both unlikely to work and actively counterproductive, as it encourages profiteering on the subsidies and passes artificially high prices back to consumers, where JUST WAITING for the technology to mature will eventually make it a no-brainer that is adopted as fast as companies can adopt it, not because it is "green" but because THEY MAKE THE MOST MONEY that way.

In many places, solar is right on that margin already. I could probably put rooftop solar on my own house (in North Carolina) and recover my amortized investment in around 15 years, a time that (again, paradoxically) is stretched out by the fact that I already have super insulation, energy efficient windows, and high efficiency heaters and AC, so that the dollar cost of electricity per month is barely sufficient to pay for the money needed to install the system. I'd be a lot happier if I could drop the initial investment by a factor of two, cut my amortization time down to 5-10 years at most, and have a lot more assurance that the solar cells used are going to be warranted reliable for 20 years or more with minimal maintenance. Regardless, inside 10 to 20 years the threshold is going to be passed that makes this a no-brainer pretty much everywhere in the lower temperate through the tropical zones, and this would have been the case even if nobody had ever hypothesized a CO2-climate connection. Power companies, of course, can get better economies of scale, and Duke Power is installing solar grids all over the state already; it will be interesting to see if they cut the rooftop solar market off at the knees.
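The amortization arithmetic in the paragraph above is simple enough to sketch; every dollar figure here is an illustrative assumption of mine, not an actual quote:

```python
# Simple (interest-free) payback time for a rooftop solar system:
# up-front install cost divided by the annual electric-bill savings.
def simple_payback_years(install_cost, monthly_savings):
    return install_cost / (12.0 * monthly_savings)

# e.g. a hypothetical $15,000 system offsetting an $85/month bill:
print(round(simple_payback_years(15_000, 85), 1))  # ~15 years
# halve the install cost and the payback time halves too:
print(round(simple_payback_years(7_500, 85), 1))   # ~7.4 years
```

It also shows the paradox in the text: cut the monthly bill (through insulation and efficiency) and the payback time stretches out, since the denominator shrinks.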

That's the last thing worth pointing out. Economies are complex entities. An investment that would have been solidly profitable back when my house was energy inefficient and energy costs were high is unprofitable if either energy costs drop or I increase the energy efficiency of the house. If Lockheed-Martin DOES have fusion licked -- and they aren't the kind of company to make egregious public claims, as they'll be punished in the marketplace if they make them and they don't pan out -- then every single cent invested in renewable energy production due to CO_2 hysteria is going to be a dead loss, a trillion dollar global hit on the economy, even as everybody benefits enormously in the long run. If we FAIL to develop an economically viable truly global carbon-free energy grid, be it based on PV solar, LFTR, fusion, whatever, then if the Bern model for CO_2 is correct, there is no foreseeable point in the future where we will not be adding more CO_2 to the atmosphere based on current technologies or models. RCP8.5 is probably unlikely and always has been, and RCP4.5 or RCP6.0 are likely to lead to at most around 2 C warming by the late 21st century given the observation that a total climate sensitivity of around 1.8 C best fits HadCRUT4 from 1850 to the present. But there are huge, nay, absolutely enormous uncertainties, and markets hate uncertainties. These are real risks, and have real impacts on real lives.

Even if in the long run fusion is an enormous blessing and kicks us into a type 2 civilization likely enough to last more than a few hundred more years, it could have a devastating effect on the global economy in the short run. The same is true of many other enabling or critical technologies. LED lights are great, but insanely expensive. If their cost dropped to $1 per 100-watt-equivalent bulb, and if one could build LED based lights as bright as existing streetlights, it would change everything -- my household energy footprint would go down by another 1/3 (making it even harder to amortize solar, incidentally), and humanity could stop pissing away carbon by the ton lighting the night even when there are no humans present to see or use the light. Super batteries would change everything, where by super battery I mean something with energy storage density comparable to gasoline or coal, reversible, inexpensive, with no memory effect, easily manufactured or remanufactured. We are getting remarkably close to this already -- IIRC within a factor of 10, working on within a factor of 3. Give me a thousand dollar pile that fits in a cubic meter of my back yard and can hold and deliver 100+ kW-hours and solar is an instant no-brainer, not just for me but for nearly everybody. A solar cell that costs $0.25/watt installed and that will last for 50 years. High (enough) temperature superconductors. Affordable electronics smart enough to turn on lights in your house and heat your house and cool your house to the precise extent that you actually use the light, the heat, the cool air. In the long run, MANY of these technologies are likely to mature. If they are allowed to mature at their own pace, the economy will probably bend around them without breaking, and any of them will have a large impact on at least some of the carbon dioxide production worldwide.
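The "within a factor of 10" storage claim is easy to sanity-check with ballpark numbers (all of the figures below are my own rounded assumptions, not from the post): gasoline carries roughly 12.9 kWh/kg chemically, but a heat engine delivers only a quarter or so of that to the wheels, versus roughly 0.25 kWh/kg for present lithium-ion cells:

```python
# Rough effective energy-density gap between gasoline and Li-ion,
# using ballpark figures (assumptions, rounded for illustration).
GASOLINE_KWH_PER_KG = 12.9   # chemical energy content
ENGINE_EFFICIENCY = 0.25     # fraction a heat engine delivers
LI_ION_KWH_PER_KG = 0.25     # typical current cell density

effective_gasoline = GASOLINE_KWH_PER_KG * ENGINE_EFFICIENCY
print(round(effective_gasoline / LI_ION_KWH_PER_KG, 1))  # ~13x gap
```

So on a delivered-energy basis the gap is order of 10x, consistent with the claim; on raw chemical content it is closer to 50x.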

rgb

Comment We gotta get NASA to stop smoking crack.... (Score 1) 200

and then writing science fiction. I don't even disbelieve what they say, it's just being said without any sort of consideration of either the cost or the benefit. Hey, I can write novels about mining the asteroid belt, extracting He3 from moon rock for fusion fuel, building orbital space cities, and settling the moon too, except that Heinlein and many others already did most of this, and all of their novels presuppose some method of getting around that doesn't cost a gazillion dollars and thousands of megajoules per kilogram moved. With that kind of cost, why hire crack smokers to write SF? There is a lot of work a lot closer to home that is ALREADY too expensive for the benefit.

In the meantime, time to write another SF novel: "The Floating Cities of Venus". Yeah, got a nice ring to it.

rgb

Comment Re:Efficiency??? (Score 1) 103

You're probably right. Although I've had a standard transmission car go through 100% of its clutch plate, and they are not cheap to replace. But what is? And how many cars have standard transmissions any more? And of those, how many go through a whole clutch plate before they die from some other cause? Toyota's magnetic regenerative braking system suggests that one "can" mass produce the requisite magnetic coupling, but there probably isn't a compelling reason to do it in this case.

Comment Re:Efficiency??? (Score 4, Insightful) 103

The other point being that it could be designed only to replace the kinetic friction parts of a transmission, the parts that synchronize the system. The gearing itself can probably still be mechanical. Not having to replace clutch plates, for example, might be a nice and relatively easily doable thing.

Comment Re:Sigh. Or rather Sci...Fi (Score 1) 153

Precisely! In fact, I'm thinking of rewriting Plato's Republic except replacing all instances of Philosophers with Science Fiction Writers. Think of the advantages! Instead of neurosing over healthcare and global warming we can have replacement organs, dinosaurs and space aliens! We can build our own space habitat! The Stars are Ours! No longer will mankind be limited by silly little things like physical law and economics, not with SF writers in control.

Best of all, SF writers tend to be pretty nerdy and (if we carefully exclude the horror contingent and zombie squad) inclined towards epic-heroic monumental happy endings. Life could never be boring with them in charge.

On to the asteroids! Don't worry about cost or whether or not the risks are worth the benefits! Damn the space torpedos! So what if another million or two of small children die of easily preventable causes this year! It helps reduce the rate of population growth, and how can that be a bad thing?

rgb

Comment Sigh. Or rather Sci...Fi (Score 3, Informative) 153

Science fiction authors have totally solved this problem a zillion different ways. They all share certain features. First, you go to the asteroid. Second, you set up some sort of mass driver or ion drive on the asteroid, ideally one that uses solar electricity or heat and not imported fuel, but if you don't mind a bit of radioactivity, propulsion by nuke is OK (Orion).

Depends on the mass of the asteroid as well, and how long you want to wait to get it home, and how much of it you want to have left when you get there. If you don't mind waiting a VERY long time, you could even use an angled light sail for propulsion. Third, you drive it home, or rather, have your fully automated computer tools do it for you. Fourth, you get it into Earth Orbit and then use it to threaten the hegemony running Earth, insisting that they send you dancing girls and exotic foods or you'll drop it on their heads -- it makes you way more money than actually selling the metal.

Optionally, you can have your robots smelt the asteroid in place first, using large mirrors to concentrate solar energy to melt the asteroid rock into slag plus metal, perhaps even collecting the slag (with a thin metal coating) to use in your linear accelerator or solar heated rocket as reaction mass. Some asteroids are really comet heads and might be covered with solid gases and ice and might support making real fuel on the spot as well. And fusion would no doubt shift the plan a bit as well.

But the final stage is always to drop them on Earth, not use them for good. Otherwise there isn't any real plot. Sometimes they don't even bother dropping them per se, they just fall by accident. But nobody can resist an umpty teraton-of-TNT explosion: not invading space aliens, not Dr. Evil, not the asteroid mining company's board of directors, not even the grizzled old asteroid miner whose sainted mother was put out onto the street to starve during the housing riots of 2057.

rgb

Comment Re:Fucking magnets, how do they work? (Score 3, Informative) 26

You mean, as in "read a physics textbook"?

Seriously. Depending on how much physics you've already studied, the right place to start will vary. A passable (free) intro is in my free online physics textbook http://www.phy.duke.edu/~rgb/C..., or wikipedia articles. A good intermediate treatment might be Griffiths' Introduction to Electrodynamics. If you want the pure quill uncut stuff, J. D. Jackson's Classical Electrodynamics is excellent, but it is not for the faint of heart or the wussy of PDE-fu.

In a nutshell: parallel currents of electric charge attract; antiparallel currents repel; changing currents radiate electromagnetic energy; and there are electrostatic forces happening in there somewhere too, in the cases where the currents are produced by unbalanced moving charge. Oh, and there is a fair bit of twistiness to the magnetic fields (called "curl") and forces, and the currents in question in "magnets" (or the general magnetic susceptibility of materials) tend to be non-dissipative (quantum) nuclear, atomic, or molecular circulations of charge, not Ohm's law type currents in a resistor. Ferromagnets in particular are what is being referred to, and they are characterized by long range order and a "permanent" magnetization in the absence of an external field below a certain temperature.
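The parallel-currents-attract statement has a famous closed form: for two long straight wires a distance d apart, the force per unit length is F/L = mu_0 I_1 I_2 / (2 pi d), attractive for parallel currents and repulsive for antiparallel ones. A quick sketch:

```python
# Force per unit length between two long parallel currents:
# F/L = mu0 * I1 * I2 / (2 * pi * d). The sign of I1*I2 encodes
# attraction (parallel) vs repulsion (antiparallel).
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    return MU0 * i1 * i2 / (2 * math.pi * d)

# Two 1 A currents 1 m apart: the classic 2e-7 N/m figure from the
# old SI definition of the ampere.
print(force_per_length(1.0, 1.0, 1.0))
```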

Hope this fucking helps:-)

rgb

Comment Re:Not exactly (Score 3, Interesting) 161

Besides, the invention of accelerators order of 12" in size is very, very old news. The Betatron:

http://physics.illinois.edu/hi...

is, as one can see, order of a foot in diameter and could produce electrons at order of 6 MeV in 1940. Yes, that is actually before the US entered WWII and long before the invention of the synchrotron. That is gamma ~12, or v ~ 0.997 c. So if the top presentation were at all relevant to TFA it would actually be boring. One might safely conclude that it is wrong and boring.

The betatron was damn near the first particle accelerator truly worthy of the name, and was just about exactly 12" in diameter (a bit larger than that including the frame for the magnets etc) as one can clearly see in the second photo on this page if not the first.
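The gamma ~12 and v ~ 0.997c figures above check out; a two-line verification using the electron rest energy (0.511 MeV):

```python
# Relativistic kinematics for a 6 MeV electron:
# gamma = 1 + KE/mc^2, beta = sqrt(1 - 1/gamma^2)
import math

MC2 = 0.511  # electron rest energy, MeV

def gamma_and_beta(kinetic_mev):
    gamma = 1.0 + kinetic_mev / MC2
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return gamma, beta

g, b = gamma_and_beta(6.0)
print(round(g, 1), round(b, 4))  # 12.7 0.9969
```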

rgb

Comment Re: How about we hackers? (Score 4, Insightful) 863

Yeah, I've done a fair bit of time as sysadmin of several networks AND enjoy the cool stuff that comes with change and improvement in hardware and software over time.

Systemd no doubt will have growing pains associated with it, but I still remember the "growing pains" associated with kernel 2.0 (the first multiprocessor kernel) and issues with resource locking and ever so much more. Anybody want to assert that this wasn't worth it, that "single core/single processor systems were good enough for my granddad, so they are good enough for me"? Server environment or not?

Decisions like this are always about cost/benefit, risk, and long term ROI. And the risks are highly exaggerated. I'm pretty certain that one will be able to squeeze a system down to a slowly varying or unvarying configuration that is very conservative and stable as a rock, even with systemd. I -- mostly -- managed it with kernels that "could" decide to deadlock on some resource, and when the few mission-critical exceptions to this appeared, they were aggressively resolved on the kernel lists and the fixes rapidly appeared in the community. The main thing is locking down the server configurations to avoid the higher risk stuff, and aggressive pursuit of the problems that arise anyway, which is really no different than with init, or with Microsoft, or with Apple, or with BSD, or...

But look at the far-side benefits! Never having to give up a favorite app as long as some stable version of it once existed? That is awesome. Dynamic provisioning, possibly even across completely different operating systems? The death of the virtual machine as a standalone, resource-wasteful appliance? Sure, there may well be a world of pain between here and there, although I doubt it -- humans will almost certainly keep the pain within "tolerable" thresholds as the idea is developed, just as they did with all of the other major advances in all of the other major releases of all of the major operating systems. Change is pain, but changes that "wreck everything" are actually rare. That's what alpha/beta/early implementation are for, and we know how to use them to confine this level of pain to a select group of hacker masochists who thrive on it.

On that day, maybe just maybe, systemd will save their ass, keep them from having to replace some treasured piece of software and still be able to run on the latest hardware with up to date kernels and so on.

I've been doing Unix (with init) for a very long time at this point. I have multiple books on the Unix architecture and how to use systems commands to write fully complex software, and have written a fair pile of software using this interface. It had the advantage of simplicity and scalability. It had the disadvantage of simplicity and scalability, as the systems it runs on grew ever more complex.

Everybody is worried about "too much complexity", but Unix in general and linux in particular long, long ago passed the threshold of "insanely complex". Linux (collectively) is arguably one of the most complex things ever built by the human species. The real question is whether the integrated intelligence of the linux community is up to the task of taming the idea of systemd to where it is a benefit, not a cost -- to where it enables (eventually) the transparent execution of any binary from any system on a systemd-based system, with fully automated provisioning of the libraries as needed in real time, as long as they are not encumbered legally and are available securely from the net.

We deal with that now, of course, and it is so bloody complex and limiting that it totally sucks. People are constantly forced to choose between upgrading the OS/release/whatever and losing a favorite app or (shudder) figuring out how to rebuild it, in place, on the new release -- if that is even possible.

I'll suffer a bit -- differently, of course -- now in the mere hope that in five years I can run "anything" on whatever system I happen to be using and have it -- just work.

rgb

Comment Why bother... (Score 1) 272

a) It's already done, and is called "wikipedia". The problem of accessing wikipedia after the solar flare in a few days wipes out human technological civilization is left as an exercise for the reader.

b) OK, so it's not really done, and is going to be even less done as paper books more or less disappear from the world and people stop learning how to read because their personal digital implant delivers content directly into their cortex in full sensory mode, all of which goes away when a nuclear war followed by a space alien invasion reduces humans to a marginal species living in abandoned mines and sewage tunnels and surviving on rats. Brevity is then the soul of wit. We need three things:

1) How to make and blow glass.
2) How to turn glass into lenses and lenses into microscopes and telescopes.

These two things are already sufficient. They extend human senses into the microscopic and macroscopic, otherwise hidden, Universe, and nothing but common sense and observation is required from that point on. However,

3) How to build a printing press.

is also good, provided that people can still read.

Oh, you want to rebuild civilization QUICKLY? Either we're restarting from a partial, not full, reboot (that is, we still have easy access to things like unburned oil and coal, iron, maybe a few undamaged nuclear power plants with the engineers to run them) or it's just not happening!

The problem, you see, is easy access to those resources. The more we deplete the Earth's crust of readily minable resources, the harder it is to reboot civilization after a collapse. We just don't have a lot of places where oil still comes oozing up to the surface of the Earth, for example, so why and how exactly are people going to go looking for it a kilometer or two down? How easy is it going to be to find any? Steel requires iron (still fairly plentiful, granted) and coal. Hmmm, easy coal isn't so easy any more. Easy copper, not so much. Easy aluminum? No such thing; it needs massive amounts of electricity (although ore is still plentiful enough). Even making chemical reagents like sulphuric or nitric or hydrochloric acid (key to building nearly anything interesting) requires sulphur, salt, and electricity.

This is what is going to be tough. Bootstrapping directly from type 0 pre-civilization to type 2 civilization is going to be very difficult if we've depleted all of the easy pathways to 2 while we are type 1, even if we preserve usable copies of wikipedia, the CRC handbook, the library of congress science section, the entire proceedings of the IEEE, and a complete copy of all patents ever filed in the US patent office (and have people who can read them, and who have managed to learn calculus and build stuff). Hydroelectric power, maybe. Alcohol can drive simple motors. But going straight to nuclear or photovoltaics is going to be pretty much impossible, and going the coal/oil route we've followed the first time is going to be much, much harder.

The best thing, therefore, is to take care of the civilization we've got...

rgb
