Comment Astronomy@Home (Score 1) 398

OK, here is a suggestion in the astronomy vein.

Standard astronomy/astrophysics is not going to be looking for signs of an "engineered" universe, because astronomers/physicists really want the Universe to be "dead" (otherwise things get extremely complicated). At the same time, classical SETI research largely wants "them" to be talking to us. Pick the middle ground -- the universe may hold many advanced technological civilizations (ATCs) that have no interest in talking to us. One would presume that such ATCs take their stars "dark" (this is the Kardashev transition from a KT-I to a KT-II civilization). This has been expressed in theories involving Dyson shells and subsequently Matrioshka Brains.

Now the point to understand is that the rate of conversion of a solar system from a KT-I to a KT-II level depends a lot on the nature of the solar system and the technology the ATC has at its disposal. Within our solar system, if we had full nanotechnology capabilities, it would probably take place in months. So the key point is that a civilization transitioning from KT-I to KT-II generally makes its star disappear. Astronomers don't like to think about things like this; presumably they view them as anomalies -- stars don't "go dark", they turn into supernovae or white dwarfs, because that is stellar theory unencumbered by the details of intelligence, technology, etc.

There is not currently, to the best of my knowledge, a survey of the entire sky looking for the rate at which "stars go dark". But it is the kind of exercise one could start at home with simple 35mm cameras and then expand to larger cameras and telescopes, recruiting people from around the world, etc.

One would simply take pictures every night, download the data, do the image analysis (roughly the inverse of looking for supernovae), plot trends, etc. This work cannot produce a negative result: even a lack of stars going dark begins to constrain the f_i and f_c parameters of the Drake Equation, which provides very useful information for SETI in particular and for exobiologists more generally.
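The core of the image analysis described above reduces to a night-to-night catalog comparison. Here is a minimal sketch of that step; the coordinates, match radius, and catalog format are invented for illustration, and a real pipeline would have to handle clouds, sensor noise, variable stars, and proper motion before calling anything a candidate.

```python
# Hedged sketch: flag stars that "go dark" between two nights' observations.
# Catalogs are lists of (ra, dec) positions in degrees; values are illustrative.
import math

def match(star, catalog, radius_deg=0.01):
    """Return True if `star` has a counterpart in `catalog` within radius_deg."""
    ra, dec = star
    for cra, cdec in catalog:
        # small-angle approximation is adequate at these separations
        d_ra = (ra - cra) * math.cos(math.radians(dec))
        d_dec = dec - cdec
        if math.hypot(d_ra, d_dec) < radius_deg:
            return True
    return False

def gone_dark(night1, night2):
    """Stars present on night 1 with no counterpart on night 2."""
    return [s for s in night1 if not match(s, night2)]

night1 = [(10.684, 41.269), (83.822, -5.391), (201.298, -11.161)]
night2 = [(10.684, 41.269), (83.822, -5.391)]  # third star missing

print(gone_dark(night1, night2))  # candidates for follow-up, not confirmed events
```

A real survey would replace the linear scan with a spatial index (e.g. a k-d tree) once catalogs reach millions of stars, but the logic is the same.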

It is also a project which scales quite readily as one recruits people looking at different parts of the sky, employs better cameras, telescopes, etc. And it is a bit different from "classical" astronomy in that it is more about how the universe "is" rather than how the universe "was". Presumably, if there is a "rate at which stars go dark", it should diminish with the age of the stars/galaxies studied. That in turn tends to bound the rate at which civilizations can evolve to an intelligent technological state -- another useful piece of information.

If you would like to go further in this direction feel free to contact me.

Comment Javascript is evil (Score 5, Funny) 286

And I want to run a game programmed in Javascript on my computer WHY? I helped write a simulator for a PDP-10 that ran on a PDP-11 (a 36-bit machine on a 16-bit machine) 30+ years ago, and there was a concrete corporate need for it (we were modifying the Bliss-11 compiler, which was written in Bliss-10). And even though I like PacMan (lord knows how many quarters I plugged into it at the local video game parlors in the '80s), I would still pause before opening my machine(s) up to running Javascript games.

If only from the simple perspective that an interpreted, garbage-collected language (such as Javascript) is inherently less efficient than a compiled language (C, Pascal, whatever) -- it is therefore going to burn more CPU cycles than are required to deliver the functionality the game provides. AND IT IS THEREFORE NOT GREEN!

The goal of programmers (world-wide) should not be "how do I implement something clever and cool?" It should instead be "how do I reduce the CO2 footprint of my program?" It is a sad state when one is promoting programs which may increase the wasteful expenditure of energy (via Javascript). If /. is a "good" forum, should it not be promoting good directions?

Comment A flawed perspective... (Score 2, Insightful) 100

So Microsoft has the flaws, the governments have the flaws, but we, the purchasers of Windows software, do not have the flaws. What is wrong with this model? Could it (cough) perhaps be that the software isn't open source (an environment in which flaws tend to be published openly on an extremely short time scale)?

IMO the last bastions of the purveyors of a flawed model will tend to recruit those in power to perpetuate said model. (Oh, it's OK that there is a flaw, because the powers that be know about it and we are going to fix it... eventually...)

Please, please, somebody study the serious-flaw correction rate in closed source vs. open source software (i.e. the time from flaw discovery until a correction is available). I would hope that if this has not already been done, someone is attempting to do it.
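The study proposed above is, computationally, a simple exercise once someone collects the (disclosure date, fix date) pairs. A sketch of the comparison, with entirely made-up sample dates just to show the arithmetic:

```python
# Hedged sketch of the proposed study: compare median time-to-correction for
# flaws in closed vs. open source projects.  The dates below are invented
# purely to illustrate the computation; they are not real vulnerability data.
from datetime import date
from statistics import median

def days_to_fix(flaws):
    """Median days from flaw disclosure to fix availability."""
    return median((fixed - disclosed).days for disclosed, fixed in flaws)

closed_source = [(date(2010, 1, 4), date(2010, 3, 9)),
                 (date(2010, 2, 1), date(2010, 5, 20))]
open_source = [(date(2010, 1, 4), date(2010, 1, 11)),
               (date(2010, 2, 1), date(2010, 2, 15))]

print(days_to_fix(closed_source), days_to_fix(open_source))
```

The hard part of the real study is not the statistics but agreeing on what counts as "discovery" (private report? public disclosure?) and "correction" (patch committed? update shipped?).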

And shame on a majority of city, state and U.S. governments for operating on closed source software without concrete data with respect to flaws and vulnerabilities. If you worked for a corporation (at least one which knew the value of open source perspectives), your head would be on a "silver platter" for leaving the corporation open to the vulnerabilities of closed source software.

Simple. Ask Microsoft to warranty its products to be free of defects. If it will not do so, you are most probably utilizing products which contain defects. And that is a sad situation -- we are running reality with no more knowledge of it than we have of a "can-o-worms" [1].

1. To the best of my knowledge, the genome sequence of the common garden worm is not known, and even if it were, there are probably few if any systems biologists who could explain in detail how it really works. Programs that have worked for hundreds of millions of years (e.g. worms) are probably fairly safe (even if we cannot explain how they work). Programs which have operated for less than 30 years and are driven by monetary criteria (profit margins, ROI, etc.) are probably an open source for concern.

Comment It is as could be expected... (Score 1, Interesting) 318

The Gestapo was never eliminated; it just reinvented itself as the German government. One can argue that Google was fairly stupid to collect the data in the first place -- why create a potential liability, since we know that the "powers that be", not just in Germany, always want more information? What was Google hoping to do with the data -- market locations of open WiFi spots? First of all, the *information* is "public" -- any citizen is free to collect it (if they want to drive around a fleet of vans with the appropriate receivers). Second, I believe several countries have criminalized open WiFi hubs, so their days are probably numbered -- most probably because the governments want to climb into bed with the providers to know who is using "anonymous" internet access (look for them to attempt to compromise any "anonymous" software next).

The only way Google enters the equation is that they happen to have collected (concentrated) public data. So they represent an easy target for governments to go after to gain an information source which it might be illegal for the governments to obtain themselves (this may depend on jurisdiction; in the U.S., spying on one's own citizens is extremely problematic, one hopes). It is far easier to issue a subpoena for the data from a foreign company than to actually collect the data oneself. I would have no objections if any such data releases were subjected to a joint EU/U.S. oversight commission to ensure they were not being misused. (The recent ACTA exposures, and agreements such as the EU-India Free Trade Agreement, which is in part trying to protect the EU from Indian generic drugs, suggest that the EU is as "in bed" with corporations as people in the U.S. know is the default reality.) One has to ask: why would governments seek information from private organizations which they could collect themselves? I at least would ask serious questions regarding why they need or want such information. Perhaps they are seeking an end-run around issuing subpoenas to all ISPs for citizen browsing habits? Which suggests, even more importantly, a criterion for selecting an ISP -- those who DO NOT KEEP RECORDS.

Comment Stupid sprint.com (Score 1) 182

I like the idea of 4G phones and banishing Comcast & Verizon to the netherworld... (actually I'd probably be happy with just the 4G service to a USB/Ethernet port -- forget the pricey "phone"). One point to Sprint for promoting competition and offering alternatives.

However, viewing the sprint.com EVO announcement requires that one's browser have Javascript enabled and Flash 9 installed. And yet the web page is mostly "text" which requires only simple HTML to display [1]. A lot of people don't want to browse the web that way because it isn't Green (it requires more user CPU power and more router overhead betwixt server and user, generates more CO2 emissions, is more likely to crash one's browser [like Javascript | Flash would *never* do that], etc.). So the marketing people at Sprint get -5 for not selling their product on its merits rather than on how it looks.

1. The Javascript/Flash requirement is there so they can display a fancy Sprint logo and allow one to rotate the phone on the screen (both things for people who love to be marketed to and are incapable of judging technology on the merits of its pricing or actual capabilities).

Comment Re:Adobe -- you are wearing no clothes! (Score 1) 731

While I appreciate the humor of the comment, the problem is that I was running the straces using Chrome (quite non-Apple) and Linux (also non-Apple; I believe MacOS, and presumably the iPhone/iPad OS, are evolutions of Mach, which served as the base for NeXT systems to the best of my recollection) against a libflashplayer.so plugin (directly from Adobe). So the only excuse for Adobe (Flash) is that they generally have no clue (or interest) across a variety of operating systems/hardware -- another nail in the coffin of non-open-source monopolistic software solutions.

Just because you somehow managed to force (or by market trends got) people to adopt it doesn't mean it's a good, much less the best, solution (witness Windows) [1].

1. Though in fairness to Microsoft, one could attribute the current problems with Windows (host of viruses to the world) to the fact that Windows grew up on the non-memory-managed 8086 lineage (and related hardware). So in theory one could blame much of the current "state-of-the-world" as we know it on choices made by Intel.

Comment Adobe -- you are wearing no clothes! (Score 4, Interesting) 731

If Adobe Flash (which Adobe did not even develop, BTW) were a really usable product -- e.g. open source, able to be enhanced by the end-user, GREEN(!) and secure -- they would have a case to stand on (in critiquing Apple).

But Apple has a very good point with respect to their two main products, the iPhone and the iPad. These are *battery*-based devices, and power consumption is a major concern. Right now I've got a "single process" [1] Chrome session with the libflashplayer.so sub-process running, and while playing *NOTHING* the Flash process is sucking down 25+% of my CPU (Pentium IV Prescott) [2]. This isn't just Chrome; one sees the same behavior in Firefox, it's just more difficult to see because Firefox runs as a single process.

GREEN programs take steps to minimize their CPU consumption, recognize when they are doing nothing and adapt, and allow the O.S. to go into various power saving modes (ACPI, P4-clockmod adjustments, suspend to RAM, etc.) -- and as far as I can tell, Flash is designed so as to prevent that. If one straces the Chrome Flash plugin process, one discovers that in 10 seconds it issues 56,000 system calls -- 53,000 (95%) of them useless gettimeofday() calls. Maybe Flash is hoping that someone has requested that it play something... It seems Adobe doesn't know what a poll() call is useful for.
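The busy-wait vs. poll() contrast is easy to demonstrate. A minimal sketch (not Adobe's actual code, obviously): the first loop spins re-reading the clock the way the strace output suggests Flash does, while the second makes a single poll() call that sleeps in the kernel until input arrives or a timeout expires.

```python
# Hedged illustration of busy-waiting vs. poll(): the spin loop burns CPU
# checking the clock; poll() blocks quietly until data or timeout.
import os
import select
import time

r, w = os.pipe()  # a file descriptor to wait on (nothing will be written)

# Busy-wait style: thousands of clock checks in 50 ms, all wasted work.
deadline = time.time() + 0.05
calls = 0
while time.time() < deadline:  # each iteration re-reads the clock
    calls += 1

# poll() style: one blocking call, no CPU consumed while waiting.
p = select.poll()
p.register(r, select.POLLIN)
events = p.poll(50)  # sleep up to 50 ms for input; returns [] on timeout

print(calls, events)  # many clock checks vs. a single quiet timeout
```

The same fix applies regardless of language: register the descriptors you care about and let the kernel wake you, rather than asking "is it time yet?" tens of thousands of times per second.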

So I'll do my best to avoid Flash entirely on the basis of its CPU use and CO2 emissions footprint, and not even bother to open the can-o-worms of its potential security problems.

1. A "single process" Chrome session is more often a 4-5 process session (given extensions, plugins, etc.), but it is far better (from a memory use standpoint) than the typical 35-process sessions one gets under Linux once one has exceeded the Google/Chrome "imposed" process limit.
2. Fortunately one can either "kill -s STOP" or entirely kill the libflashplayer.so plugin, and Chrome will keep right on functioning (with possible informational messages in certain tabs/windows that there was a problem with Flash). Often it isn't even clear that those tabs/windows were using Flash.

Comment Single element? (Score 1) 326

This isn't accurate, nor is it new. "Suddenly, I wondered, what if we could assemble materials like the abalone does -- but not be limited to one element?" The problem with this is that the abalone isn't limited to a single element. All organisms which produce common shells are dealing with molecules of calcium carbonate (CaCO3). Many plankton produce silica shells (SiO2). Some magnetotactic bacteria produce magnetite crystals (Fe3O4). There are ~20 proteins in the human genome involved in manipulating or using selenium (Se), not to mention many more involved in dealing with iron (Fe), copper (Cu) and sulfur (S).

Life has actively used available resources (in terms of ions or molecules) for several billion years. Nor is it new that one could use biological systems to assemble nanoscale parts. That was anticipated in a paper I wrote in 2001 [1], and if one goes back in Drexler's writings, the concepts were clear in papers he wrote as early as 1981 (bacteria and eukaryotic cells are nanoscale manufacturing plants -- though not general-purpose nanoassemblers). Further, the applications of synthetic genomes to nanoscale assembly were seen and incorporated into a business plan as early as 2002 (Robiobotics, LLC). Unfortunately, in terms of fund raising, that was about the same time as the dot-com crash, and all the VCs were trying to seek out a rock to hide under.

1. Bradbury, R.J. "Protein Based Assembly of Nanoscale Parts" (2001).
http://www.aeiveos.com:8080/~bradbury/Papers/PBAoNP.html

Comment Re:Planetary visits are an obsolete idea (Score 1) 262

I didn't say we should literally dematerialize it; I said we should disassemble it. There are multiple paths for doing this [1,2]. All of the planets are at the bottom of various sized gravity wells -- if you have sufficient energy to move the matter out of the well, you can "quickly" disassemble the planet. If one has a significant fraction of the Sun's power available (~10^26 W), then the disassembly of Mars takes ~176 days [3]. In actual practice it is likely to take longer, since one would have to divert power from Matrioshka Brain "thought" into planetary disassembly, so there is a fair amount of politics involved ("whether to think or disassemble, that is the question..."). The likely path in our solar system disassembles the asteroids first and then uses the resulting swarm to bootstrap the disassembly of Mars and/or Mercury. I discuss this further in the chapter "Under Construction" in the essay collection "Year Million". In solar systems lacking an asteroid belt, one would probably start with the planet closest to the star (as with Mercury here), since it receives the largest solar insolation as a starting energy base.
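For a sense of scale, the energy-limited lower bound is easy to work out: the gravitational binding energy of a uniform-density sphere is E = 3GM^2/5R. This is only a lower bound -- the ~176-day figure cited above is much larger, presumably because it accounts for power diversion, conversion losses, and moving material into useful orbits rather than just unbinding it. A quick hedged calculation:

```python
# Energy-limited lower bound on disassembling Mars, assuming the dominant
# cost is lifting material out of the planet's own gravity well.
# Uniform-density binding energy: E = 3*G*M^2 / (5*R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_mars = 6.417e23  # mass of Mars, kg
R_mars = 3.390e6   # radius of Mars, m
P = 1e26           # W, a "significant fraction" of the Sun's ~3.8e26 W output

E_bind = 3 * G * M_mars**2 / (5 * R_mars)  # joules
t_days = E_bind / P / 86400                # seconds of power, in days

print(f"binding energy ~{E_bind:.2e} J, energy-limited bound ~{t_days:.2f} days")
```

The gap between this bound (well under a day) and 176 days illustrates how much of the budget in such scenarios goes to engineering overhead rather than raw gravitational unbinding.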

Yes, I agree that humans have pursued a lot of things just for the fun of it. On the other hand, I take a fairly "moral" approach here. Humanity loses ~40+ million lives a year to aging. If the long-term choices include saving that number of lives (each year) and providing them with either multi-thousand-year lifespans (in biological form) or multi-billion-year lifespans (as minds uploaded into a Matrioshka Brain), vs. sending a few dozen people to drive around or build "Quonset huts" on Mars, then I choose the first as the more noble goal. One can easily incorporate the "colonization" adventure into a Matrioshka Brain vision -- just survey Mars completely before you disassemble it (or as you disassemble it) and construct a simulation of it to play on/in once one's mind is uploaded (you have seen the Matrix series, I presume). Or, if you are addicted to playing in a "real" world, reconstruct a Mars-like mini-planetoid from the leftovers of the Mars disassembly process (there is likely to be a lot of iron and oxygen left over from inner-planet disassembly which isn't particularly useful from a nanotechnology standpoint). People who cherish romantic colonization notions need to decide whether to dedicate intellectual and financial resources to those notions or to use them to solve real problems (people lacking choices with respect to how, when and if they die). My personal preference is solving real problems.

1. Incineration, highly parallelized rail gun launches, extreme mountain building, spinning up the planet, etc. Kaku's approach to building a "Death Star" (really "Stars", if you want to disassemble the planet quickly), as seen on the Science Channel, is close -- he just doesn't realize that you can have the entire solar power output at your disposal if you have nanotechnology-enabled solar power satellite construction methods.
2. http://www.stardestroyer.net/Empire/Tech/Beam/DeathStar.html
3. http://www.aeiveos.com:8080/~bradbury/MatrioshkaBrains/OSETI3/4273-32.html

Biotech

FDA Approves Vaccine For Prostate Cancer 194

reverseengineer writes "The US Food and Drug Administration has given its first approval for a therapeutic cancer vaccine. In a clinical trial 'involving 512 men, those who got Provenge (sipuleucel-T) had a median survival of 25.8 months after treatment, while those who got a placebo lived a median of 21.7 months. After three years, 32 percent of those who got Provenge were alive, compared with 23 percent of those who got the placebo. ... "The big story here is that this is the first proof of principle and proof that immunotherapy works in general in cancer, which I think is a huge observation," said Dr. Philip Kantoff, chief of solid tumor oncology at the Dana-Farber Cancer Institute in Boston and the lead investigator in Dendreon's largest clinical trial for the drug. "I think this is a very big thing and will lead to a lot more enthusiasm for the approach."'"

Comment Planetary visits are an obsolete idea (Score 4, Interesting) 262

The entire concept of planetary visits, colonies, etc. is one of the most out-of-date (read: waste-of-time) ideas currently circulating. The only people who promote it are those with misguided romantic ideas about humans exploring Mars as they did the Earth in the 16th-18th centuries. These ideas should be discarded as out of date, given that (a) humans are not designed (due to the insufficient and error-prone DNA repair systems in their genome) to endure long-term space voyages or planetary habitation outside the Earth's magnetosphere (where high radiation doses are a constant threat); (b) progress in robotics and AI is likely to make sending robotic explorers much more productive and less hazardous than sending humans by 2030; and (c) if we pushed on molecular nanotechnology just a little harder, by 2030 we would be disassembling Mars for material to build the Matrioshka Brain rather than thinking about growing food on it for colonists (no point building a farm if you are only going to disassemble it).

I like the romantic exploration ideas just as much as the next person -- but they just aren't justified given current rates of technological progress. It is also worth pointing out that if we ever get to the point where we modify our genomes (or those of astronaut explorers) to be radiation tolerant, we can also engineer them to be lack-of-gravity tolerant [1]. In that case, living at the bottom of a gravity well makes no sense -- instead we should be migrating to O'Neill-style colonies or long-term interstellar "arks" (presumably to remove the "single point of failure" problem humanity faces by living on a single planet and around a single star).

1. Modifying large numbers of cells in adults to be radiation- and lack-of-gravity-tolerant will be very hard (read: nearly impossible) without molecular nanotechnology (e.g. chromallocytes). The only way to do this correctly is to breed a new species of human designed for space environments. Unless you can engineer them to mature much faster (doubtful), that implies you need transgenic-human birth dates + ~25-30 years before one can seriously consider long-term exploration/colonization efforts.

Comment Think 3D not 2D (Score 1) 372

If you've ever visited LLNL, you would know that it's over the hills from Oakland/Hayward, effectively in the Livermore Valley -- where there is lots of land to build things on (which is why it takes up football fields). On the other hand, if one were building it in Manhattan, one would go down rather than out (think of the foundations for the World Trade Center buildings). The arrangement of the lasers is fairly arbitrary -- one can go down or up nearly as easily as spreading out.

In reality it comes down to a cost trade-off between normal-conducting transmission lines, superconducting transmission lines (which have been and are being built today), and land costs at a distance, vs. the construction costs of digging a large hole or building a moderately sized skyscraper. We could significantly decrease our long-term energy costs by feeding wind/solar into a superconducting grid augmented by pumped water storage (or batteries/capacitors if those end up being cheaper). There is no reason that electricity should not be relatively "free" if we accept the early-lifetime investment costs and build the required infrastructure.
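The transmission side of that trade-off is a back-of-envelope calculation: a conventional line loses I^2 R to resistance, while a superconducting line has essentially zero conduction loss but pays a constant cryogenic overhead instead. A hedged sketch with purely illustrative figures (not engineering data):

```python
# Illustrative resistive-loss estimate for a conventional long-distance link.
# All numbers are assumptions for the sake of the arithmetic.
P = 1e9          # W delivered (1 GW)
V = 8e5          # V, an 800 kV HVDC-class link
R_per_km = 0.01  # ohm/km, assumed conductor resistance
length_km = 2000 # assumed line length

I = P / V                          # line current, amperes
loss = I**2 * (R_per_km * length_km)  # I^2 R dissipation, watts

print(f"resistive loss ~{loss / P:.1%} of delivered power")
```

Whether a few percent of resistive loss outweighs the capital and refrigeration costs of a superconducting line is exactly the kind of site-specific trade-off the paragraph above describes.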

Comment Re:Terrible idea, of course, which is why we don't (Score 1) 351

One doesn't need to illuminate every single coastline. One would get most of the benefit by illuminating high-population-density regions, regions likely to generate larger waves due to offshore slopes, etc. The calculations could be done in advance based on historical records, known likely origination sources for the quakes, etc. The laser beam steering technology exists (thanks to research on targeted laser weapons).

One also doesn't need a nuclear reactor. The Japanese are planning to launch a solar power satellite in the next decade or two (the general technology for SPS has been around for decades). Alternatively, one could simply use a moderately large array of solar cells and dump the energy into a capacitor bank (or a high-temperature or gas-pressure vessel that could dump the energy into a turbine on demand). It is worth noting that highly redundant solid-state components could be launched using the recently proposed undersea "rail-gun" (est. cost $1-2B?) on an ongoing basis and assembled in orbit (presumably using robots). No need for human involvement in space, human-rated space vehicles, etc.

However, for this to be a "serious" proposal, one would want to cost out a comparison with a semi-permanent, high-altitude, solar-powered UAV (50-100,000 ft), since such UAVs could remain above critical areas, would involve extremely intermittent signals (perhaps using both radio and "light" [1]), and could receive signals faster than satellites at higher altitudes. There has to be a tradeoff between satellites, balloons and UAVs, but I've never seen an analysis.

1. Note that these would also be a good alternative solution for Telco/3G/4G cell/Cable/Fiber/WiMax Internet access (the more competition, the lower the prices are likely to be), but the power demands are greater due to the "always on" requirements.

Comment Dreaming does not make it so (Score 1) 692

Well, a few days ago I had a dream in which my father and I had a conversation with Bill Gates (*really*) [1]. When I woke up I thought, "What a cool dream; what brought that about?" I put it down to having watched Bill's TED conference presentation a few months ago, articles I read about his house (where the conversation took place) a few years ago, and the general separation (2-3 degrees) that I have from him [2]. I can contrast this "memory" (which might better be called a synthetic pseudo-experience) with a ~17-year-old memory of a dinner at Larry Ellison's house that included Steve Jobs and others. The advantage of the second "memory" is that there were sufficient additional people present, and enough external evidence (I probably have the charge records of flying from Seattle to S.F. and the email exchanges setting up the dinner), that I have relatively high confidence the dinner really took place in physical reality rather than just in the reality of thoughts bouncing around in my head [3].

So, are NDEs "real"? Quite probably, for some people -- particularly those who may have spent a significant fraction of their lives participating in or holding on to a particular perception of "reality". Are they "significant"? Perhaps only if you use them constructively in the remainder of your life. Otherwise I'd tend to place them in the same category as my conversation with Bill.

1. Over the last few years my dreams have become more complex and my ability to recall them seems to be increasing. I tend to put this down to some combination of possible mental changes from a variety of drugs, natural aging, and simply having the time to sleep more than during other periods of my life.
2. My ability to "make stuff up" to fill in the blanks in dreams amazes me. I put it down to the fact that I've lived 50+ years in a relatively fixed framework (non-changing laws of physics, relatively "normal" people around me, etc.), so the reality my brain "expects" is pretty fixed. If I could harness my brain's ability to "make up good (borderline plausible) stories" in dreams and apply it in my waking reality, I'd be a famous fiction author.
3. Generally speaking, acting on the basis of the validity of external reality rather than internal reality saves me the trouble of having someone bail me out of jail the next time I encounter Bill and nonchalantly walk up to him saying, "Yo Bill, what's happenin'?"

Comment Re:After death studies on live people? (Score 4, Interesting) 692

Simple: the "electrical waves" to which you refer are the propagation of ion current flows, esp. Na+ and K+, along the neurons in the brain. Just because one cannot detect such propagation of local charge differentials does *not* mean that all chemical activity, esp. the pumping of Na+/K+ driven by local ATP pools, has ceased. Indeed, if one's brain is not *FROZEN* there is going to be chemical activity (there is probably even some chemical activity above liquid nitrogen temperatures) -- which may be a reason one can get better brain recovery, even with no heartbeat and no electrical activity, if one cools the brain down before attempting a reboot. (Brain rebooting is a complex interaction of proper chemical reactions and improper (harmful) chemical reactions.)

The problem is with the current definition of "DEAD" [1]. You are not DEAD until the information content (organization) of your brain has been damaged beyond the capability of any technology to recover. Currently the two most probable (frequent) methods for making one really dead are disassembly by incineration (cremation) and disassembly by consumption (allowing fungi/bacteria to consume the body). The next most common methods probably involve brain-crushing injuries such as those in earthquakes, industrial accidents, etc.

So long as proper brain (neuronal) organization exists and most of the proper cellular structure is in place, YOU ARE NOT DEAD -- you are simply "shut down". I've got a 10+ year old x86-based computer sitting downstairs. It runs either Windows 98 or Linux depending on how I boot it. It isn't normally "dead"; it's simply "off". You should read a bit more about brain/neuron physiology and cell biology to understand this. Education regarding cryonic preservation and the future capabilities offered by robust molecular nanotechnology would also be useful.

1. The current definition of "dead", and therefore of "NDE", is based on a very limited definition, roughly "beyond the probable restoration of significant levels of functioning using *currently* known medical technologies" [2].
2. If one is cynical about it, one might consider how prevalent the trend is to declare people with fully organized brains "dead" so as to enable the harvesting of organs for transplants (which surgeons and hospitals do make money from). In contrast, an alternative would be to have both the supposedly "dead" individual and the individual(s) likely to die should they not receive an organ transplant undergo cryonic suspension [3].
3. A third, nearer-term alternative, currently unapproved, would be hydrogen sulfide "anesthetic" preservation, which appears to have certain "suspended animation" properties (it may retard overall metabolic rate) and thus give people an increased opportunity for technology to "catch up" with their condition(s).
