Comment Re:Any objections? (Score 1) 571

At this point, I am not worried about incumbents as much as I am about the lack of constitutionally empowered oversight of the legislative branch by the people.

There is nothing in the Constitution about us (the people) directly changing the way senators are selected, or changing the rules by which the Senate operates, which means that to do so would require a constitutional amendment. That, of course, requires a two-thirds vote in both houses of Congress (plus ratification by three-fourths of the states), and to change that requirement, of course, requires another constitutional amendment (ad infinitum).

You might reply that our power is to elect new senators who will act and vote the way we would like. Unfortunately, it seems that senators quickly lose whatever gumption they had to change the system and become cogs in it. Additionally, it would take a full six years to replace the entire incumbent bloc. It'd be better if we (the people) at least had the threat of direct action, even if the only power we were granted were some sort of last-resort nuclear option.

While California has the opposite problem and is currently overwhelmed and hogtied by too much direct constituent participation in legislation via the ballot initiative process, I still find the lack of any way for the people to override the legislative branch disturbing.

Comment are they encoded signals? (Score 2, Interesting) 483

It occurred to me when looking at the charts that the stock market quote system is a perfect way to send encoded transmissions: the sender/offering entity is almost impossible to trace back, and the receiver can remain entirely anonymous, since almost anyone can look at stock pricing charts. Moreover, the patterns can be nearly impossible to detect, especially if several sources are linked together to form one transmission system, since the system is filled with what amounts to 'random noise' from the millions of non-encoding quotes/trades out there.

A sender would also have a significant amount of bandwidth given the number of different ticker symbols, the frequency of quotes, the rate of change between quotes, the direction of quotes, etc.

Normally, a casual observer wouldn't notice the signals at all. In this case, a potentially unrelated event (the flash crash) caused more scrutiny, but, supposing these are encoded signals we're witnessing, we still don't know what they mean or to whom they were sent.
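
To make the idea concrete, here is a toy sketch of such a channel. Everything in it is hypothetical (the prices, tick size, and scheme are invented for illustration): bits are carried by the direction of successive quote changes on one pre-agreed symbol, and every other symbol is just cover noise.

```python
def encode(bits: str, start: float = 100.0, tick: float = 0.01) -> list[float]:
    """Emit a quote series whose up/down moves spell out the bit string."""
    quotes, price = [start], start
    for b in bits:
        price += tick if b == "1" else -tick  # up-tick = 1, down-tick = 0
        quotes.append(round(price, 2))
    return quotes

def decode(quotes: list[float]) -> str:
    """Recover the bits from the direction of each successive quote change."""
    return "".join("1" if b > a else "0" for a, b in zip(quotes, quotes[1:]))

series = encode("101101")
print(series)          # [100.0, 100.01, 100.0, 100.01, 100.02, 100.01, 100.02]
print(decode(series))  # '101101'
```

A real implementation would presumably spread the signal across many symbols and bury it in legitimate-looking order flow, which is exactly what would make it so hard to spot.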

Comment Re:Amylopectin (Score 5, Informative) 194

From reading the PhysOrg summary linked in the article linked in the summary on Slashdot (why we have to link to tertiary sources, I don't know), it seems that it isn't the polymer branching of the molecule that lends the mortar strength; as far as I can tell, the amylopectin doesn't even directly add strength. Instead, the amylopectin breaks up the crystallization of the lime in the mortar, creating microcrystals instead. I can imagine a big crystal being quite brittle, with all of its possible shear planes.

So, why the amylopectin makes the mortar stronger wasn't as obvious to me as it might first seem.

News

Submission + - Mayan Plumbing Found in Ancient City (physorg.com)

DarkKnightRadick writes: "According to this article on PhysOrg.com, the ancient Mayans had pressurized plumbing sometime between the year 100 (when the city in question was founded) and 800 (when it was abandoned). While the Egyptians had plumbing far earlier (around 2500 BC, according to this site), this is the first known instance of pressurized plumbing in the New World prior to European exploration and conquest."

Comment backup failure doesn't mean a failure to test (Score 4, Insightful) 153

I see lots of comments stating that this would not have happened had admins run regular tests on the failover mechanisms. That seems a poor assumption: if the failover happens to break and an outage occurs before the next scheduled test, no one will be aware of it.

We had this problem recently while testing our backup generator. Normally, we cut power to the local on-campus substation, which kicks in the generator and activates a failover mechanism that reroutes power. Well, the generator came on without a problem, but the failover mechanism was broken, so every server in the datacenter spontaneously lost power. Had we known the failover was broken, we would not have done the regular test. However, the last test of the failover (done directly, without cutting power), a mere month prior, had shown the mechanism was fine.

Point being, unless you are going to test everything literally continuously, there is always some probability of an unexpected double failure.
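
As a back-of-the-envelope illustration, here is a tiny Monte Carlo sketch of that window of exposure. All the rates are made-up assumptions, just to show that even monthly testing leaves a gap between the failover silently breaking and the next test:

```python
import random

DAYS = 365 * 10            # simulate ten years
P_FAILOVER_BREAK = 1 / 365 # failover silently breaks ~once a year
P_OUTAGE = 1 / 180         # a real outage ~twice a year
TEST_INTERVAL = 30         # failover tested (and repaired) every 30 days

double_failures = 0
broken = False
for day in range(DAYS):
    if day % TEST_INTERVAL == 0:
        broken = False                       # the test finds and fixes it
    if random.random() < P_FAILOVER_BREAK:
        broken = True                        # silent failure, undetected
    if broken and random.random() < P_OUTAGE:
        double_failures += 1                 # outage hits while failover is down

print(f"double failures in 10 years: {double_failures}")
```

With these made-up rates, the simulation usually reports a double failure somewhere in the decade; the only way to drive it to zero is continuous monitoring, which is its own engineering project.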

Comment to remove some confusion: (Score 5, Informative) 263

Let me explain why this is not as scary and outrageous as it might first seem. The summary and article are good ones, but they don't provide enough context for a non-expert to judge how serious it is:

As the summary indicates and TFA seems to confirm, DSHS collected the samples for use in anonymous human medical research. This is done all the time, as another poster commented (giving the great example of HeLa cells). Typically, an oversight committee reviews a great many details of your research plan, ensures your collection methods are sufficiently anonymous, and checks that your research is done in a way that avoids revealing the identity of the samples if at all possible. (Usually, the users are separated from the database maintainers, and the users never even know the identities of the samples.)

As one example, a co-worker of mine receives nasal swabs from infected children in Nicaragua, under the auspices of the WHO and CDC. He screens them using very expensive diagnostic assays that aren't viable in the clinic but are useful for basic research. His lab has discovered several new viruses in these samples that hadn't previously been found, due to geographic bias in clinical cohorts (you sample the people most likely to be able to pay for the cure). He never knows the names of the children, just their ages, symptoms, and previous infections. He has to renew his certification to work with human samples once a year to ensure he knows all the relevant legal and ethical regulations, must update his research plan regularly, and must receive annual approval from the oversight committee, even if he doesn't change anything. (And he must stop all research if he procrastinates and his certification lapses.) Without being able to use these samples, though, both basic research and clinically relevant research would be hampered. DSHS probably operates the same way.

The issue here is that these samples were passed to the federal government, which used them to build a DNA database. People sued primarily because DNA is considered very personal information in this country, and having the government track you with it is a current moral panic/boogeyman (partially warranted, partially not).

In this case, however, they were using mitochondrial DNA, which is separate from your normal chromosomal DNA. Because sperm contribute no mitochondria to the embryo, all of your mitochondrial DNA is passed matrilineally (i.e., from mother to child; sons cannot pass it on at all). Because you have only one copy, it does not undergo recombination during sperm/egg generation, and thus changes very, very slowly. As a result, groups like the National Geographic Society are using mitochondrial sequence information to trace human migration patterns throughout history (google it). Because it's so similar from person to person, it is unlikely to be traceable directly back to you or to identify you the way your chromosomal DNA can; instead, it can tell where your mother's mother's mother's mother's mother's mother came from, i.e., your maternal ancestry. With enough samples it may even be able to tell whether you are a recent immigrant, a long-term American, etc. This means that, using this database as a source, police may one day collect mtDNA from a crime scene and know they are looking for, say, a first- to third-generation American of Eastern European descent. That is, it can be used to narrow suspects, but it can't be used to identify you directly.
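
To illustrate why mtDNA narrows rather than identifies: maternal lineages ("haplogroups") are defined by small sets of shared variants, so many unrelated people match the same group. Here is a minimal sketch of that kind of lookup; the positions and group labels are hypothetical placeholders, not real reference data.

```python
# Hypothetical haplogroup-defining variant positions (illustration only).
HAPLOGROUP_MARKERS = {
    "Group A (Western Europe)":  {16111, 16223},
    "Group B (Northern Europe)": {16192, 16270},
    "Group C (East Africa)":     {16223, 16311},
}

def candidate_haplogroups(sample_variants: set[int]) -> list[str]:
    """Return every haplogroup whose defining variants all appear in the sample."""
    return [name for name, markers in HAPLOGROUP_MARKERS.items()
            if markers <= sample_variants]

# A crime-scene sample matching Group B narrows the pool to one maternal
# lineage shared by millions of people; useful, but nowhere near an ID.
print(candidate_haplogroups({16192, 16270, 16519}))  # ['Group B (Northern Europe)']
```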

So, in the end, the information (at least to me, as a molecular biologist) is relatively harmless and perhaps even good, on balance. However, given the serious objections people would likely have had if they had known their information would be used this way, the oversight committee should have required additional consent for each person's sample (and ensured that the consent given was informed consent). That would have avoided the mess entirely, and been more ethical.

Comment only one step of a great many (Score 5, Informative) 270

For those who follow the field, this discovery is not surprising. Several people have already created synthetic ribozymes, most performing some trivial and superfluous task, and it was only a matter of time until someone created a self-replicating one. Still, these do serve as basic evidence that the RNA-world hypothesis may be correct.

However, a soup of replicating molecules is still a far cry from life, and, indeed, there are many more complicated features of life as we know it, even at the most basic level, for which there is no good origin hypothesis. We know that membrane molecules can self-assemble into micelles, and one key component of all life is a membrane layer separating the living environment from its surroundings. But if, by chance, a micelle happened to self-assemble around a ribozyme, how does the ribozyme continue to function now that it has no ready source of diffusing ribonucleotides (the building blocks of RNA)?

Second, how did the first micelles replicate? Did they simply continue to grow as more membrane molecules spontaneously added to them, until they broke apart into two? Perhaps life arose in some sort of thermally cycling environment, where the micelles broke apart at high temperature, releasing their contents, and then re-formed, with new randomized contents, when the temperature cooled.

Third, how did we transition from RNA contents with lipid membranes to the vastly richer information of the amino acid world? Is there a reductionist "alphabet" of amino acids that may have served as the starting point, to which the extra amino acids were slowly added? Is our alphabet 'optimal'? (Virtually all life uses the same 20-acid alphabet, with minor variations of one or two in extreme organisms.) Or perhaps the alphabet only evolved once, and thus had no competition and could be far from optimal.

As you can see, there are a number of interesting questions left to explore. We have, however, gone from not knowing how the basic components of cells (proteins, DNA, lipids) function, to knowing that DNA encodes the heritable information, to its structure, to the Miller-Urey experiment, and now to knowing immense details about the complicated protein functional networks within and between cells, as well as to creating synthetic molecules that can evolve via natural selection, all in the span of just over a century. It's going to be extremely fun to see what we know by the end of the 21st century. Right now we feel like we know all of the basics and just have to work on the hard stuff. I will bet dollars to donuts that we still have a lot to learn, and that, by 2100, several discoveries will have been made without which future people will wonder how we ever thought we knew anything.
Apple

Submission + - Apple dual band TimeCapsule 5GHz problems

An anonymous reader writes: The new dual-band Time Capsule and AirPort Extreme are reportedly suffering from dropped speeds and connections on the 5GHz network. The new product (Firmware 7.5) promises "... improvements in antenna design give you up to 50 percent better performance and up to 25 percent better range than with the previous-generation Time Capsule." http://store.apple.com/us/product/MC343/Time-Capsule-1TB
Various users have reported the problem both directly to Apple and on discussion boards: http://discussions.apple.com/thread.jspa?threadID=2241789&start=0&tstart=0
Other sources are also picking up the story (German): http://www.maccommunity.de/beitrag/airport-probleme-jetzt-ist-apple-gefragt-t180.html#
Interestingly, the new firmware is a mandatory requirement to run these units, yet Apple does not even provide it on the support download pages, possibly knowing it is a buggy version.
Apple has given no answer about the problem, and it is unclear whether it is a hardware or software issue.

Comment Autodiscovery will have to fully mature... (Score 2, Interesting) 173

For what it's worth, I signed up for the trial. Despite the level-1 tech support's crappiness and the relative overpricing of their services, Comcast's network department does a pretty good job on the backend. Our area has gone from 3 Mbps to 16 Mbps (with a 50 Mbps tier available) in 8 years, and they have already completed the analog reclamation process here. Good on them for getting a head start on IPv6.

I presume they are going to want to do end-to-end IPv6 eventually, instead of assigning a single IPv6 address to my modem, and then continuing to use IPv4 NAT behind it. However, if they are going to do that, several things are going to have to change:

1. Router default settings will have to change. Out of the box, most home routers use NAT by default, and, since most people never change the settings (judging by the number of 2WIRE### SSIDs broadcast to my house), vendors will have to ship sensible IPv6 defaults.
2. Autodiscovery services will have to get better. I can say, categorically, that OS X is better than Windows and Linux at automatically finding nearby machines and devices that don't have a static IP/DNS A record assigned to them. The other two OSes will have to catch up, because, while an IPv4 quartet of triplets is annoying but manageable to type, an IPv6 address will be a bear to copy down.
3. A debate between static and dynamic IP addresses will have to take place. Ideally, a device would get a static IPv6 address assigned to it and keep it forever, no matter where it roamed. It'd be akin to a routable MAC address (see the sketch after this list). However, if we do that, we'll run out of IPv6 addresses more quickly (though still not anytime soon), since things like phones get recycled fairly frequently. But there are several obvious downsides to continuing to use totally dynamic IPs.
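
On the "routable MAC address" idea: SLAAC already gets partway there by deriving the interface half of an IPv6 address from the hardware MAC. Here is a minimal sketch of the standard modified EUI-64 derivation (the MAC below is just an example value):

```python
def eui64_interface_id(mac: str) -> str:
    """Map a 48-bit MAC to the 64-bit IPv6 interface identifier."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                             # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:1c:42:2e:60:4a"))  # -> 21c:42ff:fe2e:604a
```

Of course, this ties the address to the interface rather than following the device across networks (the prefix half still comes from whatever network you're on), and it leaks the hardware address, which is why privacy extensions randomize it; a truly permanent per-device address would need something more.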

Finally, as an aside, it's interesting to me how Apple AirPort base stations do IPv6 routing automatically via a tunnel provider (as another commenter noted). Comcast doesn't yet support native IPv6, but when I'm connected to my router at home I get full IPv6 support transparently. Apple doesn't even mention this as a feature on the box, and it's not highly configurable either. So why did they spend the effort to make it work that way? Are they trying to stay so far ahead of the IPv6 curve that no one will ever complain they're behind?

Comment my own experience (Score 2, Interesting) 297

I use the 30-second skip button on my TiVo to flash through the commercials. This typically means that the only commercial I see is either the first one of the break or the last. If the first one catches my attention in its first 3 seconds, I end up watching it, and if the last 5 seconds of the final one are intriguing (say, it has a punch line but not the setup), I will rewind to watch it. Occasionally, I will end up watching a commercial in the middle if the quick flash draws my brain in (typically with some sort of interesting colors, etc.).

Otherwise, I just skip through them. It seems like there could be money made studying the unique commercial-viewing habits of DVR users; I'm not sure whether my own habits are unique or common.

Also, is 'had commercial playing' the finest granularity Nielsen can provide? What percent of those people actually remembered what the ad was about? And how does that percentage compare to live TV watchers?

Comment Re:Great idea! (Score 2, Insightful) 214

This logic always irks me. Do you really believe the speculative pundits interviewed for these articles are more likely to come up with a new idea than the talented, and probably extremely intelligent, programmers who wrote the Conficker worm in the first place?

Yes, perhaps some less-than-average person has now read this article and seen the idea for the first time, but that's not someone to worry about. Usually, if you are smart enough to implement some genius idea, you are smart enough to think of it yourself.

Comment thoughts from someone in the community (Score 5, Insightful) 245

Normally I have to preface my posts with "I am not a XXXX, but...". In this case, however, I actually am a molecular biologist deeply involved in the synthetic biology community. Here are a few thoughts:

First, the amount of ignorance regarding genetic engineering and its facets (such as GMO food) is astounding. Anecdotally, I've heard that a significant fraction of British people polled said they would prefer DNA-free food. (Think about that until you realize the ridiculousness.) People typically imagine we are trying to create hybrid organisms or bizarre clone armies or something, when in reality it's just mixing DNA that encodes a series of proteins you would find useful in combination. Making glow-in-the-dark yogurt that responds to melamine would be fairly simple if you had the right set of genes: a melamine sensor that, when bound to melamine, binds to a specific DNA sequence (a promoter) that drives expression of a fluorescent protein such as green fluorescent protein ("GFP", a widely used fluorescent marker derived from a jellyfish). It's not difficult, and it's not unsafe. The vast majority of DNA and proteins are degraded rapidly in your stomach, so they are safe to eat (toxins, parasites, and infectious agents excluded).

Second, people underestimate how difficult it is to accomplish something genetically. Yes, the circuit logic above is fairly simple. Unlike electrical circuits, though, where you can control electron flow with wires, there is no such spatial regulation of biological parts; everything is very stochastic. One has to tune the concentrations such that the melamine sensor binds DNA strongly at the concentrations of melamine likely to be in food, without prematurely activating and freaking people out, while also avoiding a lawsuit because it didn't activate when it should have and someone died. Once you get the sensor right, you have to tune the promoter so that GFP expression behaves the same way: no leaky expression causing permanently green yogurt, but enough expression when activated that you can actually see it. I can build a simple circuit to drive GFP in the presence of melamine; making it commercially viable is extremely difficult.
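
For a feel for that tuning problem, here is a toy model of the sensor-promoter-GFP circuit described above, using a standard Hill activation curve plus a basal "leak" term. Every parameter value is an illustrative assumption, not a measured constant:

```python
def gfp_output(melamine_uM: float, k_half: float = 5.0, hill_n: float = 2.0,
               v_max: float = 100.0, leak: float = 1.0) -> float:
    """Steady-state GFP level (arbitrary units) vs. melamine concentration."""
    activation = melamine_uM**hill_n / (k_half**hill_n + melamine_uM**hill_n)
    return leak + v_max * activation  # 'leak' = basal expression with no melamine

for c in (0.0, 1.0, 5.0, 50.0):
    print(f"{c:5.1f} uM melamine -> GFP {gfp_output(c):6.1f}")
```

The engineering problem is exactly what the numbers show: push `leak` low enough that unexposed yogurt stays white, keep `k_half` near the contamination levels you actually care about, and make `v_max` big enough to see by eye. Each of those knobs is a wet-lab tuning campaign, not a line of code.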

Finally, and most importantly, the regulations around these types of technologies are, well, two steps from insane. There are, for example, no regulations on the transport of DNA encoding some severe toxin. Take botulinum toxin: the DNA encoding it is well known, and short enough that one could order it directly from a DNA synthesis company. From there, you can use PCR to make as many copies as you need. Then put it in your bacterium of choice, produce a whole batch, and purify it out. That entire process could be done by someone with basic college-level biology and about $5k; anybody can find the botulinum toxin sequence on, say, NCBI (run by the NIH) and get to work. And there are NO regulations on any of the steps required to produce it. A person with practical experience could do it much faster. I could produce enough to kill my entire university, starting from scratch, in about two weeks, give or take.

A second example is the definition of 'natural' when it comes to food. Any chemical produced chemically in a flask is considered artificial, even if it's molecularly identical to the natural flavor molecule. On the other hand, any synthetic flavor produced by bacteria in a vat is considered natural, as long as the sugar used to feed the bacteria is also natural. The food industry is spending billions trying to engineer bacteria to produce flavors in large quantities, because the average person thinks 'all natural' means healthier or better.

A third example involves regulation of the types of bacteria used to produce flavors: if I randomly mutagenize bacteria with UV light until I find one I like, that's considered safe, even though I have no idea what mutations I've actually made. On the other hand, if I go in and make a single, ultra-precise, targeted mutation, that's considered wildly unsafe, and the FDA will throw a fit if I try to use it.

There is a raging debate among academics about how to introduce these types of technology to the wider public. We all believe that tinkering in the garage is a good thing, but how do we do it such that we don't end up sued because we inadvertently handed some kook the sequence for botulinum toxin, without making things so controlled that no one wants to take up these basic projects? If you would like to learn more about the efforts, I would start with the NSF-sponsored research center of which I am a part.
