Comment Re:What is the long term plan? (Score 5, Informative) 41

The main purpose is to slow the spread, so that health care infrastructure can keep up with the demand. The quality of care also improves over time, since health practitioners learn more and more about how best to manage the disease. (In the extreme case, if we can slow the spread enough then some people will get the vaccine before getting the real virus.)

This illustrates the idea in graphical form.

Comment Re:Makes one wonder (Score 1) 41

There's a difference between "works" and "works well". I was recently scheduled to teach a 2-day short course; the meeting was cancelled (due to COVID19), so we switched to giving the lectures through video-conferencing and doing Q&A using a chat channel. It worked okay, but was not nearly as engaging as an in-person meeting. When courses are run well, the back-and-forth between instructor and students helps make the content more relevant and memorable. (E.g. the instructor can read body language and know when a concept needs to be re-explained.)

Overall, there are certainly lessons to be learned in terms of leveraging online education models to improve efficiency. And I'm not defending the dated "professor droning in front of bored students" teaching model, which could indeed be improved in numerous ways (including by leveraging online components). However, currently there is no online experience that can replicate the advantages of in-person discussion, and thus a purely online course will not be as effective as a properly run in-person lecture+discussion.

Comment Re:This should be a given.. (Score 3, Informative) 47

The base-pair sequence of DNA determines its biological function. As you say, this sequence determines what kinds of proteins get made, including their exact shape (and more broadly how they behave).

But TFA is talking about the conformation (shape) of the DNA strand itself, not the protein structures that the DNA strand is used to make.

In living organisms, the long DNA molecule always forms a double-helix, irrespective of the base-pair sequence within the DNA. DNA double helices do actually twist and wrap into larger-scale structures: specifically by wrapping around histones, and then twisting into larger helices that eventually form chromosomes. There are hints that the DNA sequence itself is actually important in controlling how this twisting/packing happens (with ongoing research about how the (inappropriately-named) "junk DNA" plays a crucial role). However, despite this influence between sequence and super-structure, DNA strands are essentially just forming double-helices at the lowest level: i.e. two complementary DNA strands are pairing up to make a really-long double-helix.

What TFA is talking about is a field called "DNA nanotechnology", where researchers synthesize non-natural DNA sequences. If cleverly designed, these sequences will, when they do their usual base-pairing, form a structure more complex than the traditional "really-long double-helix". The structures that are designed do not occur naturally. People have created some really complex structures, made entirely using DNA. Again, these are structures made out of DNA (not structures that DNA generates). You can see some examples by searching for "DNA origami". E.g. one of the famous structures was to create a nano-sized smiley face; others have 3D geometric shapes, nano-boxes and bottles, gear-like constructs, and all kinds of other things.

The 'trick' is to violate the assumptions of DNA base-pairing that occur in nature. In living cells, DNA sequences are created as two long complementary strands, which pair up with each other. The idea in DNA nanotechnology is to create an assortment of strands. None of the strands are perfectly complementary to each other, but 'sub-regions' of some strands are complementary to 'sub-regions' on other strands. As they start pairing up with each other, this creates cross-connections between all the various strands. The end result (if your design is done correctly) is that the strands spontaneously form a very well-defined 3D structure, with nanoscale precision. The advantage of this "self-assembly" is that you get billions of copies of the intended structure forming spontaneously and rapidly. Very cool stuff.
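As a toy illustration of that sub-region idea (the sequences below are made up, and real design tools also account for binding thermodynamics and kinetics, not just complementarity), here is how three strands can cross-link into one assembly even though no two strands are fully complementary:

```python
# Toy sketch of sub-region complementarity (illustrative only).
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(seq):
    """Reverse complement: the sequence that base-pairs with `seq`."""
    return "".join(COMP[base] for base in reversed(seq))

def pairs_with(a, b):
    """True if region b can hybridize with region a (b is a's reverse complement)."""
    return b == revcomp(a)

# Three hypothetical strands, each built from two 8-base sub-regions.
# No strand is the full complement of another, but each half pairs with
# a half of a different strand, linking all three into one structure.
r1, r2, r3 = "ATGGCTCA", "GGAACTCC", "TTACGGAT"
s1 = r1 + r2
s2 = revcomp(r2) + r3
s3 = revcomp(r3) + revcomp(r1)
```

Here s1's second half pairs with s2's first half, s2's second half with s3's first half, and s3's second half closes the loop back to s1, so the three strands assemble into a closed triangle rather than a simple duplex.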

This kind of thing has been ongoing since 2006 at least. TFA erroneously implies that this most recent publication invented the field. Actually, this most recent publication is some nice work about how the design process can be made more robust (and software-automated). So, it's a fine paper, but certainly not the first demonstration of artificial 3D DNA nano-objects.

Comment Non-deterministic sort (Score 4, Interesting) 195

Human sorting tends to be rather ad-hoc, and this isn't necessarily a bad thing. Yes, if someone is sorting a large number of objects/papers according to a simple criterion, then they are likely to be implementing a version of some sort of formal searching algorithm... But one of the interesting things about a human sorting things is that they can, and do, leverage some of their intellect to improve the sorting. Examples:
1. Change sorting algorithm partway through, or use different algorithms on different subsets of the task. E.g. if you are sorting documents in a random order and suddenly notice a run that are all roughly in order, you'll intuitively switch to a different algorithm for that bunch. In fact, humans very often sub-divide the problem at large into stacks, and sub-sort each stack using a different algorithm, before finally combining the result. This is also relevant since sometimes you actually need to change your sorting target halfway through a sort (when you discover a new category of document/item; or when you realize that a different sorting order will ultimately be more useful for the high-level purpose you're trying to achieve; ...).
2. Pattern matching. Humans are good at discerning patterns. So we may notice that the documents are not really random, but have some inherent order (e.g. the stack is somewhat temporally ordered, but items for each given day are reversed or semi-random). We can exploit this to minimize the sorting effort.
3. Memory. Even though humans can't juggle too many different items in their head at once, we're smart enough that, when we encounter an item, we can recall having seen similar items. Our visual memory also allows us to home in on the right part of a semi-sorted stack in order to group like items.
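Point 1 above is essentially what "natural" merge sorts (and, famously, Timsort) do in software: detect stretches that are already in order and exploit them. A minimal Python sketch of the run-detection idea (illustrative, not optimized):

```python
import heapq

def natural_merge_sort(items):
    """Sort by first detecting already-ordered runs (as a human might
    notice a roughly in-order stretch), then merging the runs pairwise."""
    if not items:
        return []
    # Split the input into maximal non-decreasing runs.
    runs, run = [], [items[0]]
    for x in items[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    # Merge runs pairwise until a single sorted list remains.
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs) - 1, 2):
            merged.append(list(heapq.merge(runs[i], runs[i + 1])))
        if len(runs) % 2:
            merged.append(runs[-1])
        runs = merged
    return runs[0]
```

On already-ordered or mostly-ordered input this does far less work than a naive sort, which is exactly the shortcut a human takes upon noticing a pre-sorted stretch.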

The end result is a sort that is rather non-deterministic, but ultimately successful. It isn't necessarily optimal for the given problem space, but conversely the human's intellect allows them to generate lots of shortcuts during the sorting process. (By which I mean, a machine limited to paper-pushing at human speed, but implementing a single formal algorithm, would take longer to finish the sort... Of course in reality mechanized/computerized sorting is faster because each machine operation is faster than the human equivalent.)

Comment Re:Just another step closer... (Score 1) 205

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(emphasis added) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms, they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist. Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many times in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples would be: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those areas. To the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we cannot ever observe (e.g. beyond the cosmic horizon).

An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about to what extent we are justified in using predictions of theories to say indirect things are 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence predicts many parallel branches (it's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch-prediction is 'simpler' is debatable.

Just about any other idea looks preferrable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate, but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics so far have failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts it. Or, at the very least, not to dismiss entirely these predictions unless you have empirical evidence against either them or unitary quantum mechanics itself.

Comment Re:Can't have it both ways (Score 1) 330

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/non-open decision is motivated by quality or by money. A great many of the design decisions that are being made are not in the pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true very generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd party sources (I understand this comes with risks)", which is what Android does but iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessed via commandline flags or a text config file).
- Building a product in a non-user-servicable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data-structures can in some cases be necessary in order to achieve a goal (e.g. performance). However in most cases it's actually used to lock-in the user (prevent user from directly accessing data, prevent third-party tools from working). E.g. the way that iPods appear to store the music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to-and-from the device). The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release all your internal docs so that others can easily build upon it or extend it.

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But, actually, there are many examples where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device and ultimately we're talking about experienced users and third-party developers.) So, we should at least demand that companies make their products open in all those 'low-hanging-fruit' cases. We can then argue in more detail about fringe cases where there is really an openness/quality tradeoff.

Comment Re:n = 1.000000001 (Score 3, Informative) 65

I'm somewhat more hopeful than you, based on advances in x-ray optics.

For typical x-ray photons (e.g. 10 keV), the refractive index is 0.99999 (delta = 1E-5). Even though this is very close to 1, we've figured out how to make practical lenses. For instance Compound Refractive Lenses use a sequence of refracting interfaces to accumulate the small refractive effect. Capillary optics can be used to confine x-ray beams. A Fresnel lens design can be used to decrease the thickness of the lens, giving you more refractive power per unit length of the total optic. In fact, you can use a Fresnel zone plate design, which focuses the beam due to diffraction (another variant is a Laue lens which focuses due to Bragg diffraction, e.g. multilayer Laue lenses are now being used for ultrahigh focusing of x-rays). Clever people have even designed lenses that simultaneously exploit refractive and diffractive focusing (kinoform lenses).

All this to say that with some ingenuity, the rather small refractive index differences available for x-rays have been turned into decent amounts of focusing in x-ray optics. We have x-rays optics now with focal lengths on the order of meters. It's not trivial to do, but it can be done. It sounds like this present work is suggesting that for gamma-rays the refractive index differences will be on the order of 1E-7, which is only two orders-of-magnitude worse than for x-rays. So, with some additional effort and ingenuity, I could see the development of workable gamma-ray optics. I'm not saying it will be easy (we're still talking about tens or hundreds of meters for the overall camera)... but for certain demanding applications it might be worth doing.
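As a rough illustration of the scaling (the lens geometry below is an assumed example; it uses the standard thin-lens stack formula f = R/(2Nδ) for a parabolic compound refractive lens, with apex radius R, N lenses, and refractive decrement δ where n = 1 - δ):

```python
# Back-of-envelope CRL focal lengths. The geometry (R = 50 µm, N = 10)
# is an illustrative assumption; the deltas match the values above.
def crl_focal_length(radius_m, n_lenses, delta):
    """Thin-lens focal length of a stack of parabolic refractive lenses."""
    return radius_m / (2 * n_lenses * delta)

f_xray = crl_focal_length(50e-6, 10, 1e-5)    # x-rays, delta ~ 1e-5
f_gamma = crl_focal_length(50e-6, 10, 1e-7)   # gamma rays, delta ~ 1e-7
print(f_xray, f_gamma)  # 0.25 m vs 25 m: same optic, 100x longer
```

The two-orders-of-magnitude smaller δ translates directly into a 100x longer focal length for the same optic, which is consistent with the tens-of-meters instrument scale mentioned above.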

Comment High resolution but small volume (Score 5, Informative) 161

The actual scientific paper is:
C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, D. Rugar, "Nanoscale magnetic resonance imaging," PNAS 2009, doi: 10.1073/pnas.0812068106.

The abstract:

We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the 'resonant slice' that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the 1H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.

I think it's important to emphasize that this is a nanoscale magnetic imaging technique. The summary implies that they created a conventional MRI that has nanoscale resolution, as if they can now image a person's brain and pick out individual cells and molecules. That is not the case! And that is likely to never be possible (given the frequencies of radiation that MRI uses and the diffraction limit that applies to far-field imaging).

That having been said, this is still a very cool and noteworthy piece of science. Scientists use a variety of nanoscale imaging tools (atomic force microscopes, electron microscopes, etc.), but having the ability to do nanoscale magnetic imaging is amazing. In the article they do a 3D reconstruction of a tobacco mosaic virus. One of the great things about MRI is that it has some amount of chemical selectivity: there are different magnetic imaging modes that can differentiate based on chemical makeup. This nanoscale analog can use similar tricks: instead of just getting images of surface topography or electron density, it could actually determine the chemical makeup within nanostructures. I expect this will become a very powerful technique for nano-imaging over the next decade.

Earth

Plasma Plants Vaporize Trash While Creating Energy 618

Jason Sahler writes "Recently St. Lucie County in Florida announced that it has teamed up with Geoplasma to develop the United States' first plasma gasification plant. The plant will use super-hot 10,000 degree Fahrenheit plasma to effectively vaporize 1,500 tons of trash each day, which in turn spins turbines to generate 60MW of electricity — enough to power 50,000 homes!"

Comment Orientation analysis in an image (Score 3, Informative) 215

The image analysis question is interesting. You are trying to read dial positions, so conventional OCR is probably useless (unless there is a package to do exactly that?).

What you can do is use image processing commands (in your favorite environment: a shell script, Python, etc.) to crop the image and generate a small image for each dial. Then convert to grayscale (and maybe increase the contrast to highlight the dial). To then calculate the preferred orientation in the image, you calculate gradients along different directions. There will be a much higher value for the gradient along directions perpendicular to the preferred axis. This procedure is described very briefly in this paper:
Harrison, C.; Cheng, Z.; Sethuraman, S.; Huse, D. A.; Chaikin, P. M.; Vega, D. A.; Sebastian, J. M.; Register, R. A.; Adamson, D. H. "Dynamics of pattern coarsening in a two-dimensional smectic system" Physical Review E 2002, 66, (1), 011706. DOI: 10.1103/PhysRevE.66.011706

This is easiest to do if you use a graphics package that has directional gradients built-in (but coding it yourself probably wouldn't be too hard). Basically you create copies of the image and on one you do a differentiation in the x-direction, and for the other one a differentiation in the y-direction. Let's call these images DIFX and DIFY. Then you compose two new images:
NUMERATOR = 2*DIFX*DIFY
DENOMINATOR = DIFX^2-DIFY^2

Then you calculate a final image:
ANGLES = atan2( NUMERATOR, DENOMINATOR )

(All the above calculations are done in a pixel-by-pixel mode.) The final image will have an angle map (with values between -pi to pi) for the image. It should be easy to then use the avg or max over that image to pull out the preferred direction. You may also improve results by tweaking the initial thresholding, or by adding an initial "Sharpen Edges" step, or by blurring the NUMERATOR and DENOMINATOR images slightly before doing the next step.
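A minimal NumPy sketch of the recipe above (with one judgment call worth flagging: here the doubled-angle components are averaged before the atan2, which avoids wrap-around problems when averaging angles directly, and the result is halved to recover the physical orientation):

```python
import numpy as np

def dominant_orientation(gray):
    """Estimate the dominant orientation of a grayscale image (2D array)
    via the directional-gradient method described above. Returns the
    gradient direction in radians, in (-pi/2, pi/2]."""
    # Directional derivatives (simple finite differences).
    difx = np.gradient(gray, axis=1)   # DIFX
    dify = np.gradient(gray, axis=0)   # DIFY
    numerator = 2.0 * difx * dify      # NUMERATOR = 2*DIFX*DIFY
    denominator = difx**2 - dify**2    # DENOMINATOR = DIFX^2 - DIFY^2
    # Average the doubled-angle components, then take atan2 and halve.
    angle2 = np.arctan2(numerator.mean(), denominator.mean())
    return angle2 / 2.0
```

For a dial image, the returned angle tracks the needle's direction (modulo 180 degrees); disambiguating which end of the needle is which would need an extra step, e.g. comparing pixel mass on either side of the center.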

In any case, the above procedure has worked for me when coding image analysis for orientation throughout an image (coding was done in Igor Pro in my case). So maybe it is useful for you.

Comment Re:while historical chemical advances (Score 4, Insightful) 610

As a chemist and practicing scientist, I can attest to the phenomenal costs of doing modern science (much of which comes from safety regulations and associated "certified" equipment). So I do agree that it is very difficult in the modern age for a hobbyist in their garage to make a groundbreaking discovery... That having been said, I think there are many reasons why hobbyist chemistry (and hobbyist science in general) is a good thing:

1. The combinatorial space in science (and in the production of chemicals especially) is absolutely massive. There is no practical way for chemists to explore it all, so of course they make educated guesses about what is both (a) reasonably easy to make; and (b) of some practical value. However because the combinatorial space is large, there is still plenty of uncharted territory for others to explore. Random fortuitous discoveries are certainly a part of science.

2. Hobbyists can afford to do research that is risky and has no obvious application (I mean "risky" in the sense of "it might not work or lead anywhere" and not in the sense of "it might be dangerous"). They don't have to satisfy funding agencies or pragmatic concerns. They can just explore. Thus they can sometimes pursue crazy lines of inquiry that established scientists wouldn't touch.

3. There is such a thing as having your creativity inhibited by institutionalized concepts. A hobbyist isn't as restricted by the "well-established-rules" of the field, and thus may make creative discoveries others would have missed. (This is rare, by the way: the vast majority of science comes from pushing along using well-established procedures and concepts... but rare "out of the box" discoveries are also important in science.)

4. Doing chemistry (or science in general) on a budget, using only commonly-available equipment, can actually force specific kinds of discoveries. Specifically, it helps to discover things that are cheap (which industry loves!) since it can be done with commodity chemicals and tools. (Who knows, there may be a cheap way to make a better antifreeze using only what is in your house and back-yard.) So hobbyists actually have a chance to discover things that will actually make an impact on industry (whereas the chance that they discover something fundamentally new, without modern diagnostic tools, is slimmer).

5. Finally, even if the hobbyist doesn't actually discover anything new or interesting (which is, by far, the most likely outcome), it has a positive effect on the participants. The people doing it are doing so for fun (presumably), and that in itself is reason enough. Moreover it may be the catalyst for someone to go into science professionally. The ability to make kids enthusiastic about science should not be overlooked. Like most hobbies, hobby-science is more about the process than the end result.

Comment Bad example... (Score 5, Insightful) 610

As a chemist, I definitely like the idea of hobby chemists, and/or home laboratories. People should be free to do science at home if they are so inclined. But this is in some sense a bad example:

Charles Goodyear figured out how to vulcanize rubber with the same stove that his wife used to bake the family's bread.

You should never use the same equipment for your chemistry as for your other household things. If you're going to do chemistry at home, do it safely. This means having a separate (well-ventilated) room for your work, and using separate ovens, microwave, glassware, and other equipment for your work. Chemical contamination is a real threat. You may look at a chemical reaction and deem all the reactants and products to be safe... but if you make a mistake you may contaminate a room/oven/glassware with a more dangerous side-product. And you do not want to be then ingesting these contaminants (worse, you do not want to expose your family and friends).

So, like I said, be safe and use dedicated equipment for your experiments. (And don't brush your teeth with the toothbrush you use to clean your test tubes.)

Education

Submission + - GPL Edutainment Software

haxot writes: "I'm the technologist at a local library. In our lab, I've managed to get some recognition for tools such as GIMP and Open Office, and even such toys as Bomberman & BZFlag. Now I'm turning towards the children's computers, which are mostly filled with ancient, buggy, rather boring games that try to be interactive TV shows rather than something entertaining. I'm looking for (preferably multi-platform — I want to be ready for an OS switch to Linux) OSS style software. I'm not picky about the license, but most especially picky about the software actually having that "neat" appeal. Some stuff I've found already is Gcompris and Tux Paint.
My focus is the 2 year old to 8 year old — but I'm happy to hear teen-oriented suggestions too. As a public library however, I can't have any software on the computers that is risqué, gory, or too violent.
So does anyone know of any family-friendly edutainment, multi-OS OSS games?"
Math

Submission + - Party Ideas For Nerds? 4

rbf writes: "I am wondering what party ideas /. readers have for a group of nerds? There is a girl I like at my university who is a graduate student in mathematics who will be having a birthday next month. She had thought of having a nerd-themed party with things such as coming with tape on glasses, pants hiked up, etc. However, she decided against it as most of her friends are math nerds and wouldn't have to dress up! So my question for the /. community is: Are there any fun party ideas that would be appealing to a group of nerds that consist mostly of math majors?"
