Science

World's Smallest Superconductor Discovered 72

arcticstoat writes "One of the barriers to the development of nanoscale electronics has potentially been eliminated, as scientists have discovered the world's smallest superconductor. Made up of four pairs of molecules and measuring just 0.87 nm, the superconductor could be used as a nanoscale interconnect in electronic devices, without the heat and power dissipation problems associated with standard metal conductors."

Comment Re:Natural Resource (Score 1) 263

Even for a new allele, say a SNP, the only possible variants are A, C, T, or G. Unless they can show that the patented modification is highly unlikely to occur naturally, any new allele should be patent-free. And they can't judge that likelihood against pure random chance either, since we know that mutations and gene modifications do not occur randomly. Making that case would be very hard.

A new treatment may or may not be patentable either. If the treatment involves a naturally occurring sequence from other people (I'm thinking of siRNA and the like), they can't patent the sequence; they can only patent the method for synthesizing it. And even then, if the method is itself a naturally occurring one (i.e., it's how the human body or another organism does it), they can't patent that either.

Comment It's a tough situation (Score 1) 429

Actually, it's a tough situation. No real-life experimental data fits the assumptions of commonly used statistical models 100%. Real data is messy, so some degree of simplification is inevitable. Meanwhile, whiz-bang fancy methods that do "fit" the real data may not be easily interpretable, and ease of interpretation is exactly what medical scientists want. There are other issues as well, such as computing time, the derivability of the equations, etc.

In addition, many medical scientists use statistics as a screening tool (e.g., for candidate genes, target enzymes, treatments, etc.). For that purpose, 100% accuracy is not really important: once the scientists have narrowed down the candidates, they can test them directly in lab animals or in people.

Comment Depends (Score 2, Insightful) 113

Many of these biology experiments require very expensive machines, such as the microarray equipment mentioned in the article. I don't know whether buying refurbished machines is wise, since we don't want data quality to be compromised. Don't forget service plans for when the machines break down or start producing inconsistent output. And that's not to mention the reagents, other chemicals, and supplies such as microarray chips needed to make the experiment yield high-quality data; these easily run hundreds of dollars apiece. Also, purchasing some of those chemicals will get you labeled a terrorist.

Another issue is gathering the samples. If you're collecting yeast, that's simple. For Arabidopsis, other small plants, mice, or other small animals, you'll probably need quite a bit of space. Humans? Not simple at all: you have to clear privacy issues, get the research review board to sign off, and so on. Sample collection alone can cost a lot of money and time. You can always fall back on publicly available data, but chances are you won't impress scientists much by going that route, and most of the important discoveries on those data sets have already been made. Most likely, all you can do is confirm existing results or add some tangential information.

Comment Re:More than a million? (Score 2, Insightful) 395

The program for my dissertation (written over three years or so) runs to more than 160K lines, and I've done several projects of that magnitude. A million lines of code is not as hard as you think. Much of the code is just accessors, bridges, and other standard patterns that you write over and over again for different use cases; each of these can easily reach 200-300 lines. If you're used to doing this, you don't even think twice about the code.
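
Purely as an illustration of the kind of boilerplate I mean (the class and field names here are hypothetical, not from my dissertation code), a rough sketch in Python; in Java or C++, with one getter/setter pair per field plus validation and logging, classes like these routinely stretch to a few hundred lines each:

# Hypothetical sketch of accessor/bridge boilerplate; the names
# (GeneRecord, GeneStore) are invented for illustration only.

class GeneRecord:
    """Plain data holder with one trivial accessor per field."""

    def __init__(self, symbol, chromosome, start, end):
        self._symbol = symbol
        self._chromosome = chromosome
        self._start = start
        self._end = end

    def get_symbol(self):
        return self._symbol

    def get_chromosome(self):
        return self._chromosome

    def get_start(self):
        return self._start

    def get_end(self):
        return self._end


class GeneStore:
    """Bridge: every public method simply delegates to a backend."""

    def __init__(self, backend):
        self._backend = backend

    def find_by_symbol(self, symbol):
        return self._backend.find_by_symbol(symbol)

    def save(self, record):
        return self._backend.save(record)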

Comment Re:Are You Serious? (Score 1) 794

No numerical difficulties at all. But the algorithm was implemented in R, and it uses smooth.spline, which in turn is implemented in Fortran; the original Fortran code is GCV and PPPack from Netlib. As far as I know, nobody else has ported those routines out of Fortran, so I did it myself in order to use them in a Q-value routine without invoking R at all.
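
For readers who haven't seen this call chain, here is a minimal sketch of leaning on a Fortran-backed smoothing spline from a high-level language. It uses scipy's UnivariateSpline, which wraps the FITPACK Fortran library; that is an analogue of, not a port of, the GCV/PPPack code behind R's smooth.spline:

# Minimal sketch: a smoothing spline whose heavy lifting happens in Fortran.
# scipy's UnivariateSpline wraps FITPACK (Fortran); it stands in here for,
# and is not the same code as, the GCV/PPPack routines behind smooth.spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# The smoothing factor s plays a role loosely analogous to the penalty
# that smooth.spline picks by (generalized) cross-validation.
spline = UnivariateSpline(x, y, s=x.size * 0.04)
print(spline(x)[:5])  # smoothed values at the original points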

Comment Re:Are You Serious? (Score 1) 794

I think ODE solvers are among the simplest cases for a numerical library; you can practically copy the algorithms out of Numerical Recipes and get away with it. But there is a LOT more out there written exclusively in Fortran that you don't want to touch with a ten-foot pole.
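
As a concrete example of that "simple case," here is a generic fixed-step fourth-order Runge-Kutta integrator of the textbook kind; it's a sketch of the standard method, not code lifted from Numerical Recipes or any library named here:

# Generic fixed-step RK4 integrator -- a textbook sketch, not code taken
# from Numerical Recipes or any of the libraries discussed above.
def rk4(f, t0, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) from t0 to t_end in n_steps equal steps."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = -y with y(0) = 1; the exact value at t = 1 is exp(-1) ~ 0.3679.
print(rk4(lambda t, y: -y, 0.0, 1.0, 1.0, 100))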

For example, consider R and why it still contains Fortran libraries, especially BLAS and LAPACK. BLAS is probably the easiest to translate, albeit tediously: it's just matrix operations (add, subtract, multiply), but there are lots of them, with many special-case paths that make it really fast. LAPACK (and associated libraries like LINPACK) requires more intimate matrix theory. I have the background to implement, say, Singular Value Decomposition (SVD), but to date the only fast SVD routines I know of are in Fortran or derived from Fortran, ALONG with the quirks and limitations of Fortran 77. Don't believe me? Look at Jama's SVD routine and how it can't handle a matrix with more columns than rows in a single pass. You can get around that by invoking the routine twice (and believe me, that happens in a lot of places even though one pass is possible), which is partly due to Fortran 77's inability to allocate arrays dynamically. The routine is so tight and fast that translating it to another language mostly comes down to the luck of machine translation (and then making sense of the output and cleaning up the mess, yada yada).
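
To illustrate the shape restriction and the double-invocation workaround described above, here is a hedged sketch using numpy's LAPACK-backed SVD as a stand-in for a Jama-style routine (numpy itself has no such restriction):

# Hedged sketch of working around an SVD routine that only accepts
# matrices with rows >= columns: decompose the transpose and swap U and V.
# numpy's LAPACK-backed svd has no such limitation; it merely stands in
# for a routine that does, like the Jama one described above.
import numpy as np

def svd_tall_only(a):
    """Stand-in for a routine that refuses 'wide' matrices (cols > rows)."""
    if a.shape[1] > a.shape[0]:
        raise ValueError("this routine requires rows >= columns")
    return np.linalg.svd(a, full_matrices=False)

def svd_any_shape(a):
    """For wide matrices, run the restricted routine on A^T and swap factors."""
    if a.shape[1] > a.shape[0]:
        u_t, s, vt_t = svd_tall_only(a.T)
        # a.T = u_t @ diag(s) @ vt_t, so a = vt_t.T @ diag(s) @ u_t.T.
        return vt_t.T, s, u_t.T
    return svd_tall_only(a)

a = np.arange(6.0).reshape(2, 3)            # 2 x 3: more columns than rows
u, s, vt = svd_any_shape(a)
print(np.allclose(a, u @ np.diag(s) @ vt))  # True: the factorization holds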

You seem to have no idea how many libraries from the '80s are still in Fortran 77 and have been left untouched. Go to Netlib (http://www.netlib.org/liblist.html), pick one Fortran library, and try to translate it into another language. See if you don't cry a river. Believe it or not, many of them are still in wide use, usually as part of newer algorithms. I've done this myself. For example, the GCV B-spline smoothing library from 1985 (downloadable from Netlib) is known to give very good results; it's nothing like the usual (and cheap) B-spline smoothing you can find off the net, the difference is heaven and earth. That GCV code is used in the Q-value routine from 2003 to determine false discovery rates by smoothing over the P-values of thousands of genes. Nobody else had tried to move it off Fortran. I took the plunge and spent 200 hours translating that one library to Java, successfully. If I'd had the option not to, I'd rather have spent those 200 hours elsewhere and used whatever Fortran-Java glue was available to get around it. Seriously.
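
For context, here is a heavily simplified sketch of the kind of Q-value calculation being described, in the spirit of Storey's 2003 procedure: estimate the proportion of true nulls by smoothing pi0(lambda) over a grid of lambda values, then convert sorted P-values into Q-values. It is an illustration of the idea only, using scipy's ordinary spline rather than the GCV Fortran library, and it is not the published routine:

# Heavily simplified Storey-style q-value sketch: estimate pi0 by smoothing
# pi0(lambda) over a grid, then convert p-values into q-values. Illustrative
# only -- not the published Q-value routine; the spline is scipy's FITPACK
# wrapper, not the GCV Fortran library discussed above.
import numpy as np
from scipy.interpolate import UnivariateSpline

def qvalues(p):
    p = np.asarray(p, dtype=float)
    m = p.size

    # pi0(lambda): fraction of p-values above lambda, rescaled.
    lam = np.arange(0.05, 0.96, 0.05)
    pi0_lam = np.array([(p > lam_i).mean() / (1.0 - lam_i) for lam_i in lam])

    # Smooth pi0(lambda) and take its value at the largest lambda.
    spline = UnivariateSpline(lam, pi0_lam, k=3)
    pi0 = min(max(float(spline(lam[-1])), 0.0), 1.0)

    # Convert sorted p-values to q-values, enforcing monotonicity.
    order = np.argsort(p)
    p_sorted = p[order]
    q_sorted = pi0 * m * p_sorted / np.arange(1, m + 1)
    q_sorted = np.minimum.accumulate(q_sorted[::-1])[::-1]

    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

# Tiny usage example with made-up p-values.
print(qvalues([0.001, 0.01, 0.02, 0.2, 0.5, 0.8, 0.9]))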

So, if you've never been involved in serious scientific library development, please don't make such arrogant and ignorant assertions. You can argue that an operating system is complex, but the principles behind it are simple, much simpler than a scientific formula, which requires far more math than just Calc 1-3 and DE 1-2. It's not simply a matter of translating a differential or an integral or what have you; that's the easy part if somebody hands it to you. The hard part is reading the scientific paper behind that Fortran code to make sense of what the code is doing. Many of the algorithms contain hacks that make the formula work in practice: certain algorithms define limits, magic constants, assumptions, etc. that are NOT explained anywhere in the paper AT ALL. Some can be found in the papers cited by that paper, for reasons that may be unclear to you. Now, if you translate the mathematical formula off the paper without really reading it, wouldn't that be a recipe for disaster?

Many people may be gifted at coding, but very, very few have the skills to translate heavily numerical algorithms. Seriously. Most coders know next to nothing about higher-level math. Netlib is just a start; it only contains algorithms from the '80s and early '90s (which are still widely used). Even Numpy relies on plenty of untouched Fortran code in its backend.

This ignorance of gigantic proportions needs to stop. Now. If you still cling to your assertion, start an open-source project that translates gigabytes of Fortran numerical libraries into a more modern language, and see if you can even attract contributors. Good luck.

Image

Voting Drops 83 Percent In All-Digital Election 156

For the first time ever, Oahu residents had to use their phones or computers to vote, with some surprising results: 7,300 people voted this year, compared to 44,000 the previous year, a drop of about 83 percent. "It is disappointing, compared to two years ago. This is the first time there is no paper ballot to speak of. So again, this is a huge change and I know that, and given the budget, this is a best that we could do," said Joan Manke of the city Neighborhood Commission. She added that voters obviously did not know about or did not embrace the changes.
Privacy

Using Net Proxies Will Lead To Harsher Sentences 366

Afforess writes "'Proxy servers are an everyday part of Internet surfing. But using one in a crime could soon lead to more time in the clink,' reports the Associated Press. The new federal rules would make the use of proxy servers count as 'sophistication' in a crime, leading to 25% longer jail sentences. Privacy advocates complain this will disincentivize privacy and anonymity online. '[The government is telling people] ... if you take normal steps to protect your privacy, we're going to view you as a more sophisticated criminal,' writes the Center for Democracy and Technology. Others fear this may lead to 'cruel and unusual punishments,' as Internet and cell phone providers often use proxies without users' knowledge to reroute Internet traffic. This may also ultimately harm corporations when employees abuse VPNs, as they too are counted as a 'proxy' in the new legislation. Tor, a common Internet anonymizer, is also targeted in the new legislation. Some analysts believe this legislation is an effort to stop leaked US Government information from reaching outside sources, such as Wikileaks. The legislation (PDF, the proposed amendment is on pages 5-15) will be voted on by the United States Sentencing Commission on April 15, and is set to take effect on November 1st. The EFF has already urged the Commission to reject the amendment."

Comment Re:What's the goal, really? (Score 1) 114

The original post made the point that "In almost all cases, the only people who actually benefit from access to particular data are a small handful of specialists," and I completely agree. The public mostly has no use for such data unless they know how to process it and understand all the rationale behind it (which implies knowing the underlying scientific process), and I agree with that as well. However, you stressed the issue of communicating to the uninitiated, which I think is misleading. Does that really apply to the data? If we were talking only about main results and summaries, I would heartily agree. But the data? No! The public knows nothing about the process behind it and, in general, doesn't care about the data.

I agree that results have to be communicated to the general public, but that's not the primary goal of a typical scientific paper; its goal is to inform other scientists in the field. For the general public, there are popsci magazines, textbooks, and universities. And that has NOTHING to do with the data.

physics and math are up there too but that's got more to do with the common-sense and intelligence of the community surrounding those subjects

Yet there is plenty of back-and-forth on that issue alone, even with many PhDs with plenty of brainpower on both sides. If you don't follow the math and understand the assumptions down to the gory details, it's virtually impossible to decide which analysis is valid and which is not. Sure, you can use your so-called common sense and intelligence: after all, this is our environment and we've got to do something regardless of what the global-warming analyses say, right? Then you can safely chuck every valid analysis that belongs to the "other camp" and subscribe to whatever analyses come from "your camp." Presto, problem solved, right? What I do know is that each side has valid points, but I'm not qualified to judge them because I don't know the gory details.

If only the results or summaries matter to you, you don't need access to those papers at all (saving you $$$); just subscribe to your favorite popsci magazines (which cost much less, invalidating your ivory-tower $$$ claim). The papers discuss how the data were gathered, processed, and turned into the result, along with numerous subtleties in the assumptions and inherent limitations of the methods used, but all of that is of no use to the general public. Without knowing it, however, one's understanding of the result is incomplete and can be misleading. If you read only the scientists' conclusions, as the general public would, you're essentially taking their word at face value. That's dangerous: I've seen too many occasions where even so-called seasoned scientists were unaware of the subtleties of the methods they were using and misinterpreted their own reports (yes, despite peer review).

While I agree that reading the original paper is very important for making an informed decision, it's inaccessible to the general public anyway because of the sheer amount of background knowledge required. Even the supposedly simple three-page Einstein paper you linked fails to provide sufficient explanation for a lay audience. I doubt the general public would benefit from open access to any actual research paper, let alone the data. So your accusation of elitism is completely unfounded. It's not elitism; it's how science is done.

I would urge the public to educate themselves far beyond popsci writing so that they can make informed decisions, but that's not for everyone. Those who are really interested should devote the time to study the subject, and only then are they worthy of access to the data.

Comment Re:What's the goal, really? (Score 1) 114

Anything that complicates the retrieval of knowledge ends up reducing access to that knowledge. Why should someone have to put up with a manual process when we have this thing called the internet? The internet is designed to facilitate access to knowledge, so it is the tool of choice.

Yes, and there are open-access journals already. Guess what? The scientists (i.e., the paper authors) are required to pay much more for open access. Heck, they're required to pay for non-open journals as well. Don't believe me? Ask your fellow scientists. Some scientists simply wise up, skip the extra charges, and still satisfy the publish-or-perish demands.

While some readers of papers may not understand the content fully, it is sometimes enough to start the quest of understanding.

That's the job of textbooks, popsci magazines, universities, or even wikis and encyclopedias. In general, papers exist to communicate novel results among scientists in the field, not to newcomers.

Science suffers from a lack of people entering the field, so anything that can make it easier to access knowledge makes the idea of entering less daunting. In many ways this can be seen as part of the PR process.

Anything to make it easier? There are no shortcuts in science, just as there are no shortcuts in computer programming. Shortcuts make bad scientists, just as they make bad programmers, and a PR process doesn't help. Aren't you afraid of quantum physicists who don't know squat about calculus? You should be, just as you would be of OS programmers who don't know squat about subroutines. Getting hold of papers is the least daunting task for a budding scientist (unless perhaps they have a phobia of libraries). The most daunting tasks are typically the math and finding a suitable mentor.

The other way of approaching the issue, is simply asking why journals should be the only ones allowed to publish the information? They aren't paying anyone for the content, yet they are requiring a monopoly of the publishing of the given paper.

Journals double-dip: they charge the scientists who author the papers and the customers who read them. I agree that free, no-charge journals should be formed, but somebody needs to pay the bills.

Establishing a journal is tedious and very involved (getting scientists to do peer review, getting papers edited and published, building a reputation, finding $$$, etc.). I can't see an easy way around it; it's a chicken-and-egg problem. Scientists won't volunteer to review for a no-name journal, but a journal's reputation is built on quality, rigorously peer-reviewed publications, and scientists won't submit their best work to a no-name journal either. After all, scientists need to get tenure, right? And tenure is evaluated by how many publications appear in famous journals. I don't think "competition" will solve the problem much. But some journals already make their paper collections open access, and I think it's only a matter of time before open access becomes the norm.

Comment Re:What's the goal, really? (Score 1) 114

Einstein managed to get away with three elegant pages and zero references

Science has evolved a great deal since 1905. Even with zero references, Einstein was still implicitly citing the results of Lorentz. By today's standards, going without citations like that would be unacceptable.

Let me ask you this: can you honestly expect a high school student or a freshman to understand even that paper without grasping the concepts of differential equations (DEs)? They can't. Sure, you can understand the motivation and introduction of that paper, just as you can for typical scientific papers. But when you start delving into the formulas, i.e., the actual "meat," you suddenly need to know far more than what the scientists spell out in words. I have no background in physics and I can't follow the derivation of the formulas in section I.3 of that paper even though I know DEs; in other words, I'm lost at section I.3 and cannot see how Einstein arrived at his conclusions. Maybe a little knowledge of physics would help. There is some baseline knowledge you have to expect your audience to know; you can't explain everything.

Let's face it: English is an ambiguous medium for transferring scientific knowledge. Mathematical formulas are far more succinct and far less ambiguous. If you think you can sidestep the formula part of a paper, you're dreaming; you might be better off reading popular science magazines.

The folks at RealClimate are just commenting on their results; those posts aren't real papers. That kind of writing is closer to popsci-magazine style and glosses over much of how they arrive at their results. To some degree it's useful, but to me the gory details matter more, because only with them can I see the assumptions and theoretical limitations behind the underlying formulas, and how to advance the work further or make the estimates more precise.

Scientists do not take other scientists' word at face value, and neither should laymen. Given the back-and-forth over the climate crisis, I think reading only the research commentary will just add to the public's confusion, or worse, create camps. We don't want that to happen. So I think it's wise for the public to read far beyond popsci writing.

Comment Re:What's the goal, really? (Score 1) 114

To be honest, if your institution doesn't foot the bill for a subscription, try inter-library loan; that's easy. Most credible institutions in the US have subscriptions to the more mainstream journals; it's a different story if you're in a third-world country.

The problem with scientific publication is that you need to be terse; papers are limited to 8-12 pages. If you were required to cover the background knowledge for the uninitiated, you'd produce a 1000-page book instead. Moreover, the reviewers would think you spent too much space on things the intended audience is assumed to know already.

Face it: the knowledge we have so far is an accumulation of previous knowledge, and those at the cutting edge are expected to know the background already. Try explaining measure theory to high school students who have no idea what calculus is. If your research has anything to do with measure theory, your result is pretty much out of reach for those students, let alone any data associated with that research.

Scientists want their work to be known, but they don't have the patience or the five years it would take to explain it to aspiring noobs. Sorry; they have a lot more research to do. If you want to understand the research, do your homework and study the subject carefully for a few years. Then you'll appreciate whatever data or papers the scientists publish.
