
Comment: Re:Not a computing element (Score 5, Informative) 183

by wanax (#47304467) Attached to: How Vacuum Tubes, New Technology Might Save Moore's Law

That's mentioned in the IEEE Spectrum article (which by the way is about the most clearly written article on an early prototype technology that I've ever read).
The problems are:
-Voltage is too high; this can probably be mitigated by better geometry.
-Simulations for improving the geometry are insufficient at present, with the caveat that getting better performance (voltage-wise) might compromise durability.
-Because of the above, they don't yet have a good set of design rules for producing an integrated circuit. They're hopeful about this step, since the technique uses well-established CMOS technology and there are many tools available.

Their next targets are things like gyroscopes and accelerometers. I'd say on the whole this strikes me as realistic and non-sensational. But if anybody knows better, I'd like to hear why.

Comment: citation puffery (Score 1) 231

This is no different from trying to measure scholars' intellectual impact with citation metrics like the h-index and its many recent successors, which try to repair the weaknesses of a fatally flawed idea. Citation counting makes no distinction between positive and negative citation, ignores the raw fact of historical precedence, and preserves every historical bias a culture may have.
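For concreteness, the h-index mentioned here is trivial to compute, which is part of why it's so popular despite its flaws. A quick sketch (the function name and example citation counts are mine, purely illustrative):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# A scholar with papers cited [10, 8, 5, 4, 3] has h-index 4:
# four papers each have at least 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that nothing in the computation cares whether a citation is "X proved this" or "X was completely wrong": that's exactly the positive-vs-negative citation problem.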

The very top tier of the most influential people in world history isn't particularly debatable, yet this list failed to capture it. In alphabetical order (and assuming they all existed):

Aristotle
Buddha
Confucius
Homer
Jesus
Lao Tzu
Muhammad
Plato
Ved Vyasa

Then there's the next tier, which includes people like Alhazen, Alexander, Augustine, Einstein, Genghis, Hammurabi, Imhotep, Newton, Linnaeus, Peter (of Russia), Shakespeare, Suleiman, Zeami Motokiyo, and so on; I'm sure the further I tried to extend the list, the more it would converge with my own cultural history.

While unsupervised algorithms can often find interesting things in high-dimensional data, they aren't interpretable without some expert knowledge. And if you don't have at least the 9 entries I mentioned above in your top 20, you can toss the method.

Comment: Yes, there are methods available (Score 5, Insightful) 552

by wanax (#47074657) Attached to: Ask Slashdot: Communication With Locked-in Syndrome Patient?

Yikes, that sounds like a terrible experience. My sympathies to your sister-in-law and the whole family.

There are several methods available, most prominently implanting arrays of electrodes over pre-motor cortex, whose activity can then be decoded online and used to control a computer pointer.

See for example:
http://www.youtube.com/watch?v...

You might want to contact Frank Guenther at BU, who has worked on this for several years and started the Unlock Project specifically for people in your sister-in-law's situation.

Comment: Re:Molecules shmolecules (Score 2) 274

by wanax (#46866813) Attached to: Male Scent Molecules May Be Compromising Biomedical Research

There's a huge bias toward using exclusively male mice in many types of research, and the issue of higher variance in female rodent behavior (due to estrous-cycle effects, among others) is well known (see e.g.: pdf).

There are also related problems with stress and over-training in neuroscience more generally. Experienced investigators are able to produce a much less stressful working environment for animals, so they tend to get different results than neophyte investigators even when following the same protocol. This shows up a lot when a different lab tries to replicate the work of an experienced post-doc: they get null results for the first 6 months, then suddenly everything replicates. This is often attributed to 'correcting' the protocol (usually after extensive communication with the original lab), but I think the change is often attributable to the investigator in the replicating lab becoming experienced enough to relieve the animals' stress (I don't have a great link for this; it's mostly an observation from having been around quite a few labs).

Over-training is also a problem, since it often takes thousands of trials (sometimes well into the hundreds of thousands) to train animals in complex cognitive tasks, and it's well known from experiments in humans (and a few in non-human primates and rodents) that neural responses shift profoundly between 'trained' and 'over-trained' states: compare amateur and professional ballerinas watching videos of ballet.

However, these issues are a much bigger problem in pre-clinical research than in basic research. Our understanding of the brain is sufficiently limited that the effects we're used to seeing in basic research swamp the potential modulation from gender, stress, and training factors (unless you're talking about stress research specifically, but that field is pretty careful about controlling for these types of effects). The issue with pre-clinical research is that the difference between the current treatment and a proposed treatment is often only a few percent (note: if valid, this can mean thousands of lives saved or hugely improved), so failing to identify and control for factors such as researcher or mouse gender can overwhelm the supposed primary result.

Comment: Opt-in vs Opt-out (Score 1) 769

by wanax (#46864945) Attached to: The Koch Brothers Attack On Solar Energy

To destroy the world's carrying capacity for humanity through nuclear weapons, we have to opt in to global thermonuclear war. To destroy that same capacity through climate change simply requires that a modest proportion of the world's population fails to opt in to mitigating carbon release (the cost of mitigation is actually pretty small, around 2% of global GDP).

Comment: Re:Sand in our Brain (Score 2) 105

by wanax (#46680687) Attached to: Sand in the Brain: A Fundamental Theory To Model the Mind

With regard to question 2: no.
Question 1 is an ongoing field of research. Some of the work I've found helpful in approaching it:
-The Computational Beauty of Nature (Gary William Flake)
-Barriers and Bounds to Rationality (Peter Albin; there are free pdf copies available online).
-A New Kind of Science (Stephen Wolfram; also available free online).

Comment: Re:Sand in our Brain (Score 5, Informative) 105

by wanax (#46680197) Attached to: Sand in the Brain: A Fundamental Theory To Model the Mind

The linked article was horribly written. I'll take a shot at explaining it (or rather, a really, really simplified version of it).

Two of the fundamental problems that neural circuits must solve are the noise-saturation dilemma and the stability-plasticity dilemma. The first is best explained in the context of vision. Our visual system is capable of detecting contrast (i.e. edges) over a massive range of brightness, spanning roughly a factor of 10^10. Given that neurons have limited firing rates (typically between 0 and 200 Hz), there needs to be some normalization mechanism that allows useful contrast processing over massive variations in absolute input (more on this later). The stability-plasticity dilemma is that the brain needs to be flexible enough to learn from a single event (say, that touching a hot stove is a bad idea), but once learned, memories have to be stable enough to last the rest of a creature's life span.
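One classic answer to the noise-saturation dilemma is divisive (shunting) normalization, as in Grossberg-style on-center off-surround networks. A minimal sketch of the steady-state behavior (the parameter values A and B here are illustrative, not taken from any specific paper):

```python
def shunting_steady_state(inputs, A=1.0, B=200.0):
    """Steady state of a shunting on-center off-surround network:
    x_i = B * I_i / (A + sum(I)).  Responses stay bounded in [0, B]
    (a finite 'firing rate' range) yet preserve the ratios of the inputs."""
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

dim    = [10.0, 20.0, 40.0]              # a dim scene
bright = [I * 1e6 for I in dim]          # the same scene, a million times brighter

print(shunting_steady_state(dim))
print(shunting_steady_state(bright))     # nearly the same response pattern:
                                         # relative contrast, not absolute
                                         # intensity, drives the output
```

The division by the total input is what lets a neuron with a 0-200 Hz range report contrast usefully across a 10^10 range of brightness.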

The stability-plasticity dilemma implies that neural circuits must operate in at least two (as I said, very simplified) distinct states, a "resting" or "maintenance" state, and a "learning" state, and that there is a phase-transition point in between them. Furthermore, these states need to have the following properties regarding stability:
1) the learning state must collapse into the maintenance state in the absence of input (otherwise you get epilepsy).
2) reasonable stimulation (input) during the resting state must be able to trigger a phase change into the learning state (or you become catatonic).

Many circuits/mechanisms have been proposed to explain how the brain solves these dilemmas. Most of them involve defining a recurrent neural network that uses some combination of gated diffusion and oscillatory dynamics to fit the well-known oscillatory and wave-based dynamics recorded in neural circuits. Some of these models learn intrinsically via a learning rule (e.g. self-organizing maps), while others are fit by the researcher. One key point about this class of models (as opposed to TFA's approach) is that they have a macro-circuit architecture specified by the modeler. Typically these models are at least somewhat sensitive to parametric perturbation.

TFA describes another approach, which comes out of research on cellular automata by Ulam, von Neumann, Conway, and Wolfram. This approach posits that parametric stability and macro-circuit organization are only loosely important so long as the system obeys a certain set of rules regarding local interaction (which could also be thought of as a micro-circuit), because it will self-organize to a point of 'critical stability'. In the two-state model described above, this approach predicts that neural circuits are always at a state of 'critical stability', where maintenance occurs through frequent small perturbations or avalanches, and any new input will trigger a large avalanche, causing learning. Bak proposed this as a general model of neural circuit organization. One trademark of these types of models is that they show 'scale free' or 'power law' behavior, where the frequency of an event is inversely proportional to a power of its size. Some recent data show power-law dynamics in neural populations (a lot of other data don't).
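The canonical toy model behind this idea is the Bak-Tang-Wiesenfeld sandpile. A minimal sketch (grid size, grain count, and seed are arbitrary choices of mine, not from any neural model):

```python
import random

def sandpile_avalanches(n=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time onto an
    n x n grid; any cell holding >= 4 grains topples, sending one grain
    to each of its 4 neighbors (grains fall off the edge).  Returns the
    size of each avalanche (number of topplings per dropped grain)."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        size = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue          # already toppled via an earlier visit
            grid[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(size)
    return sizes

sizes = sandpile_avalanches()
nonzero = sorted(s for s in sizes if s > 0)
# Once the pile has self-organized, most avalanches are tiny but a few
# are huge: the heavy-tailed signature of criticality.
print(max(nonzero), nonzero[len(nonzero) // 2])
```

No tuning is needed to reach this state; the pile drives itself to the critical point, which is exactly the appeal of the hypothesis for neural circuits.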

One big problem with the critical-stability hypothesis is that it doesn't deal well with the noise-saturation dilemma: it needs to produce the same general size of avalanche whether it's hit by one grain of sand or by 10^10 grains.

None of this is particularly new; neural avalanches (albeit in a different context) were postulated in the early 70s. Could some systems in the brain exploit self-organized criticality? Sure, but there is a lot of data out there that's inconsistent with it being the primary mechanism of neural organization.

Comment: Re:Spain loves Android (Score 2) 161

by wanax (#46667585) Attached to: Illustrating the Socioeconomic Divide With iOS and Android

Having recently been in Spain (with my unlocked iPhone 4 in tow), I can tell you that support for iPhones (at least in Barcelona) is terrible. It took trips to 4 different stores to find an iPhone 4-compatible prepaid mini-SIM (if I'd had an iPhone 5, I would have been SOL and had to pay for roaming data on my US plan). None of those stores displayed iPhones prominently (although they were available, at least through Vodafone, even the 5 new, but you couldn't use a prepaid SIM in it).

I tend to think the issue is that Spain has a really fractured retail environment, with a lot of providers (Vodafone/Movistar/Orange/Yoigo, plus lots of third-party options) and a lot of kiosk-type stores. Vodafone has its own retail outlets, but most of the others seemed to be based in malls, and the malls in turn seemed to carry one 'basket' of stores depending on who owns the mall. During my search for a mini-SIM, for example, I was sent on a goose chase from store to store with directions that turned out to be pretty approximate (wrong address, but within about 300 meters of the correct one).

Given that retail environment, I think it's pretty natural that Android, with its myriad slightly customized, provider-branded phones, fares a lot better than iOS at the moment... People want something that can be supported by their local mall/kiosk.

Comment: RIP PLOS (Score 2) 136

by wanax (#46344025) Attached to: Major Scientific Journal Publisher Requires Public Access To Data

It goes way beyond genes and patient data. First, there's the issue of regulation. In most biology- and psychology-related fields, there's a raft of regulations from funding sources, institutional review boards (IRBs), the Dept. of Agriculture (which oversees animal facilities), and IACUCs that make it impossible to comply with this requirement, and will continue to do so for a long time. No study currently being conducted using animal facilities can meet these criteria, because many records related to animal facilities (including the all-important experimental protocol) must remain confidential by statute (with an attestation of compliance from the IRB and IACUC). Likewise, for any human research you'll have to get a protocol past the IRB for protecting subject anonymity, and given the likelihood of inadvertent identity disclosure, that will be extremely difficult to do.

Second, there's a deep flaw in how the policy is written and how it conceives of data. To wit, the policy defines: "Data are any and all of the digital materials that are collected and analyzed in the pursuit of scientific advances."

Now for starters, there's a loophole big enough to drive several trucks through: in many experimental contexts, material necessary for a complete understanding of the 'raw data' is not in digital form but in, say, lab notebooks. Which leads to the broader issue: what most researchers would actually be interested in seeing publicly disclosed is the 'data set', which is not 'raw data' but data processed into a useful, compact form suitable for statistical analysis.

However, in many experiments all of the material necessary to understand the 'raw data' (which I'll define here as the measured result of an assay, in a very general sense) is distributed across lab notebooks, digital data collection, calibration and compliance records in facilities archives, and several levels of processing, often using proprietary and very expensive software. Even if all of those things could be published (see above), the 'raw data' would be mostly worthless because of the vast amount of time and effort required in many cases to turn the 'raw data' into the 'data set'.

The third problem, of course, which has already been addressed in several places on this thread, is that there's no money in grants to fund the required repositories.

I think at some level this policy is a noble idea, but it's been implemented in a terrible way, and it was obviously written by people in fields that already have functioning, funded public databases. Either people in many fields are going to stop publishing in PLOS, or they'll drive the truck through the loopholes and it'll be just as toothless as Science's and Nature's sharing requirements.

If they really wanted to push effectively for greater transparency, what they should be pushing for at the moment is simultaneous publication of the 'data set', which would let fields that don't have standardized databases in place design the standards that would allow their creation.

Comment: Re:Use Class Rank (Score 1) 264

by wanax (#46215381) Attached to: Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

I should have been more specific, since indeed I'm fairly ignorant about the American college experience of many (most? I'll have to check) students. My experience in academia has been almost entirely at large research universities, with friends and family filling out my knowledge of liberal-arts colleges and some local colleges. But the entire grade-inflation debate has been focused on colleges with competitive admissions (only about 15% or so), so I'll maintain that my experience is relevant.

Comment: Re:Use Class Rank (Score 1) 264

by wanax (#46215347) Attached to: Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

What you link to is one of many examples of 'classic' tests that are 'difficult' because they are not so much tests of the 'intelligence' or 'scholastic aptitude' we currently fetishize as straight-out tests of cultural knowledge. That test would have been easy for any decently schooled person (read: sufficient family income) at the time, just as the GRE is easy today (and I doubt any student in the country in 1869 could have cracked the 85th percentile on today's SAT). Most of the history of standardized testing over the last century has been a slow move away from testing cultural knowledge toward something a bit more general, but that change has been limited.

With regard to your uncle, I think it's telling that he retired recently. As was mentioned lower in the thread, one symptom of teachers who are no longer engaged is that they start blaming their students for a lack of understanding. Both my parents are professors, and I work at a major research university, so I suspect I have a better pool to sample from than you do. Most of what I hear is 'what great students we have' and 'who could believe an undergraduate wrote this', etc. To make it concrete: my mom is a professor of classics who has been teaching since the late 60s. Over her career she's received about 12 papers from undergraduates of such high quality that she suggested they revise them for professional submission. Of those papers, 8 were submitted in the past 10 years.
