Actually, since neurons have functional homeostatic pruning and nonlinear membrane responses, recorded firing rates contain quite a large number of zero values.
With regard to question 2): no.
Question 1 is an ongoing field of research. Some of the work that I've found helpful in approaching the question:
-The Computational Beauty of Nature (Gary William Flake)
-Barriers and Bounds to Rationality (Peter Albin; there are free pdf copies available online).
-A New Kind of Science (Stephen Wolfram; also available free online).
The linked article was horribly written. I'll take a shot at explaining it (or rather, a really, really simplified version of it).
Two of the fundamental problems that neural circuits must solve are the noise-saturation dilemma and the stability-plasticity dilemma. The first is best explained in the context of vision. Our visual system can detect contrast (i.e. edges) over a massive range of brightness, spanning a space of about 10^10. Given that neurons have limited firing rates (typically between 0 and 200 Hz), there needs to be some normalization mechanism that allows useful contrast processing over massive variations in absolute input (more on this later). The stability-plasticity dilemma is that the brain needs to be flexible enough to learn from a single event (say, that touching a hot stove is a bad idea), but once learned, memories have to be stable enough to last the rest of a creature's lifespan.
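The normalization idea can be sketched with a toy divisive-normalization model (my own illustrative code with made-up constants, not any specific published circuit): each cell's output is its input divided by a constant plus the pooled input of the layer, which keeps firing bounded while preserving relative contrast across enormous changes in absolute brightness.

```python
import numpy as np

R_MAX = 200.0   # ceiling firing rate in Hz (from the range above)
SIGMA = 1.0     # semi-saturation constant (illustrative value)

def normalize(inputs):
    """Divisive normalization: each response is the cell's input
    divided by the pooled input of the whole layer.  No output can
    exceed R_MAX, no matter how bright the scene is."""
    inputs = np.asarray(inputs, dtype=float)
    return R_MAX * inputs / (SIGMA + inputs.sum())

dim_scene    = np.array([1.0, 2.0, 3.0, 4.0])   # a contrast pattern
bright_scene = dim_scene * 1e10                 # same pattern, 10^10 brighter

r_dim    = normalize(dim_scene)
r_bright = normalize(bright_scene)
```

The relative pattern of the outputs (the contrast information) is the same for both scenes, while absolute firing stays inside the 0-200 Hz budget.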
The stability-plasticity dilemma implies that neural circuits must operate in at least two (as I said, very simplified) distinct states, a "resting" or "maintenance" state, and a "learning" state, and that there is a phase-transition point in between them. Furthermore, these states need to have the following properties regarding stability:
1) the learning state must collapse into the maintenance state in the absence of input (otherwise you get epilepsy).
2) reasonable stimulation (input) during the resting state must be able to trigger a phase change into the learning state (or you become catatonic).
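The two properties above can be caricatured with a single leaky recurrent rate unit (a toy sketch of my own, not any published circuit model): with recurrent gain below one, the rest state is globally stable, so activity always collapses once input is removed (property 1), while any sustained input kicks the unit into a high-activity excursion (property 2).

```python
import numpy as np

def simulate(input_drive, g=0.9, steps=200):
    """Iterate x_{t+1} = tanh(g * x_t + I_t) for a single rate unit.
    With recurrent gain g < 1, x = 0 is the only fixed point in the
    absence of input, so the 'learning' excursion always decays back
    to the 'maintenance' state once input stops."""
    x = 0.0
    trace = []
    for t in range(steps):
        x = np.tanh(g * x + input_drive(t))
        trace.append(float(x))
    return trace

# Input pulse: on for t in [20, 60), off otherwise.
pulse = lambda t: 2.0 if 20 <= t < 60 else 0.0
trace = simulate(pulse)
```

During the pulse the unit sits near saturation; afterwards activity decays geometrically back to rest, so the sketch satisfies both stability requirements by construction.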
Many circuits/mechanisms have been proposed to explain how the brain solves these dilemmas. Most of them define a recurrent neural network using some combination of gated diffusion and oscillatory dynamics to fit the well-known oscillatory and wave-based dynamics that have been recorded in neural circuits. Some of these models employ intrinsic learning via a learning rule (e.g. self-organizing maps) while others are fit by the researcher. One key point about this class of models (as opposed to the TFA approach) is that they have a macro-circuit architecture specified by the modeler. Typically these models are at least somewhat sensitive to parametric perturbation.
TFA describes another approach, which comes out of research on cellular automata done by Ulam, von Neumann, Conway and Wolfram. This approach posits that parametric stability and macro-circuit organization are only loosely important so long as the system obeys a certain set of rules about local interaction (which could also be thought of as the micro-circuit), because it will self-organize to a point of 'critical stability'. In the two-state model described above, this approach predicts that neural circuits are always at a state of 'critical stability', where maintenance occurs through frequent small perturbations or avalanches, and any new input will trigger a large avalanche, causing learning. Bak has proposed this as a general model of neural circuit organization. One trademark of these types of models is that they show 'scale-free' or 'power-law' behavior, where the frequency of an event falls off as a power of its size, so small events vastly outnumber large ones. Some recent data has shown power-law dynamics in neural populations (a lot of other data doesn't).
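The canonical toy model here is the Bak-Tang-Wiesenfeld sandpile (a minimal sketch, not a neural simulation): grains dropped one at a time usually do nothing, but occasionally trigger cascades of topplings, and the small avalanches vastly outnumber the big ones.

```python
import numpy as np

rng = np.random.default_rng(42)
SIZE = 20
grid = np.zeros((SIZE, SIZE), dtype=int)  # grain count per cell

def drop_grain(grid, i, j):
    """Add one grain at (i, j) and relax the pile; return the
    avalanche size (total number of topplings).  A cell with 4 or
    more grains topples, sending one grain to each neighbor; grains
    pushed past the edge fall off, which lets the pile settle into
    the critical state."""
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return size
        for (r, c) in unstable:
            grid[r, c] -= 4
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < SIZE and 0 <= cc < SIZE:
                    grid[rr, cc] += 1

sizes = [drop_grain(grid, rng.integers(SIZE), rng.integers(SIZE))
         for _ in range(4000)]
```

After the pile reaches the critical state, a single identical perturbation (one grain) produces anything from no topplings at all to avalanches spanning much of the grid, with the heavy-tailed size distribution the TFA is describing.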
One big problem with the critical-stability hypothesis is that it doesn't deal well with the noise-saturation dilemma: it needs to produce the same general size of avalanche whether it's hit by one grain of sand or 10^10 grains of sand.
None of this is particularly new; neural avalanches (albeit in a different context) were postulated in the early 70s. Could some systems in the brain exploit self-organized criticality? Sure, but there is a lot of data out there that's inconsistent with it being the primary method of neural organization.
Having recently been in Spain (with my unlocked iPhone 4 in tow), I can tell you that the support for iPhones (at least in Barcelona) is terrible. It took trips to four different stores to find an iPhone 4 compatible prepaid mini-SIM (if I'd had the iPhone 5, I would have been SOL and had to pay for roaming data from my US plan). None of those stores displayed iPhones prominently (although they were available, at least through Vodafone, even the 5, new; but you couldn't use a prepaid SIM in it).
I tend to think the issue is that Spain has a really fractured retail environment, both with a lot of providers (Vodafone/Movistar/Orange/Yoigo and lots of third-party options) and with a lot of kiosk-type stores. Vodafone has its own retail outlets, but most of the others seemed to be based in malls, and the malls in turn seemed to carry one 'basket' of stores, depending on who owns the mall. During my search for a mini-SIM, for example, I was sent on a wild-goose chase from store to store with directions that turned out to be pretty approximate (wrong address, but within about 300 meters of the correct one).
Given that retail environment, I think it's pretty natural that Android, with its myriad of slightly customized, provider-branded phones, fares a lot better than iOS at the moment... People want something that can be supported by their local mall/kiosk.
What was the different solution? (I've also wrecked quite a few shirts in my time)
It goes way beyond just genes and patient data. First, there's the issue of regulation. In most biology/psychology-related fields, there's a raft of regulations from funding sources, internal review boards, the Dept. of Agriculture (which oversees animal facilities) and IACUCs, for example, that make it impossible to comply with this requirement, and will continue to do so for a long time. No study currently being conducted using animal facilities can meet these criteria, because many records related to animal facilities (including the all-important experimental protocol) must remain confidential by statute (with the attestation of compliance from the IRB and IACUC). Likewise, in the case of (any) human research, you'll have to get a protocol past the IRB for protecting subject anonymity, and given the likelihood of inadvertent identity disclosure, that will be extremely difficult to do.
Second, there's a deep flaw in how the policy is written and how it conceives of data. To wit, the policy defines: "Data are any and all of the digital materials that are collected and analyzed in the pursuit of scientific advances."
Now for starters, there's a loophole big enough to drive several trucks through: in many experimental contexts, material necessary for a complete understanding of the 'raw data' is not in digital form, but rather in, say, lab notebooks. Which leads to the broader issue: what most researchers would actually be interested in seeing publicly disclosed is the 'data set', which is not 'raw data' but data that's been processed into a useful, compact form suitable for statistical analysis.
However, in many experiments all of the material necessary to understand the 'raw data' (which I'll define here as the measured result of an assay, in a very general sense) is distributed between lab notebooks, digital data collection, calibration and compliance records in facilities archives, and several levels of processing, often using proprietary and very expensive software. Even if all of those things could be published (see above), the 'raw data' would be mostly worthless because of the vast amount of time and effort required in many cases to turn the 'raw data' into the 'data set'.
The third problem of course, which has been addressed in several places already on this thread is that there's no money in grants to fund the required repositories.
I think at some level this policy is a noble idea, but it's been implemented in a terrible way, and obviously written by people in fields that already have functioning, funded public databases. Either people from many fields are going to stop publishing in PLOS, or they'll drive the truck through the loopholes and it'll be just as toothless as Science's and Nature's sharing requirements.
If they really wanted to push effectively for greater transparency, what they should be pushing for at the moment is simultaneous publication of the 'data set', which would let fields that don't yet have standardized databases design the standards that would allow their creation.
I should have been more specific, since indeed I'm fairly ignorant about the American college experience of many (most? I'll have to check) students. My experience in academia has been almost entirely at large research universities, with friends and family filling out my knowledge of the liberal-arts colleges and some local colleges. But the entire grade-inflation debate has been focused on colleges with competitive admissions (only about 15% or so), so I'll maintain that my experience is relevant.
What you link to is one of many examples of 'classic' tests that are 'difficult' because they are not so much tests of the 'intelligence' or 'scholastic aptitude' we currently fetishize as straight-out tests of cultural knowledge. That test would have been easy for any decently schooled person (read: sufficient family income) at the time, just as the GRE is easy today (I doubt any student in the country in 1869 could have cracked the 85th percentile on the SAT). Most of the history of standardized testing in the last century has been a slow attempt to move away from testing cultural knowledge toward something a bit more general, but that change has been limited.
With regard to your uncle, I think it's telling that he retired recently. As was mentioned lower in the thread, one of the symptoms of teachers who are no longer engaged is that they start blaming their students for lack of understanding. Both my parents are professors, and I work at a major research university, so I suspect I have a better pool to sample than you. Most of what I hear is about 'what great students we have' and 'who could believe an undergraduate could have written this', etc. Or, to give a more concrete example: my mom is a professor of classics who's been teaching since the late 60s. She's received about 12 papers from undergraduates over the course of her career of such high quality that she suggested they revise them for professional submission. Of those papers, 8 have been submitted in the past 10 years.
There's a problem comparing sports pros to college students, which is that there are a lot of effects of over-training, sunk-cost psychology and sticky liquidity in terms of skill transfer between sports. I currently work in neuroscience, where we have to be very careful in interpreting animal research due to the same issue. College students who are sophomores or juniors face comparatively little cost in shifting to a field that's a better fit for them (and likewise there are many more cognate fields), so you wouldn't necessarily expect the same effects on the distribution.
Grading on a curve only works for large, introductory courses. The problem is twofold: 1) smaller classes cannot be assumed to have a normal distribution, and 2) once you get past intro classes in any subject, there is a strong selection bias, so that people in upper-level classes all tend to be high-level performers in that subject (which also means you can't assume a normal distribution).
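The selection-bias point is easy to demonstrate with a quick simulation (illustrative numbers of my choosing): if an upper-level class draws only from the top of a normally distributed intro population, the resulting score distribution is truncated and skewed, and a standard normality test rejects it outright.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Full intro-course population: scores roughly normal.
population = rng.normal(loc=75, scale=10, size=100_000)

# Upper-level class: suppose only students above the 70th
# percentile of the intro population continue in the subject,
# so the class is a truncated sample of the original curve.
cutoff = np.percentile(population, 70)
upper_level = population[population > cutoff]
class_sample = rng.choice(upper_level, size=2000, replace=False)

# D'Agostino-Pearson test: the selected group is visibly skewed,
# so the normality assumption behind curve grading fails.
stat, p = stats.normaltest(class_sample)
```

Any curve that assigns fixed fractions of As, Bs and Cs to this group is grading against a distribution that simply isn't there.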
The big problem with grades is that they conflate course difficulty and student performance. If you want grades to be a proxy for performance, you have to weight them somehow by class difficulty. The problem is that nobody can agree on how to rank class difficulty, due to academic politics: nobody wants to be the department that gets the short end of the stick in the difficulty rankings. In my personal experience, as one of the few people who have taken multiple graduate-level classes in three disciplines (history, mathematics and neuroscience), at that level no field is particularly easier or harder than another; it's just that the type of work one does is very different.
The other issue that I rarely see addressed in all of the 'grade inflation' concern (and which class rank also ignores) is that maybe today's college students are actually working a lot harder than those in 1960 (perhaps due to debt, the weak economy, lack of security from getting a degree etc), and have actually earned a big chunk of the upward grade adjustment. That's certainly been my experience when compared to my own cohort, and that of quite a few professors that I talk to as well.
To amplify the above comment, as a neuroscientist with a computational background: don't try to go it alone.
There are a few reasons for this:
1) Research in the field is done by groups because the main problem in generating an 'interesting simulation problem' is carefully defining a scope and a target. That's really hard to do, and generally involves careful discussions between people with different knowledge bases and priorities. If you can't give a clear and succinct answer to the question "How, if successful, will this research advance the field?" to somebody like Larry Abbott, you aren't working on a 'real world problem.'
2) The state of the field is generally about 2 years ahead of the published literature. Unless you have collaborators who routinely attend talks and meetings, and know what people in your area(s) of interest are doing, it's very easy to wind up on the wrong track.
3) Modeling is only useful if it leads to experimental predictions that can be tested, and so needs to be part of an ongoing collaborative interaction between people collecting data, people analyzing it, and people modeling it. Without the entire loop in place, it's difficult to make useful contributions. Also related: outside of things like gene arrays, and a few other standardized approaches, most data in the field is collected by bespoke setups, so even understanding how to parse a data set requires interaction with the people who collected it.
So to answer the original questions:
(1) There are so many that it's impossible to specify. Very little computational neuroscience these days requires more than a workstation. You need to get into a collaboration to reduce the scope of the question for it to be answerable.
(2) It's probably easier than you think, but again it requires collaboration with somebody who's in industry or academia (the latter is probably easier). There are several people I know who informally collaborate doing neural modeling or data analysis with established labs. There are plenty of researchers who welcome informal collaboration, as long as it's competent.
(3) It really depends on who you wind up collaborating with, and the type of question. NEURON and GENESIS are compartmental modeling simulators, which you'll only use if you wind up working with people on the molecular end of the spectrum (i.e. figuring out intracellular processes). Most systems-level work is done using Matlab (some Mathematica and Python as well).
(4) Get involved with non-DIYers. Find a lab to collaborate with! Go to SFN next year, and/or ICCNS/ICANNS/CoSyne/etc. (see for example: http://www.frontiersin.org/events/Computational_Neuroscience). Go to posters and talk with people. If you see something interesting, ask if they'd be interested in a collaboration, or ask them your question (1). It'll probably take multiple attempts to find the right group, but there are a ton of groups out there.
Finally, I'd just like to emphasize that working on 'real world' problems in neuroscience (computational or not) is a time consuming endeavor. If you don't think you'll be able to devote several hundred hours a year at the least, it'll be hard for you to find tractable problems.
I have little idea what works for supercomputers and highly parallelized data analysis (I've never used one). I work on data sets that tend to have memory bottlenecks, which I think describes a lot of exploratory data analysis... and in that framework, I've found one major advantage of Mathematica: I can leave the data intact while creating a lot of code that accesses it in multiple forms, thanks to Mathematica's ability to process the symbolic instructions before querying the dataset.
In terms of the price of the shiny: I bought my initial Mathematica license for $500, and since then I've paid on average about $120/year for two licenses (work and home, 8- and 6-core respectively). It's hardly an expense.
The wide distribution of silk merely implies that there was some trade -- it doesn't rule out markets so thin that a single caravan's choice of whether or not to travel controlled the availability of new silk for a year or more at a time. Try reading Hakluyt's voyages some time -- organizing even a single successful long-distance trading caravan was not an easy operation.
I think one thing people often forget about the great steam age of transportation is that the flows of people were bilateral and mostly symmetric. While some fraction of the passengers who left Europe for, say, the US stayed, most of them eventually came back to where they left from -- the steam ships leaving from New York were crowded too. Comparing that to the Crusades is apples to oranges: sure, quite a few people left France and the HRE for the Middle East, but nearly all of them stayed once they arrived. Only a very few top-tier nobility and traders ever intended to return to their homes.
The difference between 'large' and 'small' world networks here is that for a small world, we can make the statistical assumption that there will be interpersonal contact between people all over the world at a fairly small tau (say, 4 days). What this research shows is that assumption isn't met by medieval European society at the time of the Black death. Quite likely, because long-distance travel and trade were sufficiently small scale that a few individuals' decisions (say, on hearing about the plague) could radically change the structural dynamics of the network for substantial periods of time.
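The structural point can be sketched with networkx (parameters are illustrative, not fitted to any medieval data): a ring lattice stands in for purely local contact, and rewiring a small fraction of its edges to random long-range contacts stands in for long-distance travel. The handful of long-range links is what collapses the average path length; remove or disrupt them and the network reverts to slow, local spread.

```python
import networkx as nx

n, k = 500, 4
# Ring lattice: each node linked only to its k nearest neighbours --
# a stand-in for purely local, short-range contact.
lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)

# Same graph with 10% of edges rewired to random long-range
# contacts: the classic small-world construction.
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)

L_lattice = nx.average_shortest_path_length(lattice)
L_small = nx.average_shortest_path_length(small_world)
```

In the lattice, the average separation grows with the size of the network; rewiring a few edges shrinks it by an order of magnitude, which is why the decisions of a small number of long-distance travelers can dominate the spread dynamics.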
Sage is okay for small-to-midsize projects, as is R (both benefit from being free). On the whole, though, I'd really recommend Mathematica, which is purpose-built for that type of project, makes it trivial to parallelize code, is a functional language (once you learn it, I doubt you'll want to go back) and scales well up to fairly large data sets (tens of gigs).
Indeed, if it's criminal, it'll be wire fraud... and that's the big IF here, since I don't know whether the Fed's embargoes are criminal to breach. But if a reporter releases embargoed information before the agreed time, and you as a trader should know that the information is embargoed (you did get a license, right?), then by trading ahead of the release you and the reporter have likely engaged in a conspiracy to commit wire fraud, which is actually much easier to prove than insider trading.