What was the different solution? (I've also wrecked quite a few shirts in my time)
It goes way beyond just genes and patient data. First, there's the issue of regulation. In most biology/psychology-related fields, there's a raft of regulations from funding sources, internal review boards, the Dept. of Agriculture (which oversees animal facilities), and IACUCs, for example, that makes it impossible to comply with this requirement, and will continue to do so for a long time. No study currently being conducted using animal facilities can meet these criteria, because many records related to animal facilities (including the all-important experimental protocol) must remain confidential by statute (with the attestation of compliance from the IRB and IACUC). Likewise, in the case of (any) human research, you'll have to get a protocol past the IRB for protecting subject anonymity, and given the likelihood of inadvertent identity disclosure, that will be extremely difficult to do.
Second, there's a deep flaw in how the policy is written and how it conceives of data. To wit, the policy defines: "Data are any and all of the digital materials that are collected and analyzed in the pursuit of scientific advances."
Now for starters, there's a loophole big enough to drive several trucks through: in many experimental contexts, material necessary for a complete understanding of the 'raw data' is not in digital form, but rather in, say, lab notebooks. Which leads to the broader issue: what most researchers would actually be interested in seeing publicly disclosed is the 'data set', which is not 'raw data', but data that's been processed into a useful, compact form suitable for statistical analysis.
However, in many experiments all of the material necessary to understand the 'raw data' (which I'll define here as the measured result of an assay, in a very general sense) is distributed between lab notebooks, digital data collection, calibration and compliance records in facilities archives, and several levels of processing, often using proprietary and very expensive software. Even if all of those things could be published (see above), the 'raw data' would be mostly worthless because of the vast amount of time and effort required in many cases to turn the 'raw data' into the 'data set'.
The third problem, of course, which has been addressed in several places already on this thread, is that there's no money in grants to fund the required repositories.
I think at some level this policy is a noble idea, but it's been implemented in a terrible way, and obviously written by people in fields that already have functioning, funded public databases. Either people from many fields are going to stop publishing in PLOS, or they'll drive the truck through the loopholes and it'll be just as toothless as Science's and Nature's sharing requirements.
If they really wanted to push effectively for greater transparency, what they should be pushing at the moment is simultaneous publication of the 'data set', which would let fields that don't have standardized databases in place design standards that would allow their creation.
I should have been more specific, since indeed I'm fairly ignorant about the American college experience for many (most? I'll have to check) students. My experience in academia has been nearly entirely in large research universities, with friends and family filling out my knowledge of the liberal-arts colleges, and some local colleges. But the entire grade inflation debate has been focused on colleges with competitive admissions (only about 15% or so), so I'll maintain that my experience is relevant.
What you link to is one of many examples of 'classic' tests that are 'difficult' because they are not so much tests of the 'intelligence' or even 'scholastic aptitude' that we currently fetishize, but straight-out tests of cultural knowledge. That test would have been easy for any decently schooled person (read: sufficient family income) at the time, just like the GRE is easy today (I doubt any student in the country in 1869 could crack the 85th percentile on the SAT). Most of the history of standardized testing in the last century has been a slow attempt to move away from testing cultural knowledge toward something a bit more general, but that change has been limited.
With regard to your uncle, I think it's telling that he retired recently. As was mentioned lower in the thread, one of the symptoms of teachers who are no longer engaged is that they start blaming their students for lack of understanding. Both my parents are professors, and I work at a major research university, so I suspect that I have a better pool to sample than you. Most of what I hear is 'what great students we have' and 'who could believe that an undergraduate could have written this', etc. Or, to give a more concrete example: my mom is a professor of classics who's been teaching since the late '60s. She's received about 12 papers from undergraduates over the course of her career of such high quality that she suggested they revise them for professional submission. Of those papers, 8 have come in the past 10 years.
There's a problem comparing sports pros to college students, which is that there are a lot of effects of over-training, sunk-cost psychology, and sticky liquidity in terms of skill transfer between sports. I currently work in neuroscience, where we have to be very careful in interpreting animal research due to the same issue. College students who are sophomores or juniors face comparatively little cost in shifting into a field that's a better fit for them (and likewise there are many more cognate fields), so you wouldn't necessarily expect the same effects on the distribution.
Grading on a curve only works for large, introductory courses. The problem is twofold: 1) smaller classes cannot be assumed to have a normal distribution, and 2) once you get past intro classes in any subject, there is strong selection bias, so that people in upper-level classes all tend to be high-level performers in that subject (which also means you can't assume a normal distribution).
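To make the selection-bias point concrete, here's a quick sketch in Python (all the numbers are made up for illustration): draw an 'intro course' population of scores from a normal distribution, keep only the students above a cutoff as the 'upper-level' class, and the surviving scores are no longer anywhere near normal -- the truncation alone produces a strongly skewed distribution.

```python
import random
import statistics

random.seed(0)

# Hypothetical intro-course population: roughly normal scores.
population = [random.gauss(70, 10) for _ in range(100_000)]

# Upper-level class: only students above a cutoff continue on.
upper = [s for s in population if s > 80]

def skewness(xs):
    """Sample skewness; approximately 0 for a normal distribution."""
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / sd) ** 3 for x in xs)

print(skewness(population))  # near zero: symmetric
print(skewness(upper))       # clearly positive: truncation skews it
```

Grading the `upper` group on a curve centered at its own mean would presume a symmetry that the selection process has already destroyed.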
The big problem with grades is that they conflate course difficulty and student performance. If you want grades to be a proxy for performance, you have to weight them somehow by class difficulty. The problem is that nobody can agree on how to rank class difficulty, due to academic politics: nobody wants to be the department that gets the short end of the stick in the difficulty rankings. In my personal experience, as one of the few people who have taken multiple graduate-level classes in three disciplines (history, mathematics, and neuroscience), at that level no field is particularly easier or harder than another; it's just that the type of work one does is very different.
The other issue that I rarely see addressed in all of the 'grade inflation' concern (and which class rank also ignores) is that maybe today's college students are actually working a lot harder than those in 1960 (perhaps due to debt, the weak economy, the fact that a degree no longer guarantees security, etc.), and have actually earned a big chunk of the upward grade adjustment. That's certainly been my experience when compared to my own cohort, and that of quite a few professors I talk to as well.
To amplify the above comment, as a neuroscientist with a computational background: don't try to go it alone.
There are a few reasons for this:
1) Research in the field is done by groups because the main problem in generating an 'interesting simulation problem' is carefully defining a scope and a target. That's really hard to do, and generally involves careful discussions between people with different knowledge bases and priorities. If you can't give a clear and succinct answer to the question "How, if successful, will this research advance the field?" to somebody like Larry Abbott, you aren't working on a 'real world problem.'
2) The state of the field is generally about 2 years ahead of the published literature. Unless you have collaborators who routinely attend talks and meetings, and know what people in your area(s) of interest are doing, it's very easy to wind up on the wrong track.
3) Modeling is only useful if it leads to experimental predictions that can be tested, and so needs to be part of an ongoing collaborative interaction between people collecting data, people analyzing it, and people modeling it. Without the entire loop in place, it's difficult to make useful contributions. Also related: outside of things like gene arrays, and a few other standardized approaches, most data in the field is collected by bespoke setups, so even understanding how to parse a data set requires interaction with the people who collected it.
So to answer the original questions:
(1) There are so many that it's impossible to specify. Very little computational neuroscience these days requires more than a workstation. You need to get into a collaboration to reduce the scope of the question for it to be answerable.
(2) It's probably easier than you think, but again it requires collaboration with somebody who's in industry or academia (the latter is probably easier). There are several people I know who informally collaborate doing neural modeling or data analysis with established labs. There are plenty of researchers who welcome informal collaboration, as long as it's competent.
(3) It really depends on who you wind up collaborating with, and the type of question. Neuron and Genesis are compartmental modeling simulators, which you'll only use if you wind up working with people on the molecular end of the spectrum (i.e., figuring out intracellular processes). Most systems-level work is done using Matlab (some Mathematica and Python as well).
(4) Get involved with non-DIYers. Find a lab to collaborate with! Go to SFN next year, and/or ICCNS/ICANNS/CoSyne/etc. (see for example: http://www.frontiersin.org/events/Computational_Neuroscience). Go to posters and talk with people. If you see something interesting, ask if they'd be interested in collaboration, or ask them your question (1). It'll probably take multiple attempts to find the right group, but there are a ton of groups out there.
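Just to give a flavor of what minimal systems-level modeling code looks like (a toy sketch with illustrative constants, not anyone's actual research code), here's a leaky integrate-and-fire neuron in Python, the kind of thing that's equally at home in Matlab:

```python
# Toy leaky integrate-and-fire neuron, forward-Euler integration.
dt = 0.1          # ms, integration step
tau = 10.0        # ms, membrane time constant
v_rest = -70.0    # mV, resting potential
v_thresh = -54.0  # mV, spike threshold
v_reset = -80.0   # mV, post-spike reset
R = 10.0          # MOhm, membrane resistance
I = 2.0           # nA, constant input current

v = v_rest
spikes = []                            # spike times in ms
for step in range(int(500 / dt)):      # simulate 500 ms
    dv = (-(v - v_rest) + R * I) / tau
    v += dv * dt
    if v >= v_thresh:                  # threshold crossing: record and reset
        spikes.append(step * dt)
        v = v_reset
```

With these constants the steady-state voltage (v_rest + R*I = -50 mV) sits above threshold, so the cell fires regularly; the interesting science, of course, is in what you couple models like this to, which is exactly why the collaboration matters.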
Finally, I'd just like to emphasize that working on 'real world' problems in neuroscience (computational or not) is a time-consuming endeavor. If you don't think you'll be able to devote at least several hundred hours a year, it'll be hard for you to find tractable problems.
I have little idea what works for supercomputers and highly parallelized data analysis (I've never used one). I work on data sets that tend to have memory bottlenecks, which I think describes a lot of exploratory data analysis activity, and in that setting, one major advantage I've found in Mathematica is that I can leave the data intact while creating a lot of code that accesses it in multiple forms, thanks to Mathematica's ability to process the symbolic instructions before querying the dataset.
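I don't have a Mathematica snippet handy, but the idea of building recipes against a dataset without copying it has a loose analogue in Python's lazy iterators: each "view" below is just a description of a computation, and nothing is duplicated or evaluated until the view is actually queried.

```python
import itertools

# Stand-in for a large in-memory dataset we want to leave intact.
data = range(1_000_000)

# Each "view" is a lazy recipe over the same data -- nothing is
# copied or computed at definition time.
squares = map(lambda x: x * x, data)
evens = filter(lambda x: x % 2 == 0, data)

# Querying a view materializes only what is actually asked for.
first_squares = list(itertools.islice(squares, 5))
first_evens = list(itertools.islice(evens, 5))
```

The analogy is rough (Mathematica's symbolic rewriting is far more general), but the memory story is the same: one copy of the data, many cheap derived forms.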
In terms of the price of the shiny: I bought my initial Mathematica license for $500, and I've paid on average about $120/year since for two licenses (work and home), 8- and 6-core respectively. It's hardly an expense.
The wide distribution of silk merely implies that there was some trade -- it doesn't rule out markets so thin that a single caravan's choice of whether or not to travel controlled the availability of new silk for a year or more at a time. Try reading Hakluyt's voyages some time -- organizing even a single successful long-distance trading caravan was not an easy operation.
I think one thing that people often forget about the great steam age of transportation is that the flows of people were bilateral, and mostly symmetric. While some residual of the passengers who left Europe for, say, the US stayed, mostly they eventually came back to where they left from -- those steamships leaving from New York were crowded. Comparing that to the Crusades is apples to oranges: sure, quite a few people left France and the HRE for the Middle East, but nearly all of them stayed once they arrived. Only a very few top-tier nobility and traders ever intended to return to their homes.
The difference between 'large' and 'small' world networks here is that for a small world, we can make the statistical assumption that there will be interpersonal contact between people all over the world within a fairly small tau (say, 4 days). What this research shows is that that assumption isn't met by medieval European society at the time of the Black Death -- quite likely because long-distance travel and trade were at a sufficiently small scale that a few individuals' decisions (say, on hearing about the plague) could radically change the structural dynamics of the network for substantial periods of time.
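The sensitivity to a handful of long-range links is easy to see in a toy model (pure illustration, not the paper's method): put 'villages' on a ring with only local contact and average path lengths are long; add just a few long-range 'trade route' edges and they collapse. Which means that removing those same few edges -- a handful of caravans deciding to stay home -- restores the long paths.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all ordered pairs (BFS per node)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

random.seed(1)
n = 200
adj = ring_lattice(n, 2)
local_only = avg_path_length(adj)      # long paths: purely local contact

# Add a handful of long-range "trade route" shortcuts.
for _ in range(8):
    a, b = random.randrange(n), random.randrange(n)
    while b == a:
        b = random.randrange(n)
    adj[a].add(b)
    adj[b].add(a)
with_shortcuts = avg_path_length(adj)  # a few shortcuts collapse distances
```

Eight edges out of four hundred are enough to cut the average path length dramatically, which is exactly why the decisions of a few long-distance travelers can dominate the network's behavior.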
Sage is okay for small-to-midsize projects, as is R (both benefit from being free). On the whole, though, I'd really recommend Mathematica, which is purpose-built for that type of project, makes it trivial to parallelize code, is a functional language (once you learn it, I doubt you'll want to go back), and scales well up to fairly large data sets (10s of gigs).
Indeed, if it's criminal, it'll be wire fraud... and that's the big IF here, since I don't know whether breaching the Fed's embargoes is criminal... But if a reporter releases embargoed information before the agreed time, and you as a trader should know that the information is embargoed (you did get a license, right?), then by trading ahead of release you and the reporter have likely engaged in a conspiracy to commit wire fraud, which is actually much easier to prove than insider trading.
The boats are incredible, but it's not sailing in any accessible sense. I love sailing Sunfishes on Lake Morey, or bigger boats on Lake Champlain (and I know enough about my skill level to stay off wider waters like the Sound). But what they're doing now is so totally foreign to everybody who's ever sailed a boat... I've watched a few of the 'challenger races' and I could scarcely tell what direction the wind was coming from, due to the airfoils (they have to both tack coming upwind and jibe going downwind), except by the speed of the boat (~25kn upwind, ~40kn downwind). This was, as far as I can tell from the PR material, meant to make the race more exciting, but since it dropped the number of teams down to four, there was never any mystery that Team New Zealand would challenge Oracle, since so few people could afford to build the boats and spend time racing them (even Oracle has cosponsors)... I absolutely agree, though, that the announcers and TV coverage have been phenomenal.
I've always felt like that quotation had another interpretation, one that's much more favorable to the MPs:
If you're an MP, you've probably had to deal with a lot of people asking for money to fund what is essentially snake oil. If you don't understand the underlying 'cutting edge' technology (which is both plausible and acceptable), one simple test is to ask a question where you KNOW that any answer other than "No" means the person is bullshitting, and you can safely ignore them... and as reported, the question is phrased in such a way that it would sorely tempt any huckster to oversell his device. I think Babbage's lack of comprehension was due to his inability to grasp that the MP was questioning HIM, rather than the device.
I used to agree with you, and was religious about getting dual 24" 1920x1200 monitors for my setups (usually Acer). However, the last time I upgraded my home machine, I finally decided to bite the bullet and shell out the $1k for a 2560x1600 30" (in my case, a DoubleSight DS-309W), and I could not be happier. The difference in vertical screen space is surprisingly noticeable, and it just about fills my useful field of view at a 22-24" viewing distance, so I don't find myself having to turn my head very much. I have a 27" 2560x1440 on the other wing of my L-desk (hooked up to my laptop while at home), and frankly I've been looking for an excuse to replace it with a 30" these last few months.
One other thing to keep in mind about large displays is that they need to be mounted at the correct height to be comfortable: when you're sitting in a relaxed posture looking straight ahead, the center of the display should be at eye level. For most people, that's about 4-7" higher than the included stand puts it on a normal-height desk. Either get a wall mount or a better stand, or make sure you have a few hefty books to put it on (mine is currently mounted on an old Principles of Neuroscience and A New Kind of Science, which I find perfectly sturdy).
Remember the "and" part. Yes, we have abundant energy, but it's not cheap. My ability to get computations per dollar has increased by many orders of magnitude in the last 30 years (or 60, but I'm not that old), to the point that my smartphone would have been the fastest computer in the world when I was born. Energy, on the other hand, is, to within an order of magnitude, the same cost: the real price of coal is about the same as in 1800 (see: http://econbus.mines.edu/working-papers/wp201210.pdf -- and that's externalizing the costs of climate change). That may have counted as cheap AND abundant then, but it certainly doesn't now.