Education

The Rise and Rise of the Cognitive Elite 671

hessian writes "As technology advances, the rewards to cleverness increase. Computers have hugely increased the availability of information, raising the demand for those sharp enough to make sense of it. In 1991 the average wage for a male American worker with a bachelor's degree was 2.5 times that of a high-school drop-out; now the ratio is 3. Cognitive skills are at a premium, and they are unevenly distributed."

Submission + - Hotmail users get a clean start to 2011 (pcmag.com)

nycguy writes: Hotmail users may resolve to find a better email provider after finding their accounts have been reset, starting yesterday and continuing today. Microsoft's support forums are choked with complaints from users whose accounts have apparently been erroneously reset due to "inactivity", causing all saved emails and folders to be deleted.
Censorship

Pentagon Makes Good On Plan To Destroy Critical Book 306

mykos writes "Remember when the Pentagon said they were arranging a taxpayer-funded, government-sponsored book burning a couple of weeks ago? Well, they made good on that threat, purchasing 9,500 copies of the book to be destroyed. The publisher, St. Martin's Press, has redacted anything the Pentagon told them to redact in the upcoming second run of the book. The Department of Defense has not yet paid for the burned books, but says they are 'in the process.' Pentagon spokeswoman Lt. Col. April Cunningham gave this statement: 'DoD decided to purchase copies of the first printing because they contained information which could cause damage to national security.' Whew, looks like we're safe now."

Comment tags and search (find/grep) (Score 1) 235

I would just put each set of experimental data in a separate subdirectory. Within each subdirectory I'd put a file with a specific name (e.g., "description.txt") in which you briefly write up exactly what the experimental data is, how it was generated (e.g., if it was generated by a program, give the arguments and/or pointers to the input data), and some keywords so it can be indexed and searched. Then use your standard OS search tools to find the description file(s) you're looking for, which lets you locate your data based on its description rather than some brittle directory hierarchy.
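For concreteness, here is a minimal sketch of that search step as a small Python stand-in for plain find/grep. The "experiments" root directory and the "description.txt" file name are just assumed for illustration, not something the scheme requires:

# Sketch only: list run directories whose description.txt mentions every keyword.
import sys
from pathlib import Path

DATA_ROOT = Path("experiments")  # hypothetical root holding all run directories

def find_runs(*keywords):
    # Scan each description file and keep directories matching all keywords.
    for desc in DATA_ROOT.glob("*/description.txt"):
        text = desc.read_text(errors="ignore").lower()
        if all(k.lower() in text for k in keywords):
            yield desc.parent

if __name__ == "__main__":
    for run_dir in find_runs(*sys.argv[1:]):
        print(run_dir)

Invoked with, say, two keywords on the command line, it prints every run directory whose description mentions both; a one-line grep over */description.txt does the same job.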

I have a pretty standard setup for generating experimental data in my work. Whenever I run an experiment (usually a simulation), a wrapper script generates a random (meaningless) subdirectory name, copies my simulation binary and configuration into that directory (so I can reproduce the results later if either my simulator code or its configuration changes), prompts me to enter a description of what I'm simulating, and asks me for some keyword tags. The only way I can find the data afterward is to search those description files, because the data otherwise sits in a randomly named directory.
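A rough sketch of what such a wrapper might look like in Python follows; the binary name "simulator", the config file "sim.conf", and the "experiments" root are assumptions for illustration and not part of the original setup:

# Sketch of the wrapper: random run directory, input snapshot, description prompt.
import shutil
import uuid
from pathlib import Path

DATA_ROOT = Path("experiments")   # hypothetical root for all runs
SIM_BINARY = Path("simulator")    # hypothetical simulator executable
CONFIG_FILE = Path("sim.conf")    # hypothetical configuration file

def new_run_dir():
    # Random, meaningless directory name; only description.txt identifies it.
    run_dir = DATA_ROOT / uuid.uuid4().hex
    run_dir.mkdir(parents=True)
    return run_dir

def snapshot_inputs(run_dir):
    # Copy the binary and config so the run can be reproduced later even if
    # the simulator code or its configuration changes.
    shutil.copy2(SIM_BINARY, run_dir)
    shutil.copy2(CONFIG_FILE, run_dir)

def write_description(run_dir):
    # Prompt for a free-text description and keyword tags; this file becomes
    # the only searchable record of what the run contains.
    description = input("What does this run simulate? ")
    tags = input("Keyword tags (comma-separated): ")
    (run_dir / "description.txt").write_text(description + "\ntags: " + tags + "\n")

if __name__ == "__main__":
    run_dir = new_run_dir()
    snapshot_inputs(run_dir)
    write_description(run_dir)
    print("Run directory ready:", run_dir)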

Of course, this scheme depends on you doing a decent job of describing your data and providing keywords, but I don't think you can get around that with any technique. At some point you have to inject some human labeling/categorization. Directories and symlinks are just a pretty restrictive way of organizing things.
