Comment academic article... (Score 1) 686

The article in question is Ambient belonging: How stereotypical cues impact gender participation in computer science. Cheryan et al., Journal of Personality and Social Psychology, Vol 97(6), Dec 2009, 1045-1060. My institution apparently doesn't subscribe to this APA journal. Here is the lead author's website. She posts reprints of many of her papers on her lab's website, but this particular paper is listed as in press. I agree with Laird that it would be nice to read what the article actually said. But I also think that it was weak to post a blog response criticizing a popular news medium's reporting on a scientific paper without first reading the paper. The blog post consists of suppositions about how the popular report may have differed from the facts in the academic paper. It then warns that the popular media is just trying to attract eyeballs to advertising rather than establish "truth". Of course, the rich irony here is that the blog post is based on no primary source (e.g. an interview or the academic article in question), makes a controversial opposing claim based on little to no information or evidence, and does all this on an advertising-supported site!

Comment Re:It Hurts (Score 1) 320

Yes, that's the image I'm talking about. I never suggested that it was unreasonable for an amateur to make a hypothesis. Certainly Leonardo had many insights that proved right over time.

But the history of reproduction is well studied and is written up nicely in popular style in this book. (You can even browse the book. The first few pages show and discuss a relevant Leonardo sketch.) Leonardo is credited with recognizing (as an adult) the connection between sex and reproduction and noting that the features of the child derive from both parents. That in itself was a significant leap for science at the time. Identifying the sperm and egg as the carriers in sexual reproduction would have been groundbreaking. Interestingly, Leonardo's sketch of copulation (found in the book above) shows no indication of sperm or egg.

I don't believe the specifics of plant reproduction were known in the mid-15th century either. Nehemiah Grew is generally credited with identifying plant sexual organs in the 1680s. Before that, horticulture proceeded through the selection of seed from plants with desired properties rather than through selective crossbreeding. So plant reproduction seems like an unlikely place for extrapolating the sexual carriers.

Examining the VM drawing, I'm guessing the "united sperm and ova" is the circle near the middle of the image with four spikes coming out of it and four smaller circles adjoining it. The circle to the left does indeed have a very sperm-like tail. But why would the artist hypothesize this specific shape for the sexual carrier? A flagellum isn't even a particularly useful means of propulsion at length scales that can be seen with the naked eye (where the Reynolds number is higher, making fins a better choice).
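To give a rough sense of the scales involved, here's a back-of-the-envelope sketch in Python; the lengths and speeds are my own ballpark assumptions, not figures from any source:

```python
# Rough Reynolds-number comparison for swimmers in water.
# Below Re ~ 1, viscous drag dominates and flagella work well;
# well above 1, inertia dominates and fins become effective.
KINEMATIC_VISCOSITY = 1e-6  # m^2/s, water at room temperature

def reynolds(speed_m_per_s, length_m):
    """Re = v * L / nu for a swimmer of size L moving at speed v."""
    return speed_m_per_s * length_m / KINEMATIC_VISCOSITY

# Assumed, order-of-magnitude inputs:
print(f"sperm (~50 um long, ~50 um/s):      Re = {reynolds(50e-6, 50e-6):.1e}")
print(f"naked-eye swimmer (~1 cm, ~1 cm/s): Re = {reynolds(1e-2, 1e-2):.1e}")
```

The sperm comes out around Re ~ 1e-3 and the visible swimmer around Re ~ 1e2, so a naked-eye observer would have no everyday reason to imagine a tail-whipping propeller.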

More reasonably, I suspect that modern knowledge of the appearance of sperm was combined with Leonardo's acknowledged insight into reproduction, resulting in a misinterpreted geometric doodle.

Comment Re:It Hurts (Score 3, Insightful) 320

Personally, I like

This picture also depicts the union of a sperm with an ova, indicating an extraordinary insight into human reproduction.

and then

I postulate that Leonardo da Vinci wrote the Voynich Manuscript circa 1460 when he was about 8 years old.

Meanwhile,

An early microscope was made in 1590 in Middelburg, The Netherlands.

How exactly did a youthful da Vinci figure out what an ovum and sperm look like? If Leonardo da Vinci (as a child) could sketch sperm and ova over 100 years before a crude microscope was invented and almost 200 years before Hooke and Leeuwenhoek, then that alone would be an astonishingly significant discovery. Unfortunately, it seems unlikely that Leonardo would build a microscope, discover cell biology, and not bother to write something up about it as an adult. He was, after all, interested in pretty much everything. The more reasonable conclusion is that Edith Sherwood is willing to interpret images very "liberally" (meaning here, without much evidence), without making even simple checks for logical consistency. This is a single example, but the carelessness calls the rest into question (as you have already indicated).

Comment Re:its fair turn around (Score 1) 1172

It is interesting to note the time period examined: September 8 to October 16, 2008. This starts one week after the start of the Republican National Convention (Sept 1-4). Palin was introduced August 29. Lehman Bros. failed on Sept 15. After refusing interviews for several weeks, Palin interviewed with Couric on Sept 24. McCain skipped Letterman on Sept 24 to return to D.C., then appeared with Couric in New York.

Regardless of political persuasion, I don't think it's hard to argue that the Republicans lost the national election in September. They came off a strong national convention that highly motivated their base, but promptly squandered that momentum through a series of poor interviews and misguided decisions. Over the same time period the Obama camp was following a low-key approach, so much so that the left was wringing its hands that Obama would lose the election by not being aggressive enough.

Judging media bias by reviewing articles from this time period is bound to be misleading. I'd suggest that a better window would have been earlier (e.g. the month before the Democratic National Convention, when both presidential candidates were known but the running mates were unknown). Moreover, using such a short time slice is bound to be misleading, but it would be hard to track tone over a much longer period because the number of candidates changes rapidly from January through the conventions.

Comment Re:Seems fair to me. (Score 1) 317

The relevant question is: which public pays and which public benefits? Should work funded by U.S. research agencies (e.g. NSF, NIH) benefit predominantly U.S. interests or be shared equally with the world? Before the Bayh-Dole Act of 1980, publicly funded work could not be copyrighted or patented. The Bayh-Dole Act allowed universities, small businesses, and non-profits to retain IP generated during publicly funded work. The justification: if publicly funded work were public domain, then the benefits of U.S.-funded research would end up in the hands of foreign competitors. (Mostly Japan was blamed as a source of cheap knockoffs rather than innovation. That reputation has changed. Now China has the reputation for being the source of cheap knockoffs. Makes you think about where we'll be in 30 years.) Allowing ownership of the IP ensures that U.S. interests can benefit from the funded research, yielding net growth to the U.S. economy, which in turn yields more tax dollars. The initial research expenditures are said to be justified by the economic growth and increased tax yield. The patent system has gotten out of hand in general, and in particular it is discouraging to see academics step too far away from intellectual openness in pursuit of IP, but I believe the justification for holding IP on publicly funded work is sound.

Textbooks and curricula are a different matter from research, and the case for open-sourcing deserves further consideration. I don't think, however, that the answer is obvious. The economic impact of textbooks and curricula is likely to be very small. Grants funding curriculum development and textbooks are typically small and don't cover the amount of effort involved. I suspect that for texts that sell in high volume (e.g. for early undergraduate courses taken by many students), authors would opt to turn down public funding because the potential gains from owning the rights would be greater. For more advanced and specialty subjects, where it is barely profitable to write a text anyway, authors would accept public money to write the text and make it open, since they would be unlikely to see any appreciable income anyway. This might work out well for all involved. At the very least, publishers would need to determine a way to deal with open content. But the proposal requires a nuanced cost-benefit analysis at the national scale.

Comment Difference in degree or in kind? (Score 1) 712

Trying to estimate the rate of technological development across decades is a phenomenally tricky business. Looking at the past, ideas that were incremental can be lumped together and thought of as revolutionary, and revolutionary ideas can be re-imagined as merely incremental. The telephone was developed in the 1870s. But the telegraph, a perfectly good way of sending binary data down a wire, had already existed for decades. Telephony brought analog signals into the mix. It wouldn't be until the 20th century that engineers figured out how to multiplex multiple (analog) signals on a single wire. Or developed feedback amplifiers that permitted signals to be sent across North America using a reasonably sized conductor. Or developed an automated switch to replace the human operators who physically connected your circuit to the party you wished to call. Or realized that, rather than multiplexing analog signals, it was more efficient and reliable to digitize the signal and use packet switching on a digital communications network (back to digital data, like the telegraph!). Or set up networks of locally operating radio towers (cells) that provide a mobile telephone with seamless coverage as it travels from one place to another.

Which of these is simply incremental? Which is revolutionary? Is the 1880s telephone itself the major revolution? Note that some buildings and ships already existed with tubes designed into them for communicating (i.e. shouting) between rooms. The telephone replaces the tube with wires, borrowing the idea from telegraphy, to achieve the same purpose. Were the innovations that followed (multiplexing analog signals, feedback amplifiers, automated switching, packet switching, cell networks) incremental or revolutionary? It's difficult, I'd actually suggest impossible, to make a definitive claim.

My claim is simply that measuring the rate of technological progress is a subtle and tricky business. I believe the best we can really hope for is to find metrics with narrow domains. For example, the number of transistors in an IC (Moore's law), the annual output of technical papers, or the speed at which DNA can be sequenced. These metrics only truly measure what they say they measure (transistors, papers, and sequencing speed). We may try to infer progress rates from such metrics, but logical errors arise when the metrics are used to predict progress outside their domain (e.g. using transistor counts to predict artificial intelligence). The author of the article makes estimation errors similar to those of the singularity folks when discussing technological progress, just biased in the opposite direction.
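To make concrete what a "narrow-domain" metric looks like, here's a minimal Python sketch of Moore's-law-style extrapolation; the two-year doubling period is the usual rule of thumb, and the only fact baked in is the Intel 4004's roughly 2300 transistors in 1971:

```python
# A narrow-domain metric: transistor count under an assumed fixed
# doubling period. It predicts transistors, and only transistors.
def transistor_count(year, base_year=1971, base_count=2300.0,
                     doubling_years=2.0):
    """Extrapolated transistor count, assuming exponential growth."""
    return base_count * 2.0 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011):
    print(y, f"{transistor_count(y):.2e}")
# Nothing here licenses conclusions about, say, artificial
# intelligence; the metric's domain ends at the count itself.
```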

I am not a believer in the singularity. But it is fun to think about the future, so I cut the singularity folks some slack on their over-the-top predictions. I just don't take them seriously. I do believe futurologists do some good in encouraging people to think about possible futures and what it takes to achieve (or avoid) them. If the aim of the article is to remind us not to take the singularity folks too seriously, then I agree. But as it is written, it sounds more like "Get off my lawn!"

Comment Re:Eh. (Score 1) 677

As a freshman I took an honors calculus course with a professor who I was sure was absolutely insane. Her problems were never something we could learn to do by applying an algorithm or computation from lecture or the book; each one required cooking up some unique insight. "How was I supposed to know that!?!" I and many of my peers railed against her failure to "teach," but many years later I realized that she was trying to introduce us, however unwilling we were, to the substance of mathematics. I learned quite a bit from the course, though I didn't appreciate it until several years later.

Most undergraduate mathematics courses focus on computation. The mystery is figuring out which algorithm to apply, or how to transform the problem into a standard form so an algorithm applies. This provides good practice in problem solving (or, in the worst case, practice in following really sophisticated directions). The concept of math as art (the essence of mathematics that Lockhart writes about) only really shows up in the graduate curriculum, although it is foreshadowed in senior-level classes like "advanced calculus" (which is not really about advanced calculus, but about laying the solid foundations on which calculus is built). At the graduate level, all the problems start to look a lot more like puzzles, with no systematic way to approach them. The questions regard the truth of a given statement rather than the particular result of a calculation. Answering the question often requires the creation of an ad hoc tool.

Developing the creativity to successfully approach these unstructured problems is a useful mental skill, and like problem solving, I believe it is portable to other fields. But I think it is much more difficult to teach than the author lets on. To make progress, students must be willing to bang their heads against the wall until useful ideas pop out. Students are often unwilling to do this when they know that someone somewhere already knows the answer. And the internet makes it easy to just look up the answer, possibly replacing a self-directed deep understanding with a superficial one. The reason to work through these types of problems is that you inevitably hit problems for which you can't find the answer, and you have to reason them out on your own. Without practice, you're sunk.

(This message is not so much a direct response to you, as thoughts evoked by your "math as art" comment. Perhaps your program did a great job of motivating fundamentals the whole way through. I dunno...)

Comment Re:It is (Score 1) 599

Yay! You are the first I've seen to mention budget numbers. Since WWII, the United States has funded a significant amount of research. This was largely a result of the Cold War, in which high technology played a significant role, and it continues today because of the positive effects this research ultimately has on the US economy. Vannevar Bush played a large role in establishing the bureaucracy that funds pure and applied research. In addition to the purer science agencies you mentioned (NIH, NSF, DoE), the military also funds a significant amount of pure and applied research through DARPA, ONR, AFRL, DHS, etc.

One should note that government employees are not allowed to claim IP; their work is automatically freely usable. For this reason, many modern numerical packages (e.g. Matlab, GSL) are based on LINPACK and LAPACK, Fortran code written in the 70s and 80s at national labs and universities under U.S. government funding, freely available to anyone. Up until the early 80s, intellectual property could not be claimed on government-funded research. Of course, the problem was that the US dumped huge amounts of money into research, but the entire world benefited from the results. At that time, the finger was especially pointed at Japan, who people thought provided no innovation but took advantage of advances developed in US labs. The Bayh-Dole Act created a uniform patent policy across funding agencies and allowed federal research money to result in patents held by the researcher's organization. This helped ensure that the US benefited financially from US-funded research. It also helped lead to the current patent madness. Note that under the previous system, in which the government owned the IP, it would be nearly impossible to ensure that US companies benefited preferentially from US funding. The current structure encourages this naturally, and places the responsibility for tracking IP violations in the hands of the organization that developed the IP.
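That lineage is easy to see today. As a small illustration in Python: NumPy documents that it delegates dense linear solves to LAPACK's *gesv routines, descendants of that freely available, government-funded Fortran:

```python
# NumPy hands this dense solve to LAPACK (the *gesv routines),
# a direct descendant of the government-funded Fortran libraries
# mentioned above.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)  # LAPACK dgesv under the hood
print(x)                   # -> [2. 3.]
```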

Comment More details (Score 2, Insightful) 284

More details on the study are available in this news item from OSU.

Many variables are not considered directly in the analysis (at least in the brief writeup). For example, the sample has more grad students than undergrads, and grad students were found to be less likely to use Facebook. But grad students are selected from academic high(er) achievers, and graduate courses are generally graded on a higher curve than undergrad courses. That alone could explain the correlation. So why do fewer grad students use Facebook? Perhaps age plays a role (since not so long ago, Facebook was targeted only at undergrads). Similar arguments could be made regarding STEM students, who are more likely to use Facebook but (I suspect) are also more likely to have lower undergrad GPAs. It is very difficult to compare GPAs across disciplines without controlling for the mean GPA of each discipline.
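To see how easily pooling the two groups manufactures a correlation, here's a toy simulation (all numbers made up; within each group, Facebook use is drawn independently of GPA):

```python
# Toy confounding demo: grad students have higher GPAs AND use
# Facebook less, so the pooled sample shows Facebook users with
# lower GPAs even though Facebook affects nothing within a group.
import random

random.seed(0)
rows = []
for _ in range(500):  # grad students: higher GPAs, 30% on Facebook
    rows.append((random.random() < 0.3, random.gauss(3.6, 0.2)))
for _ in range(500):  # undergrads: lower GPAs, 80% on Facebook
    rows.append((random.random() < 0.8, random.gauss(3.0, 0.3)))

users = [gpa for fb, gpa in rows if fb]
nonusers = [gpa for fb, gpa in rows if not fb]
print(f"mean GPA, Facebook users: {sum(users) / len(users):.2f}")
print(f"mean GPA, non-users:      {sum(nonusers) / len(nonusers):.2f}")
# The gap is a pure composition effect: more of the Facebook users
# are undergrads, whose GPAs come from the lower distribution.
```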
