
Comment: Re:recent breakthrough. (Score 1) 195

by ganv (#49530639) Attached to: Concerns of an Artificial Intelligence Pioneer
No progress, you say? Computers that drive cars and win at chess and Jeopardy are clearly progress. If you define progress as the creation of a human-level intelligence, then there will be no progress until one arrives, but that is a useless notion of progress. I think you are really arguing that we have a long way to go, and I would agree. Do you have any evidence for the claim that the human brain is close to the maximum interconnect possible in this universe? The name-calling at the end suggests I probably should not bother to reply; we really do need to learn to be more civil online. It is a serious philosophical question: can you distinguish between intelligence and the achievement of complex goals that humans accomplish using intelligence ('getting the job done')? Many of us think this distinction cannot be clearly made in the real world.

Comment: Re:recent breakthrough. (Score 1) 195

by ganv (#49524827) Attached to: Concerns of an Artificial Intelligence Pioneer
There has been dramatic progress. Call it a breakthrough or not, as you like, but 15 years ago we were struggling to get basic voice recognition to work without elaborate training and controlled environments, and now we all use it many times a day. Machine translation between human languages has likewise made dramatic progress in the past decade. Many of us suspect that your distinction between machines that get the job done and machines that are sentient may not be a substantial one.

I haven't heard the serious AI researchers I know promising strong AI over the past couple of decades. It is easy to heap scorn on the early optimists who thought they were going to build a super-intellect with LISP and 1960s-era hardware. But it is much harder to explain to the phone call center workers, paralegals, and even surgeons that their jobs are being taken by machines that don't have intelligence. And we have only just begun to learn how to build machines that integrate a wide range of sensor data with large databases to do tasks like driving a car.

We are a long way from the kind of intelligence displayed by humans. But consider where we were 200 years ago, and it is pretty hard to argue that human-level artificial intelligence is impossible in the next 200 years. And if it is possible, then super-human intelligence will follow very quickly. Such intelligences may be very different from human intelligence, but if they outcompete us, then they are better, even if they are different. And if you think that super-human intelligence 200 years from now is not something to be cautious about, then I suggest contemplating what humans will choose to do when they feel obsolete and out of control of their future.

Comment: Re:Way too many humanities majors (Score 3, Insightful) 397

by ganv (#49380049) Attached to: Why America's Obsession With STEM Education Is Dangerous
I think you are right that current trends are devaluing the STEM majors. There is a big push to make these majors less 'elitist', which is code for requiring less foundational mastery of basic math and science. We really need a way to attract underrepresented groups into STEM that does not involve lowering the preparation standards required.

Comment: Re:Way too many humanities majors (Score 5, Insightful) 397

by ganv (#49379961) Attached to: Why America's Obsession With STEM Education Is Dangerous
Here is a quote from the Zakaria article to think about: 'Critical thinking is, in the end, the only way to protect American jobs.' His implication is that the humanities are a bastion of critical thinking. But when introductory students are asked to do actual critical thinking where they might be wrong (e.g., in introductory engineering, science, and math courses), they often conclude that they would rather go to the arts or humanities, where the requirements of critical thinking are not as high.

The fundamental idea is right: understanding of the human condition will be the biggest growth area in the next few decades. But he is wrong that this is an argument for training more students in the current curricula of anthropology or classics. The future belongs to people who can take the serious critical thinking characteristic of math, science, and engineering curricula and apply it in complex situations where technical details and human behavior are both important.

Comment: Re:The fallacy of labels (Score 1) 320

Yes, much of the problem lies in the difficulty of conceiving of the scientific enterprise. We inherited our labels from an era when science was just emerging as a human endeavor, a time of Enlightenment optimism about the ability of human rationality to attain reliable truth free from spin and political agendas. In our era the pendulum has swung to the opposite extreme, where all ideas are assumed to be deployed in pursuit of some political agenda or other. Reality is a very subtle combination of both. But anything simplified enough to be used in the media has to be black or white. So one set of stories digs out political agendas in scientific work and calls them scandalous, while another glorifies how close we are to a 'theory of everything'. The idea that a messy human scientific process might achieve a patchwork of mostly self-consistent models of how most of our corner of the universe works is beyond the labels the media has available at present.

Comment: Color means many things (Score 2) 420

by ganv (#49153205) Attached to: Is That Dress White and Gold Or Blue and Black?
This reminds me of the great entries from the competition to explain 'What is Color': http://www.centerforcommunicat...

By the way, I see white/lavender and brown. It would be very interesting to know what lighting/image manipulation was done to get those colors out of a dark blue and black dress.

Comment: Is NIH unique here? (Score 1) 153

by ganv (#48778295) Attached to: Fewer Grants For Young Researchers Causing Brain Drain In Academia
In the physics and engineering proposals I have reviewed, young researchers still seem to get a significant preference in the distribution of grants. But there is a problem: the proposals from young researchers are often much weaker. It is really hard to write a great grant proposal, and new faculty members usually struggle long and hard to get good at it. You have to have great ideas, preliminary work, and a great presentation, and you have to know how to market your ideas to the diverse set of people who will be reviewing the proposal. Maybe 30 years ago people could get research grants just by describing some potentially interesting research, but in the current environment you have to write a proposal that is better than 80 or 90% of the others, and that is hard for young people to do.

Maybe the bias toward younger researchers should be stronger. But I don't think it helps them to set a low bar only to see their grants fail at renewal. I would recommend that grant agencies more aggressively limit the number of grants that can be accumulated by the big names. No one can effectively mentor 5 post-docs and 10 graduate students, and letting the big names soak up the funding just because they can spit out a large number of strong proposals limits the number of new researchers who can be funded.

Comment: Re:A Word to the young bright kids out there (Score 2, Interesting) 153

by ganv (#48778101) Attached to: Fewer Grants For Young Researchers Causing Brain Drain In Academia
That advice makes sense, but it will be very hard to implement. Research grants pay graduate student stipends; I am not sure that is a subsidy so much as the way research work is paid for. The problem is that the work is done for depressed wages: the typical accomplishments of a grad student are much larger than you could get from a non-degree-seeking researcher at a similar salary. The same goes for post-docs: they are paid less with the hope that they are preparing for a step up to a permanent position soon. So implementing your system would make research much more expensive to perform.

If there are fewer graduate students doing research, then research becomes even more winner-take-all, because only the top professors will be able to support graduate students and maintain active research programs. That means even fewer faculty positions (without research funding, universities hire fewer faculty who teach more, rather than more faculty who also do research). I think a better fix is to adjust graduate programs so that they focus not on creating future researchers but on creating experts prepared for a wide range of technical jobs outside research. Research would become a smaller part of these programs, and only the top few students who wanted a research career would continue on to post-doctoral research.

Comment: Re:Why is he worried (Score 1) 583

by ganv (#48247209) Attached to: Elon Musk Warns Against Unleashing Artificial Intelligence "Demon"

I think Elon sees something that most of you do not. Artificial intelligence is not like anything else. We know very little about the kinds of intelligence that are possible. But if it is possible to build AI that is smarter and more capable than us, then it will by definition be better than us at building the next generation of itself. At that point humans are permanently obsolete, because we have no rapid methods for upgrading ourselves. It has nothing to do with who is 'using the AI' or 'who is doing the prescription'. There will be no person and no human moral intuitions in the loop at all. The intelligence that supersedes us will be doing what it wants to do. We'll be like fish debating how to control their bipedal relatives who have decided to start overfishing the oceans: it is simply out of their control. And if that doesn't scare you, then you don't understand.

We don't know whether such artificial intelligence is possible. But it seems a very reasonable possibility sometime in the next few centuries. And we know so little about intelligence that we have very little idea whether it will share anything like the moral intuitions that undergird human society. Many of us suspect those intuitions evolved for survival in hunter-gatherer tribes, and AI will evolve a very different set of criteria upon which to make its choices.

Comment: Re:Wrong distance away (Score 3, Informative) 23

by ganv (#48208985) Attached to: Two Exocomet Families Found Around Baby Star System
That error jumped out at me too. It's like describing a city 93 miles away and instead saying it is 93 million miles away, which, instead of a 1.5-hour drive, is the distance all the way to the Sun. It is really useful to get a cosmic distance scale in your head: billions of light years is the size of the visible universe; millions of light years are the distances to nearby galaxies; 30,000 light years is the distance to the center of our galaxy; 4 light years is the distance to the nearest stars.
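The factor-of-a-million mix-up and the distance ladder above can be sanity-checked with quick arithmetic. A minimal sketch (the specific figures below, like ~46 billion light years for the radius of the visible universe and 60 mph for the drive, are rough values I'm assuming for illustration):

```python
# The scale error: confusing 93 miles with 93 million miles
# is a factor-of-a-million mistake.
CITY_MILES = 93            # a nearby city
SUN_MILES = 93_000_000     # roughly 1 AU, the Earth-Sun distance

scale_error = SUN_MILES / CITY_MILES   # 1,000,000x
drive_hours = CITY_MILES / 60          # at 60 mph: about 1.5 hours

# The cosmic distance ladder from the comment, in light years
distances_ly = {
    "visible universe (radius)": 46e9,   # tens of billions
    "nearby galaxies": 2.5e6,            # e.g. Andromeda
    "center of our galaxy": 30_000,
    "nearest stars": 4.2,                # Proxima Centauri
}

print(f"scale error: {scale_error:,.0f}x")
print(f"drive time:  {drive_hours:.2f} hours")
for name, d in distances_ly.items():
    print(f"{name:>25}: {d:.1e} ly")
```

Each rung of the ladder is three to four orders of magnitude from its neighbors, which is why mixing up units here produces absurdities rather than small errors.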

Comment: Re:Amusing (Score 1) 350

by ganv (#48174057) Attached to: The Physics of Why Cold Fusion Isn't Real
We discovered that we are made of atoms only a little over a century ago, and our ignorance is vast. But we should also be careful not to err on the side of blindly assuming that anything is possible. It is essential to think clearly about what might and might not be possible, based on what we know now, in order to direct our investigations. Will we discover new laws of physics relevant to releasing energy from nuclear reactions? I suspect the answer is probably no, and the reason is the high precision our current theories achieve in describing the behavior of atoms and nuclei. Careful measurements of nuclear excitation energies, fusion cross sections, and the like agree with theoretical calculations, often to many significant digits. There just isn't much place to hide new physics in this energy range. Of course new fundamental discoveries (dark matter, etc.) are very likely; they just are unlikely to change our predictions for nuclear phenomena by a quantitatively significant amount. Would it be better to stay open-minded because one can never be sure? (See http://www.preposterousunivers...) Or is it better to make audacious, falsifiable hypotheses, such as the hypothesis that we already know the laws underlying the physics of everyday life?

That doesn't tell us much about how to engineer processes that obey the known laws of physics. Predicting what humans will be able to do is very difficult, and people regularly get it badly wrong in both directions, too optimistic and too pessimistic. In my mind, good hypotheses based on careful consideration of the best evidence are never premature. They just might be wrong.

Comment: Re:Useful but physics? (Score 1) 243

by ganv (#48126317) Attached to: 2014 Nobel Prize In Physics Awarded To the Inventors of the Blue LED
You imply that string theory and fundamental particle physics must have practical relevance like the quantum theories of atomic and nuclear physics of 100 years ago. But they are dramatically different. At that point, physicists didn't understand what matter was made of. (And despite a few notorious quotes, the best scientists knew that they didn't know how to explain atoms and chemistry.) Now we can't find anything in our galaxy that deviates from our current theories. There just are not going to be any practical applications of the Higgs boson or dark energy (at least not for many thousands of years). If you are among those who think that physics is discovering new fundamental laws while engineering is using those laws to understand and control phenomena we care about, then your version of physics is ceasing to be relevant. Instead, physics is really the attempt to explain and control the world we live in using our knowledge of the fundamental laws. That kind of physics is slowly taking over all of science and engineering.

Comment: Re:Useful but physics? (Score 2) 243

by ganv (#48085241) Attached to: 2014 Nobel Prize In Physics Awarded To the Inventors of the Blue LED
Maybe we can try to help those ignorant of applied physics, but you may be right that they are hard to help...

There is a fantasy that lives on and on that physics is only the search for the fundamental rules of how the universe works. Physics does include the search for the most fundamental theory: things like trying to detect the Higgs boson or understand dark energy. But those two pretty nicely define 'irrelevance' to the everyday lives of humans. If physics is only about the search for fundamental rules, then physics is essentially over as an enterprise with practical relevance. (See http://www.preposterousunivers...)

But the overwhelming majority of physicists have long been working on applications of known fundamental physics to discover new emergent laws and new technological applications. Semiconductor and device physics is one of the great successes of 20th-century physics, and this achievement of fabricating gallium nitride, with its large bandgap, was a major advance in the fundamental science of crystal growth and in high-frequency electronics, as well as in the production of blue light. This is exactly the kind of prize that should be given, because we need the next generation of physicists to be finding fundamental problems with practical relevance rather than spending their talents on interesting but economically useless tasks like string theory.

I predict that in the rest of the 21st century there will be more Nobel prizes in physics given for biological, environmental, and neuroscience applications of physics than for fundamental particle physics. If not, then the Nobel prize will be overshadowed by the Kavli prize or some other prize that recognizes accomplishments with consequences for humans.

Comment: Propose the risky ideas after you demonstrate them (Score 1) 348

by ganv (#47877191) Attached to: When Scientists Give Up
"The reviewers who decide which projects receive funding are risk-averse."

This hasn't been my experience. Reviewers and grant officers want to fund high-risk/high-reward science. But you are competing with others who have already tried a bunch of risky ideas and are proposing only the ones that happened to work. You basically have to make a significant discovery before you can be funded; then you can get funding to bring that idea to full bloom and, hopefully, fund a few risky side projects that will serve as the basis of the next grant proposal.

Most new ideas are bad ideas, so funding agencies have to have a pretty rigorous filter to sort out the promising ones. As a result, it will always be very hard to get funding to explore an idea before there is evidence that it is on the right track.
