Comment Identifying the unknown inspires students (Score 3, Interesting) 236

When the instructor effectively places the material they are presenting in a larger framework that includes the unknowns, it is often quite inspiring. Textbooks in mathematics and physics are the worst in this regard: they present their material as the complete story on the subject, and that leaves students bored. Even a little explanation of the complex problems being sidestepped by the way the course material was chosen can greatly enliven a course. Even better, the students come out with an understanding of where the methods they learned will work and where they will not.

Comment Re:We've been here before (Score 1) 262

No, you are pretty precisely wrong. Elon and Gates made their fortunes in the software business but don't work in exactly the niche of AI. In exactly the same way, Einstein worked in relativity and a bit in quantum mechanics, not nuclear physics. And while AI and nuclear explosions are totally different, the level of understanding of the possibilities now is not all that different from 1939. At least, you give no reason beyond personal incredulity for the claim that there is no feasible way for this to happen in the next century.

The comparison to the risk of igniting the atmosphere is also precisely wrong. That was brought up as a possibility in the 1940s; the experts evaluated it and concluded it was extremely improbable. Strong AI, on the other hand, is estimated by many experts to be very likely over the next century. The main question is whether it will be a threat.

It is the next century or two that Musk, Gates, and others are warning of. And it is quite short-sighted to dismiss the threat with 'there is no feasible way for this to happen' right now.

Comment We've been here before (Score 2) 262

It has happened before that the smartest people in the world warn that technological advances may present major new weapons and threats. Last time it was Einstein and Szilard in 1939 warning that nuclear weapons might be possible. The letter to Roosevelt was three years before anyone had even built a nuclear reactor and six years before the first nuclear explosion. Nuclear bombs could easily have been labelled a "problem that probably does not exist." And if someone could destroy the planet, what could you do about it anyway? The US took the warning seriously and ensured that the free world, and not a totalitarian dictator, was the first capable of obliterating its opponents.

This time Elon Musk, Bill Gates, and Stephen Hawking are warning that superintelligence may make human intelligence obsolete. And they are dismissed because we haven't yet made human level intelligence and because if we did we couldn't do anything about it. If it is Musk, Gates, and Hawking vs Edward Geist, the smart money has to be with the geniuses. But if you look at the arguments, you see you don't even have to rely on their reputation. The argument is hands down won by the observation that human level artificial intelligence is an existential risk. Even if it is only 1% likely to happen in the next 500 years, we need to have a plan for how to deal with it. The root of the problem is that the capabilities of AI are expanding much faster than human capabilities can expand, so it is quite possible that we will lose our place as the dominant intellect on the planet. And that changes everything.

Comment Re:Blimey (Score 2) 518

The problem is that the explanations we can get our hands on are so obviously problematic. Take the simple Newtonian mechanics diagram in the original post. They seem to be implying that the radiation pressure on the front of the cavity is larger than the radiation pressure on the back. That is fine. If that is the case, then there is a transfer of momentum to the radiation field. Where does this momentum go? If it goes out the back, then this is a simple and well understood phenomenon that isn't powerful enough to propel a rocket. Any flashlight is an EM drive...photons out the back, momentum opposite the light beam. It is just not an efficient way to turn energy into momentum. But they are claiming that some net momentum is produced without any photons out the back. And that is simply impossible without overturning Noether's theorem or establishing a way for space-time to be inhomogeneous on the scale of the spacecraft. It is really easy to get stray but reproducible momentum sources (see the Pioneer anomaly). It is much, much harder to replace well established fundamental physics that is derived from principles as simple as the homogeneity of space.
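To see why the flashlight version is so inefficient, here is a quick back-of-envelope calculation (my own numbers, not from the original post): a collimated beam of power P carries momentum flux P/c, so the thrust of a perfect photon rocket is F = P/c, which works out to a few micronewtons per kilowatt.

```python
# Photon-rocket thrust: a beam of power P carries momentum flux P/c.
c = 299_792_458.0  # speed of light in vacuum, m/s

def photon_thrust(power_watts: float) -> float:
    """Thrust in newtons from radiating power_watts as a collimated beam."""
    return power_watts / c

# One kilowatt of perfectly collimated light pushes back with
# only about 3.3 micronewtons of thrust.
print(photon_thrust(1_000.0))
```

That is why "photons out the back" cannot explain thrusts large enough to interest rocket builders at these power levels.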

Comment Re:Blimey (Score 5, Insightful) 518

Which is exactly why this should be presented as a breakthrough in physics with the standards of verification and publication used in physics rather than announced as a way to propel rockets. Using the standards of a breakthrough in physics, they have an anomalous experiment. Now they need to replicate it under more idealized conditions and then we'll evaluate whether to give them a Nobel Prize. Almost certainly this anomalous experiment will turn out to be an experimental error or misinterpretation since the theory of conservation of momentum they are claiming to violate is so extremely well corroborated.

Comment Re:In other words... (Score 4, Insightful) 327

Indeed, presentation tools can't compensate for poor skills in creating or giving presentations. Do people remember before PowerPoint? At the scientific conferences I attended, as often as not people were throwing unreadable transparencies onto the projector at a rate significantly faster than anyone in the audience except their collaborators could comprehend the concepts. Now they just flip through readable but incomprehensible PowerPoint slides. It's the humans you have to fix, not the technology.

Comment Re:This again? (Score 1) 480

It is very hard to believe that they are going to send a propulsion system into space without a clear understanding of how it works. They claim that they have a device that violates very basic physics. They shouldn't be thinking about space flight at all. They should be asking the best experimentalists in RF cavities to collaborate with them to win the Nobel prize that will be given to anyone who shows a human scale object that violates the known laws of physics. It would be easily the most important discovery of the last 50 years. But for that same reason, it is 99.9% likely to be a misinterpretation of their experimental results.

Comment Re:With the best will in the world... (Score 1) 486

Yes, the original article combines three quite different processes: electrolysis, CO2 + H2 -> diesel, and CO2 capture. If you look at the full cycle efficiency including extracting the CO2 from the air, then the efficiency is likely very low. Electrolysis can be pretty efficient; I think it is straightforward to achieve 70%. Maybe the CO2 + H2 -> diesel process can be made 50% efficient. Afidel above says 50-60% max, and that seems optimistic but not obviously wrong. But if you also have to extract the CO2 from the air, then your efficiency is going to be much lower. If this process is commercialized, I suspect it will use high concentration CO2 from power plant emissions. If you have to extract CO2 from the air, I would guess you would be happy to get 10% of the input electrical energy returned as heat by burning the diesel fuel. And of course the diesel engine will only be 30% or 40% efficient or so, so on those (very rough) estimates, a vehicle driven on this fuel would require something like 25 times more renewable energy than the same vehicle powered directly by electricity. That still may be useful. Sometimes there is excess renewable energy, and this could be a way to put some of it into long term storage. And if you are on a nuclear aircraft carrier, you may be quite happy to obtain jet fuel from electricity even if the efficiency is low.
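Putting those guesses together (every number below is one of my rough assumptions from above, not measured data), the chain multiplies out like this:

```python
# Back-of-envelope efficiency chain for electricity -> synthetic diesel -> motion.
# All figures are rough guesses, not measurements.
electrolysis = 0.70      # electricity -> H2
fuel_synthesis = 0.50    # CO2 + H2 -> diesel (optimistic)
print(electrolysis * fuel_synthesis)  # chain before CO2 capture, ~0.35

with_air_capture = 0.10  # guessed electricity -> fuel heat, incl. air capture
engine = 0.40            # diesel engine, top of the 30-40% range
fuel_path = with_air_capture * engine  # electricity -> motion via fuel

# Compared with driving directly on the electricity, the fuel path
# needs roughly 25x as much renewable energy input.
print(round(1 / fuel_path))
```

The 25x figure is only as good as the guessed 10% electricity-to-fuel number, which is the weakest link in the estimate.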

Comment Re:recent breakthrough. (Score 1) 197

No progress, you say? Seems like computers that drive cars and win at chess and Jeopardy are clearly progress. I guess if you define progress as the creation of a human level intelligence, then there will not be any progress until one arrives. But that is a useless notion of progress. I guess you are really arguing that we have a long way to go, and I would agree. Do you have any evidence for the claim that the human brain is close to the maximum interconnect possible in this universe? The name calling at the end suggests I probably should not bother to reply; we really do need to learn to be more civil online. It is a serious philosophical question: can you distinguish between intelligence and the achievement of complex goals that humans achieve using intelligence ('getting the job done')? Many of us think this distinction cannot be clearly made in the real world.

Comment Re:recent breakthrough. (Score 1) 197

There has been dramatic progress. Call it a breakthrough or not, as you like, but 15 years ago we were struggling to get basic voice recognition to work without elaborate training and controlled environments, and now we all use it many times a day. Similarly, machine translation between human languages has made dramatic progress in the past decade. Many of us suspect that your distinction between machines that get the job done and machines that are sentient may not be a substantial distinction. I haven't heard the serious AI researchers I know promising strong AI over the past couple of decades. It is easy to heap scorn on the early optimists who thought they were going to build a super-intellect with LISP and 1960s era hardware. But it is much harder to explain to the phone call center workers, paralegals, and even surgeons that their jobs are being taken by machines that don't have intelligence. And we have only just begun to learn how to build machines that integrate a wide range of sensor data with large databases to do tasks like drive a car. We are a long way from the kind of intelligence displayed by humans. But consider where we were 200 years ago, and it is pretty hard to argue that human level artificial intelligence is not possible in the next 200 years. And if it is possible, then super-human intelligence will follow very quickly. Maybe those intelligences will be very different from human intelligence, but if they outcompete us, then they are better, even if they are different. And if you think that super-human intelligence 200 years from now is not something to be cautious about, then I suggest contemplating what humans will choose to do when they feel obsolete and out of control of their future.

Comment Re:Way too many humanities majors (Score 3, Insightful) 397

I think you are right that current trends are devaluing the STEM majors. There is a big push to make these majors less 'elitist' which is code for requiring less foundational mastery of basic math and science. We really need a way to advocate for attracting underrepresented groups into STEM that does not involve changing the preparation standards required.

Comment Re:Way too many humanities majors (Score 5, Insightful) 397

Here is a quote from the Zakaria article to think about: 'Critical thinking is, in the end, the only way to protect American jobs.' His implication is that the humanities are a bastion of critical thinking. But when an introductory student is asked to do actual critical thinking where they might be wrong (i.e. introductory engineering, science, and math courses) they often conclude that they would rather go to the arts or humanities where the requirements of critical thinking are not as high.

The fundamental idea is right...that it is understanding of the human condition that will be the biggest growth area in the next few decades. But he is wrong that this is an argument for training more students in the current curricula of anthropology or classics. The future belongs to people who can take the serious critical thinking characteristic of math, science, and engineering curricula and apply it in complex situations where technical details and human behavior are both important.

Comment Re:The fallacy of labels (Score 1) 320

Yes, much of the problem lies in the difficulty of conceiving of the scientific enterprise. We inherited our labels from an era when science was just emerging as a human endeavor, which was also a time of Enlightenment optimism about the ability of human rationality to attain reliable truth free from spin and political agendas. In our era the pendulum has swung to the opposite extreme, where all ideas are assumed to be deployed in pursuit of some political agenda or other. Reality is a very subtle combination of both. But anything simplified enough to be used in the media has to be black or white. So one set of stories digs out political agendas in scientific work and calls them scandalous, while another glorifies how close we are to a 'theory of everything'. The idea that a messy human scientific process might achieve a patchwork of mostly self-consistent models of how most of our corner of the universe works is beyond description by the labels the media has available at present.

Comment Color means many things (Score 2) 420

This reminds me of the great entries from the competition to explain 'What is Color': http://www.centerforcommunicat...

By the way, I see white/lavender and brown. It would be very interesting to know what lighting/image manipulation was done to get those colors out of a dark blue and black dress.

Comment Is NIH unique here? (Score 1) 153

In the physics and engineering proposals I have reviewed, it seems that young researchers still get a significant preference in the distribution of grants. But there is a problem: the proposals from young researchers are often much weaker. It is really hard to write a great grant proposal, and new faculty members usually struggle long and hard to get good at it. You have to have great ideas, preliminary work, and a great presentation. And you have to know how to market your ideas to the diverse set of people who will be reviewing the proposal. Maybe 30 years ago people could get research grants just by describing some potentially interesting research, but in the current environment you have to write a proposal that is better than 80 or 90% of the others, and that is hard for young people to do. Maybe the bias toward younger researchers should be stronger. But I don't think it helps them to set a low bar only to have them fail to get their grants renewed. I would recommend that grant agencies more aggressively limit the number of grants that can be accumulated by the big names. No one can effectively mentor 5 post-docs and 10 graduate students, and letting them suck up all the funding just because they are able to spit out a large number of strong proposals limits the number of new researchers who can be funded.
