
Comment Re:FFS (Score 1) 398

I was hospitalized once for about 9 weeks, 2 of them in intensive care. I was on a lot of pain medication, so much that it had to be administered by an anesthesiologist. When it came time to start weaning me off, I didn't know what was happening at first. I felt sick like I had the flu. I had hot & cold flashes. I was nervous and irritable and couldn't sleep. The sheets had to be changed twice a day because I was sweating so much. It was awful. And that was in a controlled environment, weaning me off over a period of many days.

When I asked a nurse what was going on, she told me I was going through withdrawal from all the pain meds I had been on. I was certainly addicted at that point, and going through physical withdrawal. I wasn't going through psychological withdrawal though, and I think that's because my brain had never made an association between the high and any behavior of mine. The withdrawal was terrible, but I never thought "I'd feel better if I just got some more." The drugs were being administered by the doctors, in a time-release way throughout the day. If I had taken the same amount of morphine and Oxycontin and whatever else on my own, in a way that allowed my brain to associate it with a behavior (shooting the drug or whatever), then I would have been an addict by then.

I think that's partly how nicotine replacement therapy works, too. You put on a patch and get a steady dose of nicotine throughout the day, breaking the link between the drug and the behavior (smoking). You don't go through physical withdrawal, because you're getting nicotine, but you still miss the behavior. As long as you don't smoke, eventually the brain stops associating the drug with the behavior. Then when it comes time to wean yourself off the patch, your brain is in the situation I was in in the hospital: it wants the drug, but that no longer motivates the behavior of smoking, since the association has been broken.

It has been proven many times that addiction is a result of miserable living conditions - social, economic, and/or psychological - not a result of the drugs themselves. The addiction is to the relief of pain, whether physical or psychological.

I don't think that is true: some drugs are inherently addictive. However, I have seen studies showing that the brain doesn't really distinguish emotional pain (from a shitty life, abusive relationships, whatever) from physical pain. That could explain why people in those situations are more willing to take risks and keep using even as the negative consequences stack up, but once you're addicted you're addicted -- pain or not.

Comment Re:amazing (Score 1) 279

On the other other hand, brains are orders of magnitude more energy efficient. I don't know if the efficiency is even related to the parallelism, asynchronicity, and ultra-low "clock speed" of the brain, but it seems plausible that it is. The brain is optimized for efficiency above all else, whereas we have so far made the opposite trade-offs with computers.

We're doing that "real-time 3D vision and context-sensitive pattern recognition" with a few watts. Doing it with a bunch of GPUs would take thousands of watts at least. Doing it on a serial CPU isn't totally impossible, but it would require a ludicrous clock speed and a few orders of magnitude more energy.

Comment Re:If we heard the guy... (Score 1) 421

Yes, let's go back to the caves and live like Noble Savages.

For another point of view, see this talk by David Deutsch from 2005. He rambles for a while (in an entertaining way) before getting to the point. Here's the ending:

So let me now apply this to a current controversy, not because I want to advocate any particular solution, but just to illustrate the kind of thing I mean. And the controversy is global warming. Now, I'm a physicist, but I'm not the right kind of physicist. In regard to global warming, I'm just a layman. And the rational thing for a layman to do is to take seriously the prevailing scientific theory. And according to that theory, it's already too late to avoid a disaster. Because if it's true that our best option at the moment is to prevent CO2 emissions with something like the Kyoto Protocol, with its constraints on economic activity and its enormous cost of hundreds of billions of dollars or whatever it is, then that is already a disaster by any reasonable measure. And the actions that are advocated are not even purported to solve the problem, merely to postpone it by a little. So it's already too late to avoid it, and it probably has been too late to avoid it ever since before anyone realized the danger. It was probably already too late in the 1970s, when the best available scientific theory was telling us that industrial emissions were about to precipitate a new ice age in which billions would die.

Now the lesson of that seems clear to me, and I don't know why it isn't informing public debate. It is that we can't always know. When we know of an impending disaster, and how to solve it at a cost less than the cost of the disaster itself, then there's not going to be much argument, really. But no precautions, and no precautionary principle, can avoid problems that we do not yet foresee. Hence, we need a stance of problem-fixing, not just problem-avoidance. And it's true that an ounce of prevention equals a pound of cure, but that's only if we know what to prevent. If you've been punched on the nose, then the science of medicine does not consist of teaching you how to avoid punches. (Laughter) If medical science stopped seeking cures and concentrated on prevention only, then it would achieve very little of either.

The world is buzzing at the moment with plans to force reductions in gas emissions at all costs. It ought to be buzzing with plans to reduce the temperature, and with plans to live at the higher temperature -- and not at all costs, but efficiently and cheaply. And some such plans exist, things like swarms of mirrors in space to deflect the sunlight away, and encouraging aquatic organisms to eat more carbon dioxide. At the moment, these things are fringe research. They're not central to the human effort to face this problem, or problems in general. And with problems that we are not aware of yet, the ability to put right -- not the sheer good luck of avoiding indefinitely -- is our only hope, not just of solving problems, but of survival. So take two stone tablets, and carve on them. On one of them, carve: "Problems are soluble." And on the other one carve: "Problems are inevitable." Thank you. (Applause)

Comment Re:on starting with smaller-scale albedo modificat (Score 1) 421

you can't impose these kinds of burdens (financial and otherwise) on people without the certainty that they'll make things better.

That's insane. You have to weigh the uncertainty against the consequences if the predictions are right. There will always be some uncertainty... even if just manufactured uncertainty. You're just burying your head in the sand.

Comment Re:disclosure (Score 1) 448

Honestly it is better that he doesn't, otherwise all the papers would simply be attacked ad hominem based on who pays the grants. You want to discredit his work, attack the science in it, not the funding for the science.

The same could be said about publishing the names of the researchers and the institutions they are associated with. That will affect how a paper is received much more than the source of funding will, but the paper should speak for itself whether published by a world-famous scientist at a prestigious university or a high school student working in his garage.

Unfortunately credibility matters. Most people (including scientists) are unable to judge most research entirely on its merit, so they rely on the opinions of those who can. But how do you judge their credibility? Even if the reviewers are completely fair and honest, the volume of research is too high to review everything. Some measure of credibility (e.g. a degree, or a recommendation) is necessary to even be considered. You could just publish everything without regard to its merit, but then how can you call that science? No matter how egalitarian your attitude, there is no escaping the need for credibility. The reputations of the scientists, institutions and journals, and the sources of their funding all help to establish credibility.

When researchers publish bad science, their reputation suffers. Hopefully if a corporation or special interest group continues to fund bad science, the research they fund will also lose credibility and be met with increased skepticism. Eventually scientists won't want to risk their own reputation by accepting the money. But that can only work if the information is disclosed.

Even if the science is good, it is not unfair to be skeptical toward research funded by someone with an obvious stake in the outcome. Just getting a larger number of papers published on one side of a controversial issue gives that side the appearance of increased legitimacy. And grant money always has an effect on researchers even when the money supposedly comes with "no strings attached."

That's the problem with science, insofar as there is a problem: it's ultimately a social and political process done by fallible people. Established scientists and institutions have an unfair advantage and socially unpopular or inconvenient ideas can be suppressed, sometimes for generations. And it can be affected by money. (It's still far better than the alternatives.)

Comment Re:Some misconceptions (Score 1) 319

Languages aren't compiled or interpreted: implementations are.

That's true in theory, but utterly false in practice. Major design choices hinge on whether the language is intended to be interpreted or compiled. Interpreting languages that are intended to be compiled can be done, but it usually amounts to compiling in the background and doesn't work out well. Compiling languages that are intended to be interpreted typically results in an order of magnitude slower performance compared to a language designed to be compiled. The information needed to optimize the compiled code just isn't there, so the compiler cannot eliminate type checks, type coercions, bounds checks, overflow checks, etc. Typically all function calls are virtual and many memory accesses are doubly indirect.
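To make that concrete, here's a hedged little sketch (TypeScript syntax, but the point is about plain javascript semantics; the function and values are made up). Nothing in the source tells the compiler what types '+' will be asked to handle, so any fast path the engine emits is a guess that can be invalidated at runtime:

```typescript
// A JIT like V8 watches what types a call site actually receives and
// speculates. While add() has only ever seen numbers, it can emit a raw
// machine add behind a cheap type guard; the first call with strings
// invalidates that speculation and forces a fall back to a generic,
// much slower path that handles every case '+' allows.
function add(a: any, b: any): any {
  return a + b; // could be a number add, string concat, valueOf() calls...
}

console.log(add(1, 2));     // call site is monomorphic so far: fast path
console.log(add("1", "2")); // now it isn't: the speculative code is thrown out
```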

Node.js isn't fast. It's concurrent. You can handle many thousands of simultaneous requests, as long as none of them are working particularly hard.

That's not what concurrent means in this context. The word you're looking for is "asynchronous". All of the javascript code you write will execute on a single thread in Node.js. Some APIs are asynchronous with a callback. That style of programming is much older than me and I'm a greybeard. Asynchronous code is great if you need the performance and can deal with the complexity but it shouldn't be the only option. And seriously, Javascript? for performance? really?
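For anyone who hasn't seen the style being described, here's a minimal sketch of it (the file path is just a placeholder): one thread, non-blocking I/O, and a callback that runs later on that same thread.

```typescript
// Everything here runs on a single thread. readFile() hands the I/O off to
// the OS and returns immediately; the callback is queued onto the event loop
// and runs on that same thread once the data is ready.
import { readFile } from "fs";

console.log("before the read");

readFile("/etc/hostname", "utf8", (err, data) => {
  if (err) {
    console.error("read failed:", err.message);
    return;
  }
  console.log("file contents:", data.trim());
});

console.log("after the read"); // prints before the callback ever fires
```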

Server-side web programming can have a lot of in-flight requests being handled simultaneously, and not much need for synchronization because the requests are relatively independent (and the heavy lifting of dealing with race conditions is being handled by the database, operating system, file system and other libraries not written in javascript.) Real concurrent programming has much more data passing between threads of execution and the Node.js design of one single-threaded process per core is going to really suck for that.
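To be concrete about what "one single-threaded process per core" looks like, here's a hedged sketch using Node's cluster module (the port number is a placeholder, and isPrimary assumes a reasonably recent Node; it used to be isMaster). Each worker is an ordinary single-threaded process with its own event loop and no shared memory, so anything passed between workers has to be serialized and sent as a message:

```typescript
import cluster from "cluster";
import { cpus } from "os";
import { createServer } from "http";

if (cluster.isPrimary) {
  // Fork one single-threaded worker per core.
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker runs its own event loop; under cluster they can all
  // listen on the same port and connections are spread across them.
  createServer((req, res) => {
    res.end(`handled by worker pid ${process.pid}\n`);
  }).listen(3000);
}
```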

Look at this stackoverflow question about Node.js and multi-core processors (scroll down to the more up-to-date answer.)

"Node.js is one-thread-per-process. This is a very deliberate design decision and eliminates the need to deal with locking semantics. If you don't agree with this, you probably don't yet realize just how insanely hard it is to debug multi-threaded code."

That made me actually laugh out loud. Threads were invented because they are easier than asynchronous programming. Asynchronous programming has its own pitfalls. Writing asynchronous libraries is hard, most programmers don't have much experience with it, and many existing libraries can't be used because they're not asynchronous. It's all or nothing: one blocking call (or just one long-running call) and every in-flight request in that process is stalled, and you have one core doing nothing. The node.js people will tell you that if you're writing long-running code in javascript, you're probably doing something wrong, completely contradicting the supposed advantage of being able to do client and server code in the same language.
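Here's a minimal sketch of that failure mode (the port and timings are made up): one synchronous route ties up the only thread, so every other request in that process just waits.

```typescript
import { createServer } from "http";

// Busy-wait to simulate long-running synchronous work. While this loop runs,
// nothing else on this process's event loop can run -- not even the callbacks
// for other connections.
function blockForSeconds(seconds: number): void {
  const end = Date.now() + seconds * 1000;
  while (Date.now() < end) {
    // spin
  }
}

createServer((req, res) => {
  if (req.url === "/slow") {
    blockForSeconds(5); // stalls every in-flight request for 5 seconds
    res.end("finally done\n");
  } else {
    res.end("fast\n"); // instant -- unless /slow is currently running
  }
}).listen(3000);
```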

It's an intentional design decision all right, but not for that reason. The fact is that javascript engines like V8 were designed from day one to run in browsers and are inherently single-threaded. Now that they've escaped the browser, they're trying to make their single-threaded nature out to be a feature. They're hyping asynchronous single-threaded code because that's the only option. The browser implementers have zero interest in adding multi-threaded javascript support. Adding threaded javascript to browsers would be very difficult, to say the least, and a rich source of new bugs, and the browsers just don't need it. Adding threading to just the server would fracture the language and remove the only advantage Node.js has. So instead Node.js tries to spin the lack of threads as a feature.

The performance "gains" people talk about with Node.js are entirely due to being single-threaded and asynchronous. They're comparing asynchronous Node.js code to e.g. threaded Java. It's been well known for decades that if you want ultimate performance you have to eliminate the overhead of context switches and go asynchronous. Again, big deal. We've known that since forever, and you can do it in many languages. C# has better language and library support for asynchronous operations. C++11 has excellent support as well. And if you want to see the ultimate in high-performance asynchronous programming, just look at any OS kernel. All mainstream OSs (Linux, Windows, OS X, iOS, BSD) are written this way, in plain old C. So most of the hype around Node.js is just because they've accidentally rediscovered asynchronous programming because they can't have threads.

Node.js : The speed of a dynamic interpreted language combined with the programming simplicity of low-level asynchronous systems code.

Exactly what collision course are we talking about?

The collision is that javascript is becoming the de facto standard "byte code" and virtual machine target that other languages compile to, which is traditionally Java's (and somewhat .NET's) turf. TFA completely misses that point. The javascript VM is already deployed everywhere and the Java and .NET (and other) VMs definitely are not. Plus, for better or worse, HTML5 is a cross-platform GUI framework that actually works, and both programmers and users are familiar with it. Java GUI sucks everywhere; .NET GUI sucks everywhere but Windows.

If you can write code in Java, C#, Python, or even C++, run it directly on the server, and compile it to javascript on the client, then you can use one language for the client and the server and it doesn't have to be javascript. This is both an incredibly great idea and a horrible idea at the same time. It's great because a ubiquitously deployed VM, with reasonable security and with cross-platform GUI capabilities that don't totally suck is exactly what we need. It's horrible because javascript is just the wrong tool for that, but it's what we have. As bad as it is, it can actually work pretty well in practice. Ironically the only thing really missing is threads.
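As a hedged sketch of the "one language on both ends" idea (written in TypeScript here only because it compiles straight to javascript; the file name and validation rules are made up), this is the kind of code you get to write once instead of twice:

```typescript
// shared/validateSignup.ts
// Compiled to javascript for the browser and imported as-is on the server,
// so the client-side and server-side validation rules can't drift apart.
export interface SignupForm {
  email: string;
  age: number;
}

export function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email)) {
    errors.push("email does not look valid");
  }
  if (!Number.isInteger(form.age) || form.age < 13) {
    errors.push("age must be a whole number, 13 or older");
  }
  return errors;
}
```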

This video is a pretty funny take on javascript as a bytecode.

Comment Re:So what? (Score 1) 412

St. John's Wort is no more effective than placebo.

(Sources: Search for "effectiveness of X" and pick nih.gov or webmd)

OK, sounds like fun! Let's see... google "effectiveness of st john's wort"... pick the first NIH or WebMD link. Got it, that'd be this one:

Is there scientific evidence that supports the use of St. John's wort for depression?

There is some scientific evidence that St. John's wort may be helpful in treating mild depression, and the benefit seems similar to that of antidepressants. However, two large studies, one sponsored by the National Center for Complementary and Alternative Medicine (NCCAM), showed that the herb was no more effective than placebo in treating major depression of moderate severity; ironically, the conventional drugs also studied did not fare any better than placebo, either.

Hmm. So, according to the first link (that you recommended) St John's wort is about as effective (or ineffective) as conventional drugs. Only cheaper and with far fewer side effects (Source: ask anyone who's taken a conventional antidepressant)

Meanwhile, Wikipedia (with references) says:

An analysis of twenty-nine clinical trials with more than five thousand patients was conducted by Cochrane Collaboration. The review concluded that extracts of St John's wort were superior to placebo in patients with major depression. St John's wort had similar efficacy to standard antidepressants.

And what about side effects?

The rate of side-effects was half that of newer SSRI antidepressants and one-fifth that of older tricyclic antidepressants.[9] A report[9] from the Cochrane Review states:
The available evidence suggests that the Hypericum extracts tested in the included trials a) are superior to placebo in patients with major depression; b) are similarly effective as standard antidepressants; and c) have fewer side-effects than standard antidepressants. [...] St John's wort is generally well tolerated, with an adverse effect profile similar to placebo.[21]

Follow through with the references at your leisure.

Comment Re:So what? (Score 1) 412

Sorry, I think you fail this one.

It's not unscientific at all to assume a claim is not true if there is no evidence for or against it.

If there is no evidence for or against a claim, then there is no reason to assume anything about it. Assuming it is not true is not a valid conclusion from no evidence.

Come on, people, this is too simple to fuck up. Here's a simple technique to avoid this stupid mistake: You are not required to update your beliefs whenever someone makes a claim. Assuming they are wrong is just as stupid as assuming they are right. If there's no evidence for or against, then the claim should have no effect on your belief.

Comment Re:So what? (Score 1) 412

Perversely, if they manage to prove effectiveness they then fall under the FDA.

Herbal supplements fall under the FDA anyway. You are aware what the F in FDA stands for, right?

Given that the FDA manages to simultaneously drive up costs and fails to provide safety, they want nothing to do with that.

Whether the FDA makes the cost of prescription drugs unnecessarily high is up for debate. Given some of the recently approved drugs that turned out to be deadly, it doesn't seem obvious that they do. But their track record on food safety is excellent, in my opinion. Unless you eat a ridiculous amount, there isn't any food you can buy in the supermarket (including herbal supplements) that is an immediate safety risk. What foods are healthy for you in the long run is an entirely different issue.

So until the FDA is reformed to stay on-mission and avoid extreme costs for no benefit, they will continue to stay far away from spending money to prove effectiveness.

It's not the FDA's job to prove the effectiveness of herbal supplements. Never has been.

Comment Re:More ambiguous cruft (Score 1) 514

How is this a Troll? Because it goes against the groupthink?

GMO is different, it's a fundamentally different approach to breeding plants which goes way beyond breeding, and it permits outcomes which were not feasible or even possible before. That is cause for alarm. It's not reason not to experiment, of course. Science is how we progress as a species. I object to using the wide world for these experiments, not to doing the science.

This is a perfectly reasonable point of view. It is objectively true that GMO is different from traditional breeding methods. How many generations of selective breeding would it take to breed a glow-in-the-dark strawberry plant? I have no idea, but I bet if you started at the dawn of agriculture you still wouldn't have one. What would you even select for? But now we can do that directly with genetic engineering, in one generation. Genetic engineering of crops is a second agricultural revolution, except with even more potential impact on both human health and the health of ecosystems.

And what effect did the first agricultural revolution have on human health? It was good, right? Not necessarily:

When populations around the globe started turning to agriculture around 10,000 years ago, regardless of their locations and type of crops, a similar trend occurred: The height and health of the people declined.

... "Many people have this image of the rise of agriculture and the dawn of modern civilization, and they just assume that a more stable food source makes you healthier," Mummert says. "But early agriculturalists experienced nutritional deficiencies and had a harder time adapting to stress, probably because they became dependent on particular food crops, rather than having a more significantly diverse diet."

Sound familiar?

... "Culturally, we're agricultural chauvinists. We tend to think that producing food is always beneficial, but the picture is much more complex than that," says Emory anthropologist George Armelagos, co-author of the review. "Humans paid a heavy biological cost for agriculture, especially when it came to the variety of nutrients. Even now, about 60 percent of our calories come from corn, rice and wheat."

... "I think it's important to consider what exactly 'good health' means," Mummert says. "The modernization and commercialization of food may be helping us by providing more calories, but those calories may not be good for us. You need calories to grow bones long, but you need rich nutrients to grow bones strong."

People have become healthier because of agriculture, but not because the food is healthier--it probably isn't healthier. Rather, we've become healthier in the long run because agriculture allowed us to produce enough food to have doctors and clean water and sanitation. In the short term, agricultural food made us less healthy.

In principle, a genetically engineered food supply could be overall better, maybe even incredibly better. But in practice it isn't clear we're getting that, or even if that's what we're trying to get. Instead, we're just continuing to make food even cheaper, not necessarily healthier, with even more dependence on particular crops.

Agriculture freed enough people from the burden of feeding themselves to create modern civilization. But today almost nobody is a farmer--we're just making agriculture more profitable, but at what cost? Genetic engineering so far has mostly been used to maximize crop yield over nutrition and diversity, just like we've always done.

Which is to say nothing about the potential ecological implications. "The picture is much more complex than that" is an understatement. This won't be the first time we've forged ahead with some technology completely oblivious to the ecological impacts.

Comment Re:If it's accessing your X server, it's elevated (Score 3, Insightful) 375

The basic misunderstanding here is the idea that the screen lock in old X was designed for security, and usable as such; it was just a screensaver with a password

What use is a screensaver with a password that isn't designed for security? Why is the password even there? So it looks secure? Let's just admit it was poorly designed from a security standpoint. That's fine, most stuff designed at that time was not secure. MS-DOS had no security at all. Pointing out that NT occasionally has some good ideas is not an indictment of Unix.
