...but how do they sign up for that email account?
Via your mobile phone number; Gmail has been doing that for new accounts for a while.
Sounds insanely inefficient to me. Maybe there needs to be some competition to remove the inefficiencies. i.e. no, or at least highly restricted, patent monopolies.
I think you're missing the fundamental point of patents. If there is no temporary monopoly on a novel drug, what is to prevent a bunch of bottom-feeders from simply copying it and selling it at a tenth of the price? It's far easier to copy someone else than to come up with something genuinely new, especially with a product that's so ridiculously easy to reverse engineer. On the other hand, the fact that one company has a drug that treats heart disease does not prevent another company from making an entirely different drug to treat heart disease. (Unless it's one of those sleazy cases like Ariad Pharmaceuticals and their NF-kappaB patent, which basically prevented anyone from developing drugs that altered that pathway. Fortunately, the courts eventually nixed this.)
The pure research is mostly done off of NIH or DOE grants. The only drug-money research is the attempt to add an extra protein here, or swap an atom there to make it patentable, and then get the analogue through human trials.
Drugs discovered using NIH or DOE grants are usually already patentable if they don't fail one of the other tests. But these only account for about 25% of new drugs; the remainder are genuinely discovered by drug companies. That doesn't mean that the drug companies don't benefit in other ways from public research - most of what we know about the mechanisms of disease and the biochemistry of individual proteins comes from academics. But there's a huge leap from "we know this protein causes cancer" to "we have a drug to stop cancer".
In any case, even when academics do find a promising drug, the human trials are usually still vastly more expensive than the basic research. And in many cases there is still a great deal of trial and error necessary to come up with a drug that has the desired functional and pharmacological properties.
Right now the NIH does the early research, but doesn't spend the boatload of money needed to actually test the stuff they come up with. They usually abandon research when it gets to the point where this article is at.
Not really - what actually happens is typically that the universities patent the discovery and license it to a company which performs the development work. Which does have an element of "socialize the risk, privatize the profits", except that the expense of the product development is typically far more than the basic research done with public funding, and the failure rate is dismal. So at least if a drug candidate bombs in clinical trials, most of the money that just got flushed down the toilet belongs to pharma company shareholders or VCs, and not the taxpaying public. The NIH and the universities don't have much incentive to do this themselves, especially if they can be hauled before Congress and asked to account for the money.
I'm not saying that trial and error has no place in science, but medical science seems to be based too much on trying stuff and doing statistics rather than on understanding things first.
That's because we still understand shockingly little about biological systems - I think around half of human genes remain uncharacterized. This means that even if we can say with certainty that "mutated protein X causes disease Y", and therefore inhibiting the mutant protein is a promising approach to curing the disease, we have no way of knowing what will happen when we introduce our candidate drug into the actual organism. We know some basic rules, e.g. certain chemical structures are more amenable to entering cells than others, and we can make educated guesses, for example protein kinase inhibitors tend to be non-specific, but there is still a huge amount of uncertainty. Eliminating the guesswork will take decades of painfully slow basic research. Should we simply not try to treat these diseases until we can comprehensively model the entire system and predict how drug candidates will work?
trade secrets, which means that the discovery is not made available to all
Which is extraordinarily difficult for drugs, because everyone will simply buy a bunch of their competitors' pills, and figure out exactly what they're made of down to atomic detail. A typical university chemistry lab could do this in a few days. There are some aspects that are more tricky - the exact packaging is sometimes key to getting the drug absorbed by the body at the desired rate, and the chemical synthesis can be messy - but figuring these out is still way cheaper than coming up with your own drug.
Among these is the drug/pharmaceutical industry because only they can afford the R&D needed to make important things happen.
It's less the "R" than the "D". The government spends large amounts on basic research, including some expenses which drug companies, at least individually, can't afford. For instance, the US Department of Energy builds massive X-ray generators called synchrotrons, which are used by biologists to determine the structures of proteins, and drug companies make heavy use of these to investigate drug candidates. A new state-of-the-art synchrotron is around $1 billion. Naturally, drug companies pay the DOE to use these facilities without revealing their data (which is a requirement of use for everyone else). It's a situation that just about everyone is happy with. (Also, more generally, the government funds studies which increase our knowledge and understanding of biological systems, which can inform drug development even though they usually don't magically lead to new therapies.)
What the government can't or doesn't want to spend money on is the laborious process of taking a drug candidate from the lab bench to the consumer. I made a longer post about this above, but the short version is that it typically costs hundreds of millions of dollars, and most drug candidates don't even make it that far. The government would naturally prefer not to spend huge amounts of taxpayer money on projects that have an exceptionally high risk of failure, and academic scientists are reluctant to work on such projects in general, and especially without being well-compensated. So the "development" phase is farmed out to companies.
It is an imperfect process, and I think much could be done to improve the system (I am on the record as supporting the repeal of the Bayh-Dole Act), but right now I do not see any magical alternatives. Maybe with another 20 years' improvement in biotechnology and automation we'll do things differently; I certainly hope so.
Don't even bother arguing that profit motivates progress. The overwhelming majority of researchers and engineers are motivated by the joy of success, not crushing the opposition and getting filthy rich.
The problem with drug development is that the huge majority of efforts end in failure, and depending on how far along the pipeline the drugs are, these failures can be painfully expensive. Truth is, it's not really all that difficult or costly to come up with a nanomolar inhibitor for some key regulatory protein involved in heart disease or cancer. But that doesn't mean you've cured the disease. You might synthesize a molecule that completely shuts down your target protein, and start doing in-vivo studies. Here's where the bad shit starts: maybe your compound can't get past the cell membrane. Or maybe it gets shunted to the liver and immediately degraded - unless it fucks up the liver, of course (which is one of the major reasons for negative drug interactions, and why many medications have labels saying "do not consume alcohol"). Or let's say it gets to exactly where it needs to be, but it also binds with high affinity to seven other proteins, three of which we know nothing about, and all of these are essential for other processes. So you come in the next morning, and half of your test mice are belly-up, another quarter are bleeding rectally, and the remainder will promptly croak if you feed them Tylenol.
If you're really unlucky, your drug passes the animal models easily, and makes it into clinical trials with actual sick humans. If you're really, really unlucky, you make it all the way to Phase III trials, with thousands of patients, and only then do you discover that either a) your drug doesn't really work as well as it needs to, or b) a large fraction of patients manifest severe side effects over time, or c) both. At this point the cumulative expense of developing this candidate may be hundreds of millions of dollars. And companies fail at this stage all the time; it's always big news when this happens, and their market capitalization takes it in the ass.
Now, I don't feel terribly sympathetic for drug companies as a whole; they do some pretty sleazy shit, and have paid some well-deserved fines for their malfeasance. But I would find it incredibly depressing to sink years of my life (and millions of dollars of investor money) into a promising clinical candidate, only to have it fail just shy of the endpoint. I'm an academic scientist, and this is one of the reasons why I've stayed in academia so long, for all of its faults. I get paid less, but I don't have to devote myself to narrowly-scoped projects which have a depressingly high risk of failure. If I had to start doing drug discovery as part of some newly nationalized research plan, I would leave without hesitation. Sorry, but if you want me to spend my life doing something that mind-numbing and soul-crushing, you'd fucking better pay me decently for it. The overwhelming majority of people who know anything about drug discovery will tell you the same thing.
PS #1: Please, explain how the extraordinary improvement in computer hardware since WWII was encouraged by lack of patents. Another counter-example: genome sequencing technology has become orders of magnitude faster in the last dozen or so years. (No, I'm not arguing that we should patent everything; I'm still against patents on software and gene sequences.)
PS #2: Don't assume that scientists aren't motivated by crushing the opposition. That's part of the joy of success, and while we may not be doing it for the money, our egos are at least as big as everyone else's.
For phones, sure, we are reasonably close to hitting diminishing returns. But when it comes to Google Glass, the Oculus Rift, or augmented and virtual reality in general, we are nowhere near hitting it. It will probably take 20K resolution on 2-inch screens before we hit diminishing returns there. Nvidia also just demoed a few nifty light field displays that would need even more resolution than a classical 2D display, so that's out even further.
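As a rough back-of-envelope check on the "nowhere near diminishing returns" claim for VR: assuming human visual acuity of about 60 pixels per degree (roughly one arcminute per pixel) and a headset field of view around 110 by 100 degrees per eye (both figures are my ballpark assumptions, not from the comment above), the required per-eye resolution comes out far beyond any current panel:

```python
# Back-of-envelope estimate of the per-eye resolution a VR headset
# would need to match human visual acuity. The acuity (~60 px/degree)
# and field of view (110 x 100 degrees) are assumed ballpark figures.

ACUITY_PX_PER_DEG = 60   # ~1 arcminute per pixel, "retinal" density
FOV_H_DEG = 110          # horizontal field of view per eye
FOV_V_DEG = 100          # vertical field of view per eye

px_h = ACUITY_PX_PER_DEG * FOV_H_DEG   # horizontal pixels needed: 6600
px_v = ACUITY_PX_PER_DEG * FOV_V_DEG   # vertical pixels needed: 6000

megapixels = px_h * px_v / 1e6
print(f"Needed per eye: {px_h} x {px_v} (~{megapixels:.0f} MP)")
```

Under these assumptions a single eye needs roughly 40 megapixels, versus about 1 megapixel per eye on early Oculus Rift hardware, so the gap is well over an order of magnitude before even considering light field displays.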
Also, let's not forget about our good old monitors at home: 4K monitors are finally back on the market, but still far from any kind of mass market penetration, and when it comes to big curved monitors, you'd probably need 8K or 16K before you are done.
You do have to take care of a whole lot of compatibility issues when you want to deploy something that should run on IE6, but still, even then, the actual deployment of an HTML app is vastly easier than trying to deploy a regular application across as many platforms as support HTML.
Try to imagine the web wouldn't run in your web browser, but would instead come in the form of
I think "learning from the old masters" really isn't the problem. It's not that we don't have lots of smart people writing software. I think the core problem is that we haven't figured out how to do upgrades and backward compatibility properly, which the old masters haven't figured out either. You can go and develop an HTML replacement that is better and faster, sure, but now try to deploy it. Not only do you have to update billions of devices, you also have to update millions of servers. Good luck with that. It's basically impossible and that's why nobody is even trying it.
If software is to improve in the long run, we have to figure out how to make it not take 10 years to add a new function to the C++ standard. So far we simply haven't. The need for backward compatibility and the slowness of deploying new software slows everything to a crawl.
Someone else here reminded me that Manning actually delivered these documents to others, who WERE supposed to try to separate that out. But somebody goofed. So I'm not sure that can honestly be blamed on Manning, who actually did make an effort to expose wrongdoing while not releasing those other things to the public.
That's kind of a huge abdication of responsibility on his part, don't you think? Ultimately Manning was the person responsible for leaking classified information - it was his decision alone, and only he had the necessary access. If he really thought that the public would benefit from some of the material he released, it was his duty to separate it out.
I still think this points to naivete rather than malice, and I certainly don't buy the argument that Manning aided his enemies, which would criminalize just about any action which simply makes the US look bad. But I still find Manning's behavior shockingly irresponsible and somewhat dangerous. If revealing US misdeeds damaged our national interests, that's our problem, not his, and we obviously need to clean up our act. However, there is an awful lot of sensitive information which the government is quite right to keep secret, not because it hides evidence of their perfidy, but because leaking it simply creates messes. Stuff like which foreign nationals are (legally) cooperating with us, which foreign officials are problematic to deal with, what the political situation in a country is like, etc. I'm not convinced that it actually did as much harm as some have suggested - if people really did get killed as a result of the leaks, I'm sure the prosecution would have made a big deal about it - but we simply can't afford to let this kind of irresponsibility go entirely unpunished. Time served, a criminal record, and a dishonorable discharge seem like enough to me, however.
(On the other hand, from what I've read about Edward Snowden, I'd have a difficult time defending his prosecution under any circumstances, although I'm not very impressed that he sought refuge with the PRC and Putin.)
Many of the documents made it very clear that our government was working covertly in ways that were not necessarily in the actual interest of The People of the United States. I applaud those revelations.
I agree, but keep in mind that many of the documents were simply things we didn't want the entire world to know, but didn't actually indicate any wrongdoing. Like the cables in which diplomatic staff characterized the flaws of some of the people we have no choice but to deal with (unless, of course, you believe that the US should not even have diplomatic relations with countries under less-than-ideal government). This is an essential function of their job, and there was no greater purpose to be served by releasing those documents, other than further embarrassing the US government. So while I'm glad Manning released the video of a gunship mowing down civilians, I still think he needs to go to jail for indiscriminately spreading as many secrets as he could get his hands on, even the harmless ones. (20 years seems a little excessive, though.)
Even in college, calling my Professors "Dr. Whatever" was exceptionally rare and I went to an Ivy League school where you'd think they'd insist on their proper titles.
Weird, I always used their titles in class, also at an Ivy, and it wasn't that long ago (less than 15 years). Of course once I started doing research, I figured out after a couple of days that it was okay for a lowly undergrad to address the professor as "Mark". Since I work with mostly PhDs, usually the only time we're addressed as "Dr. So-and-so" is when someone is being sarcastic; I actually get uncomfortable when someone uses the title seriously.
*That* should teach her a lesson and send a strong signal.
It's still not as bad as Carly Fiorina driving HP's stock price down 50% and firing 7000 people, and getting let go with a $20M severance package, and still being considered a serious candidate for California senator. That's the biggest difference between the rest of us and the 0.01%: when we fuck up, we get fired with cause and are economic roadkill, and seriously risk being impoverished. When they fuck up, they lose access to the corporate jet and may have to postpone buying the third home in Pebble Beach. I honestly wouldn't have any problem with income inequality if we could occasionally see failed CEOs like Dick Fuld reduced to standing in line at soup kitchens like all of the other "takers".
"Trust me. I know what I'm doing." -- Sledge Hammer