I don't think the business will become its own IT department. I do know that we in the business are already hiring our own consultants today to provide the IT support that we actually need.
The NSA's objections to the publication of "The Codebreakers" wouldn't, by any chance, refer to some less-than-flattering comments on the performance of this semi-mythical organization? Such as: "The bitter irony of all that is that, despite all the precautions, NSA has been involved in security breaches more spectacular and more damaging to the free world than any others in the Cold War except those of the atomic spies."
And on NSA's relations with Congress: "This stratagem plays upon Congress' fear and ignorance," continuing a little further down with "It is much easier not to bother with checking up on NSA. But it must be done. Otherwise the nation jeopardizes some of the very freedom that NSA exists to preserve." I concede my quotes are from the 1996 revised edition, not the original 1967 edition. Still, it is hard to believe that anything in The Codebreakers can have been a technical risk to national security in 1967. A political risk to the people in the intelligence community, perhaps.
But here is a fascinating thought: American cryptography owes a lot to Elizabeth Wells Gallup, a high school principal from Michigan, who had "discovered" a secret message in the works of Shakespeare. A secret message from Bacon, of course. Mrs. Gallup's theory attracted the attention of the wealthy George Fabyan; Fabyan hired Elizebeth Smith to help investigate it; and Elizebeth attracted (and married) William Friedman. Without that unlikely chain of events, William Friedman would never have entered cryptology, and the course of history could have been very different -- including the course of a few wars.
The paper on confocal system performance by Zucker and Price (Cytometry 44, 273-294, 2001) has a few useful data points and references, although it does not refer to two-photon systems. Maybe a later publication by them does.
I can't comment on the figures for two-photon microscopes, but on single-photon systems the ratio of the maximal power output from the objective to the input beam power tends to be depressingly low, often much less than 0.1. There are fairly high losses in the microscope optics, especially in the highly corrected high-NA objectives used for confocal imaging. There is also the problem of trying to achieve fairly homogeneous illumination of the entrance pupil of the objective with a more-or-less Gaussian laser beam, and most solutions are wasteful of power.
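As a rough back-of-the-envelope illustration (the individual transmission values below are invented but plausible assumptions, not measured figures), the overall throughput is just the product of the individual losses, which is why it so easily ends up below 0.1:

# Rough throughput estimate for a confocal excitation path.
# All transmission values are illustrative assumptions, not measurements.
losses = {
    "fiber_or_AOTF_coupling": 0.70,
    "scan_and_relay_optics": 0.60,
    "dichroic_and_filters": 0.85,
    "objective_transmission": 0.55,  # high-NA objective at the excitation wavelength
    "pupil_overfill": 0.35,          # Gaussian beam expanded to fill the pupil evenly
}

throughput = 1.0
for element, transmission in losses.items():
    throughput *= transmission

print(f"estimated throughput: {throughput:.2f}")  # about 0.07 with these numbers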
A widefield deconvolution system doesn't really need a laser. Probably a lamp coupled by a liquid light guide is the better option for such a system. The excitation is not monochromatic but the illumination of the field is excellent.
Prices for this class of laboratory equipment are rarely put on paper, because you are expected to haggle. There usually is considerable margin for negotiation. Sometimes you can beat them down by as much as a third of the list price, although 10 to 20% is more common.
Why do you want to buy a Tsunami? It's a good laser system, but unless you have a specialization in optics or physics and are willing to spend a lot of time on tuning the system, it is better to spend your money on a laser with automatic tuning (a Mai Tai, in Newport's case). Performance is a bit lower than that of a well-tuned Tsunami, but certainly good enough for most purposes, and the single box is more convenient than a Tsunami plus an external pump laser.
Anyway, femtosecond pulsed laser systems are somewhere in the quarter-million range, but the small solid-state lasers in most confocal microscopes can be had for an order of magnitude less.
A bit over $50,000 will buy you a good quality inverted research microscope with a basic set of options for widefield fluorescence imaging, with a sensitive camera and a few objective lenses. If you want to add options, $1,000 will pay for an extra set of optical filters, and roughly $5,000 for a single high-resolution objective. By adding more options you can spend $100,000 on a fairly standard microscope.
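Just to make the arithmetic explicit, here is a toy configuration using the ballpark figures above (illustrative only; real prices are quoted and negotiated per configuration):

# Toy cost build-up with the rough ballpark figures mentioned above.
base_microscope = 52_000   # inverted research stand, camera, basic objectives
filter_set = 1_000         # per extra set of optical filters
hr_objective = 5_000       # per high-resolution objective

config = base_microscope + 4 * filter_set + 3 * hr_objective
print(f"example configuration: ${config:,}")  # $71,000, before the fancier options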
Around $250,000 is the ballpark for a confocal system, depending on exchange rate fluctuations. (Most manufacturers are German or Japanese, still reflecting the traditional locations of the optical industry.) You would pay two or three times as much if you equip it for two-photon imaging or fancy techniques such as FCS or FLIM. Deconvolution systems are a bit cheaper (less hardware, more software), and they are also preferred for long-term experiments because they are gentler on the cells under study.
These are expensive tools, and besides, most biologists don't understand microscopy nearly well enough to be let loose on it without assistance and supervision. The systems also need a suitable location (vibration-free, stable temperature, moderate darkness) to work well. So at universities these systems tend to sit in core facilities with a dedicated staff that keeps the instruments in shape, helps the users, and charges the departments back for their use.
In the long term, certainly yes: variants of the disease that are not recognized by the immune system or are resistant to existing treatments will spread again, even from a small core, perhaps one created by accidental mutations. Influenza, of course, manages to do so every year, demonstrating how efficient viruses can be at playing this game. Experience with bacteria and parasites (malaria) is also rather discouraging.
However, how fast that occurs depends on quite a number of factors, including the ability of those mutant viruses to replicate, infect, and cause disease, and of course their geographical spread. Characteristic of many treatment-resistant mutants of HIV is that they are less virulent, because their optimal functioning as a virus has been compromised. Resistance mutations are known for all existing antiviral drugs, and it is only a matter of time before these resistant viruses become more frequent, but meanwhile the drugs do increase the life expectancy of most patients by about thirty years. In practice, partial solutions are worth the effort. Imagine for a moment that you had a vaccine that would largely eliminate HIV-1 subtype C, which accounts for over half the infections in the world's poorest countries. That might be worth having, even if it is only a "short-term" solution and the virus will eventually reconquer the lost terrain.
Unfortunately, in this case it is not clear from the information that I can access what the practical implications are of the 10% of viruses that escape the potential treatment. The summary mentions 90% of HIV-1 strains, but the abstract in Science actually claims 90% of isolates. I can't access the article from here, but the authors probably identified the sequence of the binding site for their antibody and then looked at how frequently it occurs in a database of known HIV sequences, probably the Los Alamos database. That number doesn't tell you much about the viruses in which it does not occur.
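If my guess about their method is right, the underlying computation is conceptually as simple as the sketch below (the file name and the motif are purely hypothetical placeholders; a real analysis would at least have to handle alignment and sequence variation):

# Hypothetical sketch: count how often a putative antibody binding-site
# motif occurs in a set of HIV-1 envelope sequences (FASTA format).
# "env_sequences.fasta" and the motif are made-up placeholders.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

motif = "NWFDIT"  # hypothetical binding-site motif, for illustration only
records = list(read_fasta("env_sequences.fasta"))
hits = sum(1 for _, seq in records if motif in seq)
print(f"{hits}/{len(records)} isolates ({hits / len(records):.0%}) contain the motif")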
Surely half a monologue is a hemilogue?
If one must invent neologisms, then at least it should be done properly. It's the only thing people are going to remember from this 'research'.
Curated Computing sounds like a bad idea to me, because those third parties are making decisions without actually knowing my needs and habits as a user. Therefore less choice is very likely to lead to less relevance as well. This is the kind of computing you get in a big company where a central IT department sets policies and standards for everything, and it generally drives people who try to develop something new or show some creativity into raging fury -- even if the choices being made for you aren't braindead.
I think in the long term devices such as the iPad are only going to be a success if they can be personal enough. In theory a more convenient model could be one in which the system learns from my behavior as a user and adapts accordingly. However, so far this tends to be based on a frequency-of-use approach, which is rather limiting. It isn't much help to the less skilled user, who might never be able to find the right options. And there are potential privacy considerations if this is focused on monitoring the behavior of a single person.
A better mechanism could be a kind of 'Evolved Computing' that works like this: I make myself a member of a peer group, based on common activities and common user interface preferences. I get a software package which may be inherently flexible but complex, perhaps too complex for daily needs. Monitoring the group statistics allows the system's managers to de-emphasize some features, and to highlight or offer others which might be attractive to this group. As a user, I can be presented with tools that other members of my peer group have found useful, and can adapt or reject them. Another group may have another set of preferences, of course, but a particular group is offered the relevant subset in its user interface.
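A minimal sketch of the group-statistics part, assuming the clients simply report which features their users actually invoke (all names and data below are hypothetical):

# Minimal sketch of the 'Evolved Computing' group statistics (toy data,
# hypothetical feature names). Members of a peer group report which
# features they use; the aggregate ranking decides what gets highlighted,
# offered, or de-emphasized in that group's user interface.
from collections import Counter

peer_group_usage = {
    "alice": ["crop", "export_pdf", "crop", "annotate"],
    "bob":   ["crop", "annotate", "annotate", "batch_rename"],
    "carol": ["export_pdf", "crop", "annotate"],
}

counts = Counter(feature for log in peer_group_usage.values() for feature in log)
ranked = [feature for feature, _ in counts.most_common()]

highlight = ranked[:2]   # surfaced prominently
offer = ranked[2:4]      # suggested, one click away
hide = ranked[4:]        # de-emphasized, still reachable on request

print("highlight:", highlight)
print("offer:", offer)
print("hide:", hide)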
It's nothing really new -- it's traditional user feedback, and the selection mechanism for iPod apps and other extension packages. But it could be done more smoothly and intelligently.
Yes, or of over-engineering...
Most modern structures are designed to have a finite life plus some safety margin. That's not just a trick to sell more cell phones and washing machines; it is normal engineering practice to get the right balance of cost. The FAA would even refuse to approve a wing design for an aircraft that did not have a predicted but finite life. If the design life is exceeded, that can be regarded as a bonus, but it is often also considered a sign that the engineers made it too expensive / heavy / complicated.
I guess that a life of 2,500 days for a design goal of 90 days can be justified on the grounds that, given the cost of getting it there, a premature failure would have been a great disappointment. On the other hand, maybe we could have added some useful extra sensor to the Rover, reducing its lifespan to "only" 1,000 days but providing it with a means to avoid sand traps...
I think it was Edmund Burke who pointed out that "It isn't always wise to do what you have a right to do". However, the common law legal system doesn't operate that way. It tends to assume that if you don't exercise your rights, you implicitly abandon them.
This is what makes companies take legal action in such cases, even if they might have sympathy for the project itself. If they do not, their inaction might be used as a precedent next time, when somebody has a project that is less beneficial to Nintendo.
So don't blame them -- write to your Congressman instead.
There is more to scientific serendipity than just luck. A degree of luck is certainly involved, as by definition the process involves observing something one did not plan to see. However, that is why scientists do research: If we only ever saw what we expected to see, then why bother?
But there is an important additional ingredient to it, and that is being able to actually absorb the unexpected and to think of a reasonable explanation for it. The ability to give unexpected data a rational interpretation is crucial, because this is what protects good scientists from the cognitive dissonance that makes others close their eyes to the unexpected. Without interpretation, a surprising observation is just that; it may be a coincidence or an experimental error, and it is often thrown out.
The most famous example is Alfred Wegener and his theory of continental drift. Mainstream science has been criticised a lot for its scepticism about Wegener's ideas, but Wegener failed to propose a credible mechanism for the motion of the continents -- the concept of plate tectonics arrived fifty years later. Without an explanation, the observation didn't convince, and Wegener was long dead when it was recognized that his intuitive idea had been right.
The reverse is also true: there is a real danger in theory without experimental observation. This is illustrated by the case of the mysterious "N-Rays" of a patriotic French scientist, who "discovered" them as a counterweight to the German Roentgen's discovery of X-Rays. The N-rays did not exist, but an otherwise very capable scientist proved highly capable of seeing just what he wanted to see.
Yes for local systems, and that is using a wide definition of "developers" -- not just the professional software developers we employ, but also the people in "informatics" who are basically scientists who build or evaluate new software tools for data analysis, and the engineers who write software to control automated systems.
As far as I understand, this is against our own Very-Large-Company policy, and the threat of having administrator rights taken away hovers constantly over our heads (it is now said to be a central IT goal by the end of 2010), but fortunately our local IT managers do understand that we need them. It wouldn't be worth the massive overhead of routing all requests for installation of new software across three different continents (I'm not kidding), and some people with difficult-to-replace skills would simply walk out if this was taken away from them.
There are always dire predictions of the disasters that will result from granting mere end-users administrator rights, but incidents are rare and usually easily resolved. Frankly, while it is true enough that many of the people with admin rights don't know that much about system administration, they actually know a lot more than the average helpdesk jockey who is in and out within a year, but who can apparently be trusted with the power to mess things up thoroughly...
We don't have admin rights for servers on a personal basis, but on servers dedicated to specific purposes some of us have access to service accounts with administrator rights. If the server provides services to multiple people, the IT people are usually unwilling to grant that, and I find that understandable even if I don't entirely agree.
Yes, but that presupposes the ability to invent a new theory, because scientists are very unwilling to discard a theory if they have no alternative. After all, having no theoretical framework at all is very uncomfortable.
And there I do think that Dunbar makes a perfectly valid point: any group of specialists who are all of the same mind is very bad at thinking "out of the box" and inventing a new theory. To be able to do that, you need a healthy mixture of different backgrounds, and enough dissent to stimulate the debate. Unfortunately scientists often assemble in excessively homogeneous groups, sometimes on their own initiative ("old boys network") and sometimes through deliberate but foolish policy ("center of excellence").
This is part of the reason why publishing and attending scientific congresses is a vital part of scientific activity. I've often noticed that in industry this is regarded as a kind of bonus, a perk that allows scientists to travel to nice places. But it is absolutely essential, even if that means talking things through with direct competitors.
"Truth never comes into the world but like a bastard, to the ignominy of him that brought her birth." -- Milton