
Comment Re:I wonder if they will be charging the same amou (Score 5, Informative) 26

It's a small wheeled device with an opening just about big enough to get your head into.

I was wondering when something like this was going to come out. Portable CT scanners for ICUs were already developed about 10 years ago, so portable head MRI units were an obvious next step. Additionally, scaling down a CT scanner doesn't help all that much in terms of cost, whereas for MRI the cost savings are vast.

There are major disadvantages with this machine, however, and you wouldn't want to use it if you had access to a more conventional MRI scanner. It is very low field due to the use of permanent magnets - this cuts your signal-to-noise ratio by an order of magnitude, and takes away a huge amount of flexibility in the resolution/speed trade-off. You could do a minimal head exam in 30-40 minutes, whereas a high-quality superconducting system could do a full neuro protocol at double the resolution with far higher diagnostic performance in the same time, or do the minimal protocol at better resolution and better SNR in 10 minutes. There is also the issue of size - the system is extremely small, and far more claustrophobia-inducing than a conventional scanner (for head scans). The low field strength also translates into terrible performance for detection of small blood clots (like epidural or subdural hematomas) - the visibility of the blood depends on its magnetic properties and their interaction with the scanner's magnetic field, and a stronger field drastically increases this effect, over and above giving drastically higher SNR.

However, for the intended use case - neuro ICU, where patients are too unwell to be transported to the MRI department - this is an incredible development. You can diagnose stroke with near 100% accuracy without leaving the ward. You can also diagnose other causes of sudden deterioration, such as hydrocephalus or large epidural/subdural hematomas. By avoiding the need for a complex transfer (potentially with extensive support equipment, such as ventilators and monitoring), you can shorten the time to diagnosis, and therefore treat the cause of the deterioration faster.

This unit does not compete as a replacement for a general purpose superconducting system for non-urgent cases, but it may have a role where a superconducting MRI system is unaffordable, such as in developing countries, where the cost of even a basic MRI scan can exceed a month's wages. Note, though, that due to its small size it will not be able to fit body parts much larger than a head - so it would be limited to the head and extremities such as wrists, hands, feet and knees, and possibly the neck.

In some sites, portable CT is being used for a similar purpose - but even the best CT offers poor anatomical detail compared to even low-field MRI. There are also significant radiation protection concerns when portable CT units are used, for both staff and other patients, as ICU rooms lack the lead-lined walls of a proper CT suite. This MRI unit avoids the radiation concerns.

In MRI, the major costs are the magnet - typically a superconducting system of 1.5 T or 3 T: a large, complex and very expensive device requiring substantial energy and support plant - and the gradient set (a set of 3 orthogonal electromagnets which dynamically distort the main magnetic field in a precisely controllable way). The cost of a superconducting magnet scales roughly with the radius to the 4th power, and the cost of the gradient equipment scales roughly with the radius to the 5th power. The move from 60 cm bore MRI systems (which are limiting in cases of claustrophobia and obesity) to 70 cm systems (which are now standard) has therefore come with a substantial rise in cost.
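
As a rough sanity check on those scaling laws (my own arithmetic, simply plugging the 60 cm and 70 cm bores into the exponents quoted above):

```python
# Rough scaling laws quoted above: magnet cost ~ r^4, gradient cost ~ r^5.
# The 60 cm -> 70 cm bore-diameter ratio equals the radius ratio.
ratio = 0.70 / 0.60
print(f"magnet cost factor:   ~{ratio ** 4:.2f}x")   # ~1.85x
print(f"gradient cost factor: ~{ratio ** 5:.2f}x")   # ~2.16x
```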

The change in diameter also requires a significant increase in support plant. One manufacturer is offering a refurbishment service for their old scanners which replaces the gradient system with a much thinner one, increasing the bore from 60 to 70 cm while simultaneously increasing the performance of the gradient set. This comes at a substantial price in terms of building services - a 150 kW 3-phase electricity supply is no longer sufficient, as the new gradient amplifiers (capable of delivering 2.7 MW pulses) need a 300 kW supply, together with a concomitant upgrade in chilled water and HVAC. This has proven a significant barrier for sites with an end-of-life system looking to upgrade to the current specification, only to find that they cannot accommodate the required services.

Conversely, when down-scaling the magnet (in particular, to low-cost permanent magnets), huge cost reductions can be achieved (think replacing a $1 million superconducting magnet with a $10k rare-earth magnet), and down-scaling the gradient unit similarly yields huge savings due to the reduction in power electronics required (you no longer need a $500k system capable of handling multi-MW pulses when a $10k system capable of delivering 10 kW pulses is all that's needed).

There are of course new safety concerns with this device. The low field and small size of the magnet should mean that the hazard zone is much smaller than around existing MRI scanners, but there is a new risk in that the hazard zone moves with the scanner - so this will require new MRI safety practices.

There are also safety issues regarding metallic or electronic implants. In general, for things like screws and plates (which aren't really an issue anyway), a low field poses fewer problems than high-field systems. However, electronic devices like pacemakers and neurostimulators pose a more difficult problem. For these devices, the major hazard is EMI from the scanner (either from its RF transmitter or the gradient system). For the RF transmitter, currents may be induced in implanted wires, but the amount of current depends on the geometry of the wire as an antenna, and whether it is resonant at the scanner's frequency, which in turn depends on the magnetic field strength. There is an industry convention among medical device developers to qualify their devices only for compatibility with 64 MHz RF (corresponding to 1.5 T superconducting systems), so a different frequency may pose unrecognised hazards if it hits a resonant mode of the implanted device. The mitigating factor is that the small size of the scanner minimizes the amount of the body exposed to RF (and to gradient-related EMI). So even though a pacemaker may be labelled as qualified only for 1.5 T MRI - making a scan on this type of low-field system technically against the instructions for use - the fact that the chest would be well outside the scanner may reduce the risk. That said, this is difficult to quantify without extensive numerical RF simulations and/or physical experiments with an instrumented pacemaker implanted in a human-shaped gel block.
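
As a rough illustration of where the 64 MHz figure comes from: the proton resonance (Larmor) frequency scales linearly with field strength, so each field strength implies a different RF frequency. A quick sketch (the 0.064 T entry is my assumption of a representative field for this class of permanent-magnet scanner, not a figure from the article):

```python
# Proton Larmor frequency: f = gamma_bar * B0, with gamma_bar ~= 42.58 MHz/T
GAMMA_BAR_MHZ_PER_T = 42.58

for b0_tesla in (0.064, 0.5, 1.5, 3.0):   # 0.064 T is an assumed low-field example
    print(f"{b0_tesla:>5} T  ->  {GAMMA_BAR_MHZ_PER_T * b0_tesla:6.2f} MHz RF")
# 1.5 T gives ~63.9 MHz, which is why implants are qualified against "64 MHz" systems
```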

Comment Re:Extraordinary claims require ... (Score 1) 94

The only information publicly available is the GraceDB page for the event (designated S200114f). The LIGO/Virgo policy is to publish only limited information in the early alerts (e.g. which pipeline found the event, what class of template was matched, where in the sky the source was located), which is of key interest to other astronomers. The remaining data is kept confidential pending internal review, and formal peer review before publication.

However, one parameter that is published is the duration of the signal of interest. This parameter can be opaque: the template-matching pipeline works on 1-second templates, so the matched duration is usually exactly 1 s, even though the actual event is usually shorter (a few hundred milliseconds).

In this case, the duration was only about 13 ms. This is far shorter than could be explained by a typical binary coalescence. Some informal discussion speculates that it could be a core collapse supernova, but there are some commenters pointing out that a supernova, while fast, is almost certainly not that fast. Further, preliminary optical and neutrino observations don't really support the idea of a new supernova consistent with the GW detection.

As further information isn't available, there isn't really anything to go on. The lack of any smoking gun follow-up observations is making some people think it was a coincidental combination of glitches rather than a real event. However, the event has not been retracted after plenty of time for human review inside the research group.

Comment Re:Extraordinary claims require ... (Score 3, Informative) 94

The primary reference for this low latency alert is: https://gracedb.ligo.org/super... Preliminary reports of GW detection from the low-latency pipelines are published online ( https://gracedb.ligo.org/lates... ), and via e-mail circulars ( https://gcn.gsfc.nasa.gov/gcn3... )

The normal low-latency search process at the LIGO/Virgo observatories is to attempt to match received signals against a library of pre-calculated models. In other words, the emission of gravitational waves from a variety of binary inspirals is simulated, and the results of the simulations are used as templates. This gives good sensitivity for events which match the models, and also identifies a best-fit template which gives an indication of the nature of the event (i.e. the masses, spins and distance of the merger).
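
For anyone wondering what "matching against a template" means mechanically, here is a toy whitened matched filter in NumPy. This is a sketch of the basic idea only (white noise, one template, a made-up chirp), nothing like the real low-latency pipelines:

```python
import numpy as np

def matched_filter_snr(data, template):
    """Correlate data against a unit-normalised template so the output behaves
    like an SNR statistic under unit-variance white noise (a toy assumption)."""
    template = template / np.sqrt(np.sum(template ** 2))
    return np.correlate(data, template, mode="valid")

fs = 4096                                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
template = np.sin(2 * np.pi * (30 + 200 * t) * t) * np.exp(-4 * (1 - t))   # toy "chirp"

rng = np.random.default_rng(1)
data = rng.normal(size=8 * fs)               # 8 s of white noise
data[2 * fs:2 * fs + len(template)] += 0.5 * template   # inject a weak signal at t = 2 s

snr = matched_filter_snr(data, template)
print(f"peak SNR {snr.max():.1f} at t = {snr.argmax() / fs:.2f} s")   # recovers the injection
```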

Recognising that this type of analysis would fail to detect unexpected events, a second search process operates using a different method - in this case, by detecting coherent wave bursts, i.e. the near-simultaneous detection of the same arbitrary waveform by 2 or 3 detectors, meeting some sort of threshold (likely based on measured noise levels/characteristics in the detectors and a statistical model giving an estimate of the false positive rate).

Until now, all GW events had been detected by the template-matching pipeline. This event was detected only by the coherent wave burst pipeline, without triggering the template-matching pipeline. Interestingly, the direction estimates from the phase/polarization differences between the sites are extremely narrow, which may indicate a very high SNR measurement at all 3 detectors (i.e. a particularly strong event).

Note that the search process intended for formal scientific publication uses a much more complex set of off-line analysis and optimisation pipelines. The low latency alerts are intended for other astronomers who wish to collaborate on multi-messenger observations (e.g. gamma ray, visible light) who require a target ASAP. If you look through the various archives, you will see a lot of the preliminary low-latency alerts are retracted some time later after further analysis suggests a glitch or terrestrial event.

Comment Re:Seriously? (Score 3, Informative) 244

So... unless they have figured out the last piece of security (what to do without electric system...) nothing new but just "scaled down" (do we hear here any news about "mini" cars?)

This was figured out a decade ago.

Reactor designs like the Westinghouse AP1000, GE-Hitachi ESBWR and China General HPR-1000 are all resistant to a complete loss of the electrical system. These systems all provide 72 hours of assured cooling in the event of a complete loss of AC electricity. They also have mitigations in case of a complete loss of battery/UPS electricity, as well as much higher levels of protection of the batteries than at older sites.

However, smaller reactors are much easier to cool than larger ones - which makes it easier to provide longer grace times and reduce the requirement for manual intervention. For example, the NuScale reactor (60 MW, compared to 1000-1600 MW for other modern PWRs) offers an indefinite grace period following almost any conceivable accident scenario. Total loss of electricity and UPS? Cooling automatically activates and operates indefinitely without further intervention. Reactor coolant leak? A significant drop in coolant level is impossible because the containment fills up, and the containment is self-cooling indefinitely without further intervention.

Comment Just a national pager network (Score 1) 29

From the appearance of the message decode, this is just someone tuned into broadcasts from the PageOne network, the only remaining commercial nationwide radio pager network in the UK. It is relatively little used, but as it runs an old, simple protocol, it is one of the first things people try to decode. In fact, when I purchased a $10 USB SDR, decoding pager transmissions was one of the first projects I attempted. After only a few hours, having never done any DSP programming before, I had a working decoder.
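
For the curious, the core of such a decoder is tiny. The sketch below assumes the common POCSAG paging protocol and an already FM-demodulated baseband signal with ideal timing; a real decoder also needs clock recovery, polarity detection and BCH error correction:

```python
import numpy as np

POCSAG_SYNC = 0x7CD215D8   # the standard 32-bit POCSAG frame sync codeword

def slice_bits(demod, fs, baud=1200):
    """Hard-slice an FM-demodulated signal into bits by sampling each bit centre."""
    samples_per_bit = fs / baud
    centres = (np.arange(int(len(demod) / samples_per_bit)) * samples_per_bit
               + samples_per_bit / 2).astype(int)
    return (demod[centres] > 0).astype(np.uint8)

def find_sync(bits):
    """Return bit offsets where the POCSAG sync codeword appears."""
    word, hits = 0, []
    for i, b in enumerate(bits):
        word = ((word << 1) | int(b)) & 0xFFFFFFFF
        if i >= 31 and word == POCSAG_SYNC:
            hits.append(i - 31)
    return hits
```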

Hospitals as a general rule don't use this type of pager - they will typically have their own internal system operating on a private licensed frequency, and usually using a modified protocol. As a general rule, national or wide area pagers are VHF, whereas hospital pagers tend to be UHF as they don't need the range or building penetration that public network pagers require. Typically, hospital pagers don't support text messaging - just numeric messaging, so there is little scope for leakage of private information (although some hospital systems include a voice transmission channel for the most urgent pages - e.g. code blue Ward 3)

Ambulance services, however, are quite heavy users of PageOne's national network. Certainly, when I left my decoder program running overnight, I had dozens of messages, including names, addresses and health information (e.g. "Male child. Not breathing. Think he's dead", or "Adult female. Cut herself. Saying she'll kill herself"). However, they are not the only users - security systems, automated equipment monitoring, and a bunch of uninterpretable machine-to-machine messages also showed up prominently in my scans.

Even in hospital, these old-technology pagers are not a good method of communication. They aren't reliable - there is no retransmission in case of error, and no acknowledgement from the receiver - so the sender has no idea if a message got through, or even if they dialled a valid number, or if the recipient is on site. They require the sender to stay by a particular phone until the callee responds, or they get bored. The callee needs to be able to find a phone (which are often in short supply in hospitals). And as hospital phones are busy, if a third party calls the phone used to send the page, they block the callee from responding. There is often no way to mark a page as urgent or routine - so it's common to have staff interrupted at critical times for a routine notification. As these systems are often old and proprietary, spares are difficult and expensive to get - prices for a replacement numeric pager can be as much as 250 UKP ($350).

I know that at several hospitals, many of the doctors prefer to use WhatsApp or similar for communication. It's faster, text based, self-contained (doesn't rely on an external phone system), reliable, provides read receipts, and works over a wider range than the hospital systems. For reasons of governance, WhatsApp is discouraged by hospital management, and there are several startups offering healthcare-compliant clones of WhatsApp messaging (i.e. tiered keying with a management recovery key, server-side archiving, audit trails, etc.). Some pager companies have got onto this bandwagon rather late, and are now selling "smart pagers" which are basically Android devices that send/receive text messages over wifi (optionally with a UHF page receiver as backup). As most hospitals now have widespread wifi for interfacing computers/devices for medical noting, this type of system is marketed as having almost no infrastructure cost to install. Of course, there are also other companies with much more integrated or ergonomic products (e.g. Vocera devices - which are lanyard-worn, and offer text messaging, voice messaging, voice control/Siri-type voice assistant, 2-way voice calling, etc., operating over an encrypted dedicated site network).

Comment Re:Upgraded? How? (Score 2) 53

https://www.nature.com/article...

The major technology currently being tuned at advanced LIGO (aLIGO) is "squeezed light" - manipulation of the quantum state of the light so as to decrease phase uncertainty. Due to the Heisenberg uncertainty principle, this comes at the cost of increased amplitude uncertainty. The phase uncertainty is an important source of noise at high frequencies (which are the more interesting ones), while the amplitude uncertainty manifests as low-frequency noise (via radiation pressure on the mirrors), so this is a reasonable trade-off.
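
To make the trade-off concrete, here is a minimal numeric sketch (my own, using a convention where the vacuum quadrature variances are 1/4) showing that squeezing lowers one quadrature's variance only by raising the other's, with the product pinned at the Heisenberg bound:

```python
import numpy as np

def quadrature_variances(r):
    """Squeezed and anti-squeezed quadrature variances for squeezing parameter r
    (vacuum variance = 1/4 in this convention)."""
    return 0.25 * np.exp(-2 * r), 0.25 * np.exp(2 * r)

for squeezing_db in (0, 3, 6, 10):
    r = squeezing_db * np.log(10) / 20       # convert dB of squeezing to r
    v_sq, v_anti = quadrature_variances(r)
    print(f"{squeezing_db:>2} dB: {v_sq:.4f} / {v_anti:.4f}   product = {v_sq * v_anti:.4f}")
# the product never drops below 1/16: better phase noise costs amplitude noise, and vice versa
```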

The next step being developed for future upgrades (termed aLIGO+) is frequency-dependent squeezing - the quadrature of the squeezing (i.e. its direction in the amplitude/phase plane) can be made to rotate with frequency. This has the extraordinary effect of reducing amplitude uncertainty at low frequencies while reducing phase uncertainty at high frequencies (i.e. improving noise at all frequencies).

http://www.apc.univ-paris7.fr/...

Comment Re:false advertising... (Score 1) 199

In the UK, it is prohibited to advertise to the general public, or offer for sale to the general public, a product or service which claims to treat cancer. It does not matter whether it works or not.

It is permissible to advertise such products and services in such a way that the only audience is members of parliament, doctors, nurses or pharmacists.

Comment Re:Facsimile. (Score 1) 163

Do you happen to know if that has changed? and if so, what the current reasoning is?

Because it is often difficult to get different parties to cooperate enough to get more modern IT systems to integrate.

For example, I recently worked on a project where several hospitals and primary care facilities outsourced their laboratory services to a 3rd party laboratory serving multiple hospitals and clinics. They wanted to connect their individual EHRs and orders/results systems to transmit orders and receive the results electronically instead of by paper. The major obstruction was the IT vendor supplying the laboratory.

The laboratory IT vendor insisted on an initial setup fee of $80k and a $10k annual licensing fee for each individual site interfacing with the lab. On top of that, it was impossible to get appropriate sign-off on transport security (TLS over the public internet was not considered adequate for regulatory compliance), so VPNs were required that were compatible with the laboratory server software's static-IP authentication. On top of that, there were additional software licensing costs at each site (generally more reasonable, typically around $10k setup and $1-2k per year maintenance), VPN setup and maintenance costs, etc.

One of the sites balked just at the cost of getting in networking consultants to design a compliant and workable solution. By the time we had a technically acceptable design and quotes for the setup, networking and licensing, the costs were considered unmanageable, so everyone decided to stay with paper and fax. We even toyed with the idea of replacing the laboratory IT system entirely, as it potentially would have been cheaper, but we decided the project risk was unacceptable given that the costs would only have been marginally lower.

Comment Re:What does color mean when there's no visible li (Score 1) 59

I think in this case the colours are based not simply on attenuation (density), but also on the absorption spectrum, i.e. atomic composition.

The use of multiple energy x-ray beams to determine atomic composition has been around for a long time. It's been a common feature of commercial CT scanners for 10 years. The idea would be that by making crude two-point measurements of the absorption spectrum, you could measure the quantity of an atom of interest - for example, if the patient had been given an iodine dye, the dual-energy technique could precisely quantify the iodine concentration in one acquisition more robustly than taking one scan before the dye, and one after and subtracting. Or, in the case of certain diseases like kidney stones, by measuring the calcium concentration in the stone, you could confidently start a particular treatment, without actually needing to wait to collect a stone for lab analysis.
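
Mechanically, that two-point (dual-energy) measurement boils down to solving a small linear system per voxel: the measured attenuation at each tube energy is modelled as a mix of two basis materials. A toy sketch with invented basis numbers (not real attenuation coefficients):

```python
import numpy as np

# Rows: low-kVp and high-kVp acquisitions; columns: [mu_water, mu_iodine].
# These values are made up purely to illustrate the decomposition.
MU_BASIS = np.array([[0.20, 0.60],
                     [0.18, 0.30]])

def decompose(mu_low, mu_high):
    """Solve the 2x2 system for (water, iodine) contributions in one voxel."""
    return np.linalg.solve(MU_BASIS, np.array([mu_low, mu_high]))

water, iodine = decompose(mu_low=0.26, mu_high=0.21)
print(f"water-equivalent: {water:.2f}, iodine-equivalent: {iodine:.2f}")   # ~1.00 and ~0.10
```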

In the last 2-3 years, more sophisticated CT detectors have been commercialised, offering a spectroscopic measurement - i.e. they can measure multiple energy bands simultaneously. By using multiple polychromatic x-ray beams and spectral detectors, you can get a multi-point spectrum for each voxel, and also improve image quality by measuring scattered photons and correcting the reconstruction for them. The idea is that you might be better able to quantify multiple different atoms - like iron and calcium - so, by biasing the image contrast towards iron, you might improve detection of blood clots.

The novelty of this technique is that it appears to be a photon counting system with continuous energy measurement - so instead of measuring a spectrum with 4 or 5 broad energy bins, this is a high resolution spectrograph with single photon sensitivity. Essentially, it takes the current spectral CT technology one step further, by delivering a higher spectral resolution.

As this type of spectral imaging has only been commercialised for a few years, the actual medical applications are not yet clear. It is an active area for clinical research, with medical teams trying out the enhanced capability of spectral imaging to determine where it may be of value.

Comment Re:Someone is doing (Score 1) 355

It used to be. CFC-114 is used as a coolant for gaseous diffusion enrichment; it operates in a thermosiphon system carrying heat from exchangers in contact with UF6, to cold water tanks at higher elevation. It has the advantage that it does not absorb moisture and does not react with UF6, so that leaks would not result in dangerous reactions taking place (the most serious being transfer of moisture into the UF6). However, there are now HFC or HFO alternatives which could be used, although there is little need for them.

Gaseous diffusion enrichment is obsolete in most countries, due to the vast energy requirements, and huge capital costs. The US has closed all its gaseous diffusion plants. France is the only country still using the process on any scale (achievable as they have copious nuclear power to power the plant, and never developed centrifuge technology).

Comment Re:Coincidence != Causality (Score 1) 140

It's an observational case-control study. A group of cancer patients and a group of healthy controls were given a questionnaire about their blue light exposure and home address. The home address was used to estimate the effect of street lighting/external light pollution.

One of the problems is that the cancer patients had a lot more family history of the relevant cancer than did the controls. However, blue light exposure was still found to be a weak predictor, independent of family history, on multivariate regression.

The odds ratios reported in the various subgroups are very weak - and the positive results are only found in the maximum-exposure groups (the highest questionnaire answers about indoor illumination and, in separate comparisons, the highest tertile of external blue light).
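
For anyone unfamiliar with how "a weak predictor independent of family history" is established, this is what an adjusted odds ratio from a multivariate (logistic) regression looks like. The data below are simulated for illustration only, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
family_history  = rng.integers(0, 2, n)             # confounder
high_blue_light = rng.integers(0, 2, n)             # exposure of interest
logit_p = -1.0 + 1.2 * family_history + 0.3 * high_blue_light
case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # simulated case/control status

X = sm.add_constant(np.column_stack([family_history, high_blue_light]))
fit = sm.Logit(case, X).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios: [intercept, family history, blue light]
print(np.exp(fit.conf_int()))  # 95% CIs - a "weak predictor" has an OR only slightly above 1
```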

Comment Re:Externalized Costs (Score 1) 248

The other issue is that when serious nuclear accidents have occurred, there has often been overreaction by the authorities.

This really started with the Chernobyl accident; a large area of Ukraine was evacuated and turned into an exclusion zone. That was a justifiable approach for perhaps 30% of the region, but the majority was not sufficiently contaminated for there to be a meaningful public health hazard. Similarly, neighboring countries placed restrictions on land and livestock where the justification was borderline, if valid at all. Several years before the UK government de-restricted Welsh lamb, the radiation hazard from eating that lamb was such that if you ate 1 lb of lamb per day indefinitely, the radiation burden would result in an estimated 15-minute reduction in life expectancy. That's a hazard similar to smoking a single cigarette on one occasion.

The problem with these types of government response is that they are expensive, and they often disproportionately affect particular regions. Wealth and GDP are intimately associated with health outcomes (including life expectancy), and anything which reduces wealth is at risk of reducing health. In fact, by performing a regression of GDP against life expectancy, you can obtain a completely objective coefficient for this to inform decisions. In the Welsh hill-farm example, the harm to health caused by the economic damage from restrictions on land use was multiple orders of magnitude greater than any credible estimate of harm from radioactive contamination.
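
The regression being described can be sketched in a few lines. The numbers below are invented purely to show the mechanics (life expectancy is roughly linear in log income, so the slope converts directly into "years per % of GDP"):

```python
import numpy as np

# Illustrative, made-up numbers: GDP per capita (USD) vs life expectancy (years)
gdp      = np.array([5_000, 10_000, 20_000, 40_000, 60_000], dtype=float)
life_exp = np.array([68.0, 72.0, 76.0, 80.0, 82.0])

slope, intercept = np.polyfit(np.log(gdp), life_exp, 1)   # years per e-fold of income
days_per_pct = slope * 0.01 * 365                          # years lost per 1% GDP loss, in days
print(f"~{days_per_pct:.0f} days of life expectancy per 1% change in GDP (toy data)")
```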

It has been a similar story after the Fukushima accident. Other than for some land immediately in the vicinity of the plant, it is hard to justify the existence of any exclusion zone or any population evacuation. Estimates of radiation risk in even "heavily contaminated" parts of the exclusion zone are on the order of a reduction in population life expectancy of approximately 2-3 months. Compare this to other environmental risks - air pollution in Paris causes an estimated population life expectancy reduction of 9-12 months, purely from particulate emissions. That's around 4x as toxic as the Fukushima exclusion zone, yet so far no one has seriously suggested evacuating Paris as a public health emergency. However, tens of thousands of people were displaced from Fukushima, dozens dying in the process of evacuation, and thousands suffering serious mental illness as a result of the stress of losing everything, having been told that they had been poisoned, being placed in unstable temporary accommodation and being turned into outcasts by social stigma.

The real question to ask is, is overzealous government reaction to an accident an integral part of the risks of nuclear power? The harm figures are dramatically different if you do include it.

Comment Re:File complaints with NHTSA (Score 1) 188

What car actually governs the speed during a failure

Lots do. This is particularly common with "drive by wire" engines, where the ECU has much greater control over the engine.

Older cars with a mechanically actuated throttle were limited in the degree of control the ECU could have, because the ECU had virtually no control over air mass flow. Without that, it is very difficult for the ECU to limit engine power without risking further serious damage to the engine or the emissions system.

With drive-by-wire, the ECU has direct control over air mass flow, and manufacturers are therefore taking the opportunity to limit engine speed and torque under many more fault conditions, to prevent more expensive engine or emissions failures. Triggers for limp mode include things like excessive knock, excessive misfires, air control problems (e.g. charge air pressure control, charge air temperature control), and excessive coolant temperature.
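
As a caricature of that kind of logic (every name and threshold here is invented for illustration; no manufacturer's actual calibration is implied):

```python
# Map each fault to the limits a hypothetical ECU might impose when it is active.
FAULT_LIMITS = {
    "excessive_knock":     {"max_rpm": 3000, "max_torque_pct": 50},
    "repeated_misfire":    {"max_rpm": 2500, "max_torque_pct": 40},
    "boost_control_fault": {"max_rpm": 3500, "max_torque_pct": 60},
    "coolant_overtemp":    {"max_rpm": 2000, "max_torque_pct": 30},
}

def limp_limits(active_faults):
    """Return the most restrictive (max_rpm, max_torque_pct) across active faults,
    or None if no limp-mode fault is present."""
    limits = [FAULT_LIMITS[f] for f in active_faults if f in FAULT_LIMITS]
    if not limits:
        return None
    return (min(l["max_rpm"] for l in limits),
            min(l["max_torque_pct"] for l in limits))

print(limp_limits({"excessive_knock", "coolant_overtemp"}))   # -> (2000, 30)
```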

Comment Re:1975 (Score 1) 368

Reprocessing separates the spent fuel into 3 fractions: recovered uranium, plutonium and fission products.

The uranium is not typically re-enriched, due to employee radiation protection difficulties; instead it is either disposed of as ILW, blended with more highly enriched uranium to produce an intermediate enrichment, or blended with plutonium to give MOX.

The fission products are dried and fused into borosilicate glass, giving an extremely stable, highly concentrated and compact final high level waste material.

Reprocessing has been used in the UK for decades, largely because the first-generation reactor fuel was not suitable for direct disposal. However, in light of that experience, and the fact that it reduces the waste stream, it has been considered as a method for increasing the capacity of a final geological disposal facility.

For example, 1000 t of spent fuel would, after conditioning for direct deep geological disposal, require 1,540 m3 of HLW volume and 1,801 m3 of ILW, with a disposal footprint of 0.1 km2. By contrast, after reprocessing, the same fuel would require 341 m3 of HLW (0.03 km2) and 2,310 m3 of ILW. It would also generate MOX which, if directly disposed of after use, would require a further 348 m3 (0.025 km2).

The result is that by reprocessing, the HLW disposal footprint is reduced by approximately 50% while energy recovery is increased by approximately 15%. The ILW disposal requirement is increased, but the cost and area footprint for ILW disposal are minimal in comparison with HLW. Total repository costs would be expected to decrease by approximately €100 million per 1000 t reprocessed.
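
A quick check of the footprint arithmetic using the figures quoted above:

```python
# Footprints per 1000 t of spent fuel, as quoted above (km^2)
direct_hlw  = 0.10                  # direct deep geological disposal
reprocessed = 0.03 + 0.025          # vitrified HLW + once-used MOX disposed directly
reduction = 1 - reprocessed / direct_hlw
print(f"HLW footprint reduction: {reduction:.0%}")   # ~45%, i.e. roughly half
```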
