Comment Re:towed to the dealer? (Score 5, Informative) 315

A number of the Japanese manufacturers use a similar system.

Toyota use a dual NFC (RFID) / "far-field" radio system. The same transponder in the fob is connected to both an NFC antenna and a battery-powered MCU with an RF power amp.

With a working battery, a button push on the fob causes it to transmit the appropriate radio signal to the car. For keyless starting, the battery powers the RFID transponder and the RF amplifier, allowing a successful authentication whenever the fob is inside the vehicle.

In the event of a discharged or removed fob battery, there is a mechanical key concealed in the fob which can open the vehicle doors. By placing the fob directly on top of the "push-to-start" button, the transponder will be sufficiently energized by the car's antenna (concealed in the button) to complete an authentication transaction.
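
To make the "authentication transaction" concrete, here's a minimal sketch of the kind of challenge-response exchange these systems perform. This is illustrative only: the key size, nonce size and use of HMAC-SHA256 are my assumptions for the example - the actual transponder ciphers are proprietary and differ.

```python
# Illustrative challenge-response between car and fob. All sizes and
# the HMAC-SHA256 primitive are assumptions for the sketch; real
# immobiliser transponders use proprietary ciphers.
import hmac, hashlib, os

SHARED_KEY = os.urandom(16)  # provisioned into both car and fob at manufacture

def car_issue_challenge() -> bytes:
    # Car transmits a random nonce over the LF/NFC field; in the
    # dead-battery case that field also powers the transponder.
    return os.urandom(8)

def fob_respond(challenge: bytes) -> bytes:
    # Fob MACs the challenge with the shared key and replies.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()[:8]

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(expected, response)

nonce = car_issue_challenge()
print(car_verify(nonce, fob_respond(nonce)))  # True -> engine start permitted
```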

Comment Re:They shouldn't have access in the first place (Score 1) 84

That's not correct. The Data Protection Act allows disclosure "on or by order of a court", for the purpose of "legal action", for "legal advice", or for the "defence of any legally recognised right".

So, for example, if I enter into a contract with another party and then fail to pay a contractual obligation, the other party can pass my details on to a debt collection agency even if I refused consent for my personal information to be handed to a 3rd party, as they are defending their legal right to collect monies owed.

Comment Re:Why not? (Score 2) 84

No, the legal process of handling illegal parking has been delegated to councils and does not require police involvement.

However, more concerning is the fact that there are a lot of private parking enforcement contractors operating on private land. The DVLA also offers a service to these private companies, whereby it provides a vehicle keeper's identity details from a plate number in exchange for a fee. Technically, this service is open to any party who can provide a legitimate reason for wanting it.

Hence, if I were to park at a supermarket car park and overstay the 2hr free-parking period, I might "implicitly agree to a contract where I pay £100 per 24 hours to park", as stated in the small print on a sign by the entry road. A private contractor can then contact the DVLA with my plate details, and the DVLA will provide my name, address, DOB and other details.

I recently tried to do the same, because a driver was repeatedly parking on my land and obstructing access to it by my own vehicles. He failed to respond to notes on the car, and he kept late hours, so I never saw him in person. I contacted the DVLA (and paid their fee) with the plate details, and explained that I needed the details to send formal notice of impending legal action for trespass. The DVLA refused, stating that I did not have legitimate grounds to request this privileged personal information.

Comment Re:The evil that is laser disc. (Score 1) 368

Probably a magneto-optical disc, as those were widely used in medical imaging at that time. Although each generation of MO disc was supposedly backwards compatible, in practice the backwards compatibility was flaky as hell. So, although a 540 MB MO disc should be readable in a 5.2 GB drive, this often wouldn't work; only a 540 MB drive could reliably read it.

In general, the workstations were supplied as a complete package with an expensive support contract, so no hardware modifications were possible. As MO was the standard method of archiving medical data in the late 1990s/early 2000s, when this device was likely acquired, there may not have been any other type of drive attached to the workstation. So, while the image could be displayed on screen, it could not be copied to a new medium (like a CD).

Alternatively, it's possible that the last 540 MB drive died, and none of their remaining drives could read the discs. At one hospital where I was doing some research on MRI scans, I needed to retrieve some historical scans which were on 540 MB MO discs. We couldn't read them on anything in the hospital, even though our more modern drives were supposedly compatible (or the OS was incompatible, e.g. the discs were formatted in ext, but the drive was connected to a Windows box). In the end, I used some research funds to buy a refurbished drive off eBay and connected it to a Linux box, which could copy the data onto a more practical format. I could get away with doing that myself in a research context - if a hospital had to get an IT consultant in to source the drive and do the format conversion, the bill could have been substantial.

Comment Re:He should seek legal advice. (Score 2) 368

In the UK, 7 years from last modification date is generally regarded as the minimum retention period. Up till now, paper records would be destroyed after this point, due to the cost and space constraints of maintaining them. However, some hospitals would have microfilmed them, or scanned them into a document management system prior to destruction, with retention of the microfilm or digital data for a longer period.

However, although 7 years is the "normal" retention time, there are lots and lots of exceptions: cancer cases, clinical trials and legal cases - 25 years after death; children - at least until age 25, or 7 years after death; the list goes on and on...

One of the things about digital data storage, especially server-based storage, is that it is now so cheap that there is much less pressure to destroy data. I was recently involved in purchasing a PACS system (a digital X-ray/CT/MRI storage/viewing solution). One of the things I asked the vendors was whether they offered a method to destroy old data to free up space on the discs. (The previous system was subject to an insane markup on the cost of the SAN; on top of that, it didn't support tiered storage, so the only storage upgrade option the solution vendor would support was another EMC box of 15k drives with a 200% markup on top.) Out of 8 vendors, 7 stated that they did not support automated data destruction; the answers basically came back as "we sell this software in 53 countries. We have never had this request outside of the UK. Bearing in mind that we are only charging you $500/TB for archive storage on SATA arrays, realistically, why would you ever want to delete anything when the cost is that low, and only set to drop further if you purchase an upgrade at a later date?".

While current guidelines do recommend data destruction once the data is sufficiently old, with the cost of storage continuing to drop, we have decided that it is better to hoard it just in case.

Comment Re:He should seek legal advice. (Score 1) 368

Those are the statutory maximums. However, there is a get-out clause: the data holder only needs to provide a copy of data which can be accessed without "disproportionate effort".

In other words, your name might have been mentioned a couple of times in an e-mail conversation, and those mail spools have since been purged under retention policies. There might still be a great-great-grandfather backup tape with a snapshot of the Exchange server on it, and that might contain e-mails referring to you. However, the effort involved in standing up a new Exchange environment, restoring the snapshot to it, and running a search is not reasonable for a generic information request (though retrieval might be appropriate in the context of court-ordered discovery).

If the data holder advises that the costs of data copying are disproportionately large, they can refuse to provide you with a copy. If you insist, then they are entitled to charge you their legitimate costs in making the copy.

Comment Re:What a fuckup (Score 4, Informative) 368

The old system may not have been phased out completely - only phased out for new data. In fact, this is typically what happened with the older systems. Data was stored on MO discs, which sat on yards and yards of shelves. Although the data on the discs is in an open, standard format, the discs themselves are an obscure and obsolete medium.

When a new system was installed (which, after about 2000, would have been networked, with data stored on a large server rather than on individual discs/tapes), it would have been too labour-intensive to convert the old data - and indeed, the existing equipment may not have supported it, or if it did, it may have required expensive configuration on both the image acquisition device and the server side. Setting up a connection from e.g. a CT scanner to an image server is an expensive process: configuring the server's IP address in the "image destination" config on the scanner is typically a manufacturer service call-out ($4k+), and there must be a matching entry on the server with the scanner's IP address - again, a vendor-only setup, plus a new image-source IP address licence ($5k+).

So, even though the old system has been decommissioned for new use, the discs may still be available and the workstation still functional, so that the discs can be read and the study examined by a doctor who needs it. However, there may be no way to transfer the data to a new format; e.g. the workstations may not have been fitted with a CD writer, just the MO drive.

This means that there is no way for the hospital to get the data off an MO disc and onto a contemporary format (like CD or DVD). The only way to do it would be to acquire an old external SCSI CD writer compatible with the old workstation (which may be something obscure like a SPARCstation or an SGI Indigo2) from a specialist IT supplier - or to acquire an MO drive which can be connected to a modern workstation with a CD writer or network access. (In fact, even that isn't the end of the story: the old equipment may have been Unix/Linux based, which means the MO discs might be formatted in ext2 - an MO drive on a Windows workstation won't help with that.) It is entirely plausible that this is the first request they have had for the data to be migrated to a new format, and the equipment and configuration needed would have been expensive.

Comment Concentrated solar is less efficient (Score 4, Informative) 237

Unfortunately, this is a concentrated-light solution, which means the efficiency figures quoted apply only to direct sunlight. Direct sunlight is, however, only a proportion of the energy collected by PV modules, so the "efficacy", and therefore the total energy production, of concentrated solar solutions is worse than that of unconcentrated modules.

The reason comes from diffuse sunlight - light that has been scattered by the atmosphere or by clouds. This typically accounts for 10% of module illumination in direct sunlight, and a much higher fraction in the presence of atmospheric haze or cloud; even in lightly overcast conditions, you can expect unconcentrated PV to yield approx 10-15% of its direct-illumination yield, because of the diffuse illuminance.

Diffuse light cannot be concentrated by optics, so concentrated solar PV modules cannot utilise it (more precisely, they can utilise it, but not concentrate it - so if the system uses 10:1 concentration, the energy yield from diffuse illumination falls from 10-15% to 1-1.5%).

A boost from 30 to 33% efficiency from switching to concentrating modules could be completely wiped out by the loss of diffuse yield, even in direct sunlight. In hazy or cloudy conditions, the yield is reduced much more severely, resulting in a net reduction in productivity despite the higher nameplate efficiency.
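
To put numbers on it, here's a quick back-of-envelope calculation. The 10% diffuse fraction, the 10-15% overcast yield and the 30%/33% efficiencies are the figures from above; the rest is simple arithmetic.

```python
# Relative energy yield of a flat PV module vs a concentrating one.
# A concentrator only collects 1/concentration of the diffuse light.

def relative_yield(efficiency, direct, diffuse, concentration):
    return efficiency * (direct + diffuse / concentration)

# Clear sky: direct beam = 1.0, diffuse adds ~10% on top.
flat = relative_yield(0.30, direct=1.0, diffuse=0.10, concentration=1)
conc = relative_yield(0.33, direct=1.0, diffuse=0.10, concentration=10)
print(f"clear sky - flat: {flat:.3f}, 10:1 concentrator: {conc:.3f}")
# flat: 0.330, concentrator: 0.333 -- the efficiency gain all but vanishes

# Lightly overcast: no direct beam, diffuse ~12.5% of clear-sky direct.
flat_oc = relative_yield(0.30, direct=0.0, diffuse=0.125, concentration=1)
conc_oc = relative_yield(0.33, direct=0.0, diffuse=0.125, concentration=10)
print(f"overcast  - flat: {flat_oc:.4f}, 10:1 concentrator: {conc_oc:.4f}")
# flat: 0.0375, concentrator: 0.0041 -- roughly 9x worse
```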

This technology is best suited to areas with the most intense direct illumination, e.g. dry areas at low latitudes (where diffuse light makes up a proportionally smaller share).

Comment Re:Any other variables..? (Score 3, Informative) 206

The original research cites a large number of studies with large numbers of children (hundreds or thousands). One of the major studies cited looks at different "types" of neglect, which it calls "global neglect" and "chaotic neglect". These mean multi-modal or single-modal sensory deprivation, e.g. no exposure to speech, no exposure to physical experiences (for example, not being allowed out of bed), no exposure to cognitive stimuli, etc.

The research showed that for "chaotic neglect" (i.e. one aspect of stimulus missing), brain scans were usually normal, or only slightly abnormal (e.g. brain volume reduced). However, for "global neglect" (multiple aspects of stimulus missing), nearly half the brain scans were abnormal, showing severely reduced brain volume.

Of course, there are other aspects to neglect, not just sensory and intellectual deprivation; but that was not what the image, or the description in the text, was about. This review purely looked (as far as is possible in an observational study) at the differences between partial and severe sensory/intellectual deprivation.

Comment Re:Meh... (Score 5, Informative) 234

You're right about the network architecture, but things rapidly get complex.

Let's take the example of MRI/CT. How much data is in a CT or MRI study, or even an X-ray study? A single X-ray image (e.g. a chest X-ray) taken with a modern digital machine is about 60 MB (a 30-megapixel image at 16 bits per pixel).

My new CT scanner, if I prescribe a "full neuro" protocol, generates 16,000 files of 500 kB each. If I'm prescribing a "full neuro", it means that minutes count. I need that data set sent not just to a PACS (image repository and viewing software), but also to a PC with 3rd-party software (which is capable of the complex analysis of the data), and I need it ready within 5 minutes. Not only do I need it in my office within 5 minutes; the doctor dealing with the patient in the ER needs to have (some of) it in the ER within 5 minutes. Then, after everything is said and done, I need to send the data to my office at the university, so that I can run it through my research software.
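
For anyone who wants to check the arithmetic on those figures:

```python
# Data volumes implied by the figures above.
xray_bytes = 30e6 * 2      # 30 Mpixel x 16 bits/pixel = 60 MB per chest X-ray
ct_bytes = 16_000 * 500e3  # 16,000 files x 500 kB = 8 GB per "full neuro" CT

print(f"chest X-ray:   {xray_bytes / 1e6:.0f} MB")
print(f"full neuro CT: {ct_bytes / 1e9:.0f} GB")

# Delivering just one full copy of that CT study within the 5-minute
# deadline needs this much sustained throughput (before counting the
# PACS, ER and research destinations separately):
rate = ct_bytes * 8 / (5 * 60)
print(f"sustained rate: {rate / 1e9:.2f} Gbit/s")  # ~0.21 Gbit/s per copy
```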

If it was just PACS - no problem. You put the scanners and the PACS incoming-data server on a restricted VLAN. Have the incoming PACS server communicate with the main PACS application and data-store servers over a private VLAN, and have the PACS app servers face the hospital clients on the main hospital VLAN (or individual departmental VLANs).

However, at my hospital we also get several hundred CTs/MRIs per day sent in from outside, and these need to get onto the PACS. Many come on CD/DVD. Some come via VPN tunnels. Some come via 3rd-party proprietary transfer services. (The DICOM protocol used to transfer medical images doesn't support encryption, so it must be tunnelled in some way.) Now you have to somehow connect all these incoming points to your restricted VLAN (or you open your wallet to your PACS vendor for another software licence, at a cost that makes Oracle enterprise licensing look like chump change).

What if your PACS vendor has you by the balls on your SAN contract, so that you are paying $10 per GB plus $2 per GB per year? Do you really want to send that 8 GB dataset to PACS (which can't actually do anything useful with it - and remember, as a medical-grade archiving device, you can't delete)? Or do you now need to start putting PCs with 3rd-party software on your restricted VLAN so they can talk to the scanners?
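
The per-study arithmetic on that contract, using the numbers quoted above:

```python
# Cost of parking one unwanted 8 GB outside study on the SAN contract
# above ($10/GB up front + $2/GB/year, with no option ever to delete):
size_gb = 8
print(f"up front: ${size_gb * 10}")          # $80
print(f"ongoing:  ${size_gb * 2} per year")  # $16/year, forever
# Multiply by several hundred incoming studies per day and the
# restricted-VLAN-full-of-3rd-party-PCs option starts to look cheap.
```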

Comment This is extremely common. (Score 5, Informative) 234

The term "medical device" has a broad definition; it includes obvious things such as laboratory analysers, X-ray equipment, etc., but it also includes PCs running specific types of software, such as medical records software. Most of these things run general-purpose OSs - some embedded, some desktop.

E.g. Windows XP is a common platform for things like ultrasound scanners, MRI scanners, etc. XP Embedded is quite common on things like laboratory equipment. Variants of Linux are also in widespread use - albeit often very old ones. E.g. I work with an MRI scanner that runs a 2.2 kernel.

Now, things like analysers and scanners should be on their own VLAN, with connections only to their application servers, and with those servers heavily firewalled from the general-purpose VLANs. However, this often isn't the case, and I've seen a number of installations where you can just sit down at a random PC and SSH into an MRI scanner (these things usually have generic root passwords, which are written in the service manual - once you know what the passwords are, you can get into any device of that make and model).

The biggest problem, however, is that these machines never get updated. The manufacturers often won't support any updates to the OS, or even permit hotfix installation, never mind a 3rd-party security package (for the more general-purpose devices). For example, earlier this year one hospital upgraded their PACS system (software for storing and displaying X-ray/MRI/CT images) and bought a new set of dedicated workstations (quad-core Xeon E5, 8GB RAM, dual Quadro), but because the PACS client software had to interface with a number of other client software packages, and those vendors had strict requirements, these machines ended up being loaded with XP SP1 32-bit and Java 1.4. Unsurprisingly, these aren't regularly patched; worse, they can no longer update their anti-virus software, as the current version of their chosen AV product won't run on this configuration (so they're stuck using an obsolete, unsupported version).

I saw an extreme example of this a few years ago when the Conficker worm hit. There was a group of hospitals in a major city which shared the same infrastructure, and they had a very large PACS system. The worm got onto the PACS VLAN and essentially killed the servers. The system was completely down for days, because as soon as the servers were rebooted or re-imaged, the worm killed them again. The vendor stubbornly refused to apply the hotfix and refused permission to install the hospital's antivirus system on the servers/workstations. The only thing that got it moving was when the CEO of the hospitals made a conference call with the hospitals' lawyers and the CEO of the PACS vendor, telling them that they were going to f**k them so hard with the SLA stick that they wouldn't be able to sit down for a month. After that call, the vendor agreed to install the hotfix, and the system came back online.

Comment Re:...Why? (Score 2) 328

"No, it isn't. It's just that unlike GPS, the precise part is open to the general public."

Correct. The L5 intermediate-precision Galileo signal will be freely available to the public. However, this signal is not as precise as GPS's encrypted "precision" (aka military) signal.

The freely available L1 signal has essentially the same format as the GPS "coarse acquisition" signal, and is therefore expected to offer broadly similar performance.

The advantage of having 2 distinct frequencies available to the public is that it becomes possible to correct for atmospheric dispersion. Within the ionosphere, the propagation velocity of the signals from the satellites is altered by prevailing "space weather" conditions; this is actually the major source of error in GPS. As different frequencies are affected differently, a multi-frequency receiver can measure this dispersion directly. Single-frequency receivers must either use a model based on satellite elevation, or use a correction obtained from another source (e.g. DGPS or SBAS).
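
For the curious, the standard first-order correction is the "ionosphere-free" combination: the ionospheric group delay scales as ~1/f², so two pseudoranges at different frequencies let you cancel it. A sketch using the published E1 and E5a carrier frequencies (the range and delay numbers are made up for the example):

```python
# Dual-frequency "ionosphere-free" pseudorange combination.
# First-order ionospheric group delay scales as ~1/f^2, so two
# pseudoranges at different frequencies can cancel it exactly.

F_E1 = 1575.42e6   # Hz, Galileo E1 (same carrier as GPS L1)
F_E5A = 1176.45e6  # Hz, Galileo E5a (same carrier as GPS L5)

def ionosphere_free(p1, p2, f1=F_E1, f2=F_E5A):
    """Combine two pseudoranges (metres) to remove first-order
    ionospheric delay: (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Example: true range 20,000 km; ionosphere adds 10 m at E1, and
# proportionally more at the lower E5a frequency.
true_range = 20_000_000.0
d1 = 10.0
d2 = d1 * (F_E1 / F_E5A) ** 2

corrected = ionosphere_free(true_range + d1, true_range + d2)
print(f"residual error: {corrected - true_range:.6f} m")  # ~0
```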

The Galileo service plans to offer a paid-for premium service with a further signal, technically equivalent to the GPS "precision" signal but encrypted using commercial cryptography. This service is also expected to provide a high-bandwidth downlink for rapid acquisition of satellite ephemeris and differential-correction data (when conventional terrestrial assisted-GPS techniques are not available), as well as a guaranteed level of service, with guaranteed OTA alerts if a satellite malfunction is detected, allowing a receiver to immediately drop a bad satellite rather than attempt to solve the location problem with bad data. As this signal will be transmitted on a 3rd frequency, triple-frequency receivers become possible, which could perform an even more precise measurement of atmospheric signal dispersion.

Comment Re:Thoughts (Score 3, Informative) 163

It's a neutron emitter. Alpha particles interact with beryllium nuclei to emit neutrons. By encapsulating a mixture of americium-241 and beryllium, the alpha radiation (and gamma radiation) can be contained while the neutrons are allowed out, where they can be used for chemical analysis (in this case, for analysing the composition of the rocks around the well bore).
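
For reference, this is the classic (α,n) reaction on beryllium, with the americium-241 decay supplying the alpha particles:

```latex
{}^{241}_{95}\mathrm{Am} \;\to\; {}^{237}_{93}\mathrm{Np} + \alpha
\qquad\qquad
\alpha + {}^{9}_{4}\mathrm{Be} \;\to\; {}^{12}_{6}\mathrm{C} + {}^{1}_{0}n
```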

Quite apart from the fact that the source is dangerous in its own right - neutrons are an ionising radiation - neutrons are a particular nuisance because they can leave radioactivity behind by activating the nuclei of nearby materials (metals are particularly troublesome).

Comment Re:Already Broken (Score 3, Informative) 183

I haven't tried IE or Firefox, but Magnifier doesn't work on Chrome windows. The magnified view just shows an empty page.

I'm guessing that whatever Chrome is doing - OpenGL, or whatever it is using to composite the pages - bypasses the layer that Magnifier hooks into.

Similarly, the McAfee tool probably works by using graphics hardware overlays, rendering the image directly into the graphics buffer and then using hardware compositing. This works quite well to defeat low-end screen capture software. Better software, such as FRAPS, is capable of capturing the overlays and then re-compositing the final image in software.

Comment Re:Not about ATA, about enterprise data storage (Score 5, Informative) 192

The "Turn off Windows write-cache buffer flushing on the device" option activates an ancient windows bug, and should never be used.

When Windows 3.11 was released, MS accidentally introduced a bug whereby a call to "sync" (or whatever the Windows equivalent was called) would usually be silently dropped. At the time, a few programmers noticed that their file I/O appeared to have improved, and attributed this to MS's much-marketed new 32-bit I/O layer. What a lot of naive developers didn't notice was that the reason their I/O appeared faster was that the OS was handling file streams in an aggressive write-back mode, and calls to "sync" were being ignored by the OS.

Because of this, there was a profusion of office software - in particular, accounting software - which would "sync" frequently: some packages would call "sync" on every keypress, or every time Enter was pressed or the cursor moved to the next data-entry field. As this call was effectively a NOP on 3.11, a lot of such packages made it onto client machines, and because they were fast, no one noticed.
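
You can still demonstrate the cost of that pattern today. A rough sketch (the file name and record size are arbitrary) - the fsync'd loop is dramatically slower on a real disk, which is exactly the slowdown users saw once the flush started being honoured:

```python
# Buffered writes vs forcing a flush to the platter on every record,
# mimicking the "sync on every keypress" pattern described above.
import os, time

def write_records(path: str, n: int, sync_each: bool) -> float:
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 64)        # one "keypress" worth of data
            if sync_each:
                f.flush()             # push Python's buffer to the OS...
                os.fsync(f.fileno())  # ...and the OS cache to the disk
    return time.perf_counter() - start

print(f"buffered: {write_records('ledger.dat', 1000, False):.3f} s")
print(f"fsync'd:  {write_records('ledger.dat', 1000, True):.3f} s")
```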

With Win95, MS fixed the bug. Suddenly, corporate offices around the world had their accounting software reduced to glacial speed, and tech-support departments at software vendors rapidly went into panic mode. Customers were blaming MS, Win95 was getting slated, lawyers were starting to drool, and developers were calling senators and planning anti-trust actions. The whole thing was getting totally out of hand.

In the end, MS decided the only way to deal with this bad PR was to put an option into Windows whereby the bug could be reproduced for software which depended upon it. The option was hidden away reasonably well, in order to stop most people from turning it on and running their file system in a grossly unstable mode. However, from Win95 through Vista it had the rather cryptic name "Advanced performance", which meant that a lot of hardware enthusiasts would switch it on to improve performance, without any clear idea of what it did. At least in Win7 it now has a clearer name, even though that name still doesn't make clear that the option should only be used with defective software.
