Yes, they are well aware of that fact. Their argument was that open source was a security risk, because it would make probing for vulnerabilities easier. And because open source was a risk, the source had to be closed and shown only to persons on a strictly need-to-know basis.
The problem wasn't so much with virtualized IO. The problem was the way in which the middleware communicated with the *client* software on the workstation. It did some horrible hackery where it loaded the other apps' DLLs and directly called various interfaces they exposed in order to send messages. No RPCs or pipes in this software (which says something about the quality of the middleware).
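For what it's worth, here's a minimal sketch (in Python, with hypothetical DLL names, exports and port numbers) purely to illustrate why the direct-DLL approach pins both sides to the same Windows instance, whereas any networkable IPC would not:

```python
import ctypes
import socket

# What the middleware apparently did: load the client app's own DLL into its
# process and call the exported entry points directly. Both sides must then run
# in the same Windows instance (same machine, same bitness), so splitting them
# across a VM boundary is impossible.
client_dll = ctypes.WinDLL(r"C:\Program Files\ClientApp\msgapi.dll")  # hypothetical path
client_dll.SendMessageToApp(b"patient-update|...")                    # hypothetical export

# A conventional alternative: send the same message over a local socket (or a
# named pipe / RPC endpoint). The peer can then live in another process, another
# VM, or on another host without the caller changing at all.
sock = socket.create_connection(("127.0.0.1", 5000))  # hypothetical port
sock.sendall(b"patient-update|...")
sock.close()
```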
No one could find a way of doing that unless the client software ran in the same VM as the middleware. This would have been an option, but these workstations did *nothing* else apart from run these half-dozen apps.
It was decided that it was better to just run XP on the bare metal than to load Windows 7 with nothing except VMware, which would then run the fully loaded XP.
The problem is customers. I work at a major hospital and a local consortium is looking to purchase some new medical records software, worth about $10 million.
We've been drafting the new contract for tender, and line 1 of the tender instructions is "The software will run on Windows Server 2008 R2 or Windows Server 2012 64-bit on the servers, and on Windows XP, 7 and 8 32-bit and 64-bit on the client side". I protested at this, but was told by the technical chair that this term was not negotiable as it was a critical part of the spec; they simply did not have the in-house experience to manage a *nix system.
Later on, there was another line in the tender instructions: "The distribution of the source code of the product must be strictly controlled with appropriate audit trails for persons who have seen it, including the source code of any 3rd party components used within the product". Again, I protested about this, but the chair of information governance and security said that this term was non-negotiable due to the large volume and the critical nature of the data stored in this system!
True. However, there may be issues of vendor support. Some business apps, and this includes specialist medical apps, are mission critical, or at least sufficiently important that the business may be compromised in the event of failure.
I know one hospital that recently upgraded their hardware. However, some of the middleware needed to make their various medical records applications work together was only supported by the vendor on XP SP1. There were several problems:
1. The critical nature of this middleware, and the fact that the vendor would not support Windows 7 (or even XP SP3) with their version of the software.
2. The complex interaction of this middleware with so many other apps meant that they could not run the middleware in a VM, as it would not connect to the other apps via OLE/COM or whatever non-networkable protocol it used.
3. The prohibitive cost of sourcing an updated version of what was effectively a custom-built solution, and the fact that the original vendor had been bought out by a new company that was desperate to kill the original product, but was tied into a 10-year support contract. So, although they were contracted to provide 10 years of support, they were only going to support the original config.
The result was that when the original hardware reached end-of-life and had to be replaced late last year, the hospital had workstations with shiny new quad-core Xeons, 8 GB of ECC RAM, 15k RPM SAS RAID and 2 GB Quadro cards, all running XP SP1.
Well, it's $1000 for the consumables for the device, and the operator's time. Then there's the cost of the machine, building, admin, etc.
In reality though, this is far cheaper than what is done at present. Currently, if a physician suspects a genetic disorder, the typical process used in a medical genomics laboratory is a "matching" technique, where the patient's DNA is matched against known mutations. Typically, this costs around $500-700 per mutation tested against. For a number of diseases, this only gives 75-80% accuracy, because certain genes are prone to new, spontaneous mutations and have a lot of "normal", functioning variants - so simply checking for a known good gene isn't an option. As a result, these patients end up with only a presumptive diagnosis, leading to difficulties in family and reproductive counselling (i.e. do siblings need to be aware of the risk of passing on a genetic disorder to their offspring?)
Sequencing is occasionally performed in patients with unknown, presumed genetic diseases where a suspected gene is known - but the cost is very high, and it is infrequently done unless a whole family is affected and it is possible to identify which gene is likely to be the culprit.
Total genome sequencing, while not a panacea, would greatly help the diagnosis and research into newly recognised, presumed genetic diseases. If the total cost of the testing can be brought down to $2000 per analysis, then that would be cheap compared to the current techniques for genetic diagnosis.
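As a rough illustration of that comparison (the per-test and whole-genome figures are the approximate ones quoted above; the panel size is a made-up example):

```python
# Back-of-envelope comparison using the rough figures quoted above.
cost_per_mutation_test = 600       # ~$500-700 per known mutation tested against
mutations_screened = 5             # hypothetical panel size, for illustration only
diagnostic_yield_matching = 0.78   # ~75-80% for genes with many novel/benign variants

cost_whole_genome = 2000           # target all-in cost per analysis quoted above

cost_matching = cost_per_mutation_test * mutations_screened
print(f"Mutation matching: ~${cost_matching}, "
      f"~{diagnostic_yield_matching:.0%} chance of a definitive diagnosis")
print(f"Whole-genome sequencing: ~${cost_whole_genome}, "
      f"and also picks up novel/spontaneous mutations the panel would miss")
```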
Finally, as to the MRI - the actual cost of an MRI scan including scanner, building, maintenance, staff, admin is about $300-600 depending on scan complexity (or at least, that's the "bulk" price charged by private MRI facilities to insurers or hospitals who have exceeded the capacity of their own MRI scanners).
The big changes which have affected Apple with the implementation of IEC 60950 Amendment 1 are:
1. Requirement for guards and warnings on fans located within equipment where the fans are accessible during user maintenance/servicing.
The previous regulations did not specify particular requirements for guarding during servicing, on the assumption that service personnel would be expected to know where fans, etc. are.
The new regs for fans in areas accessible during user maintenance are: a fan likely to cause pain if contacted by a finger needs, at minimum, a warning label; a fan likely to cause injury if contacted needs both a label and a guard. In both cases, if the user is expected to service the fan, then some method of deactivating the fan needs to be labelled (e.g. a sticker saying "disconnect mains power before removing fan guard" would be sufficient). A rough sketch of this decision logic follows point 2 below.
Where equipment is intended for maintenance by qualified service personnel only, then fan guards are not required.
2. New methods of testing fully solid-state circuit breakers used for providing power to externally accessible ports.
Prior regs only required short-circuit testing of electronic circuit breakers (e.g. as provided on USB ports). The new regs prescribe a whole suite of tests, including response times, handling pulsed overloads, etc.
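Here is the decision logic of point 1 as I read it, written out as a small sketch (my own paraphrase of the requirements, not the normative wording of the standard):

```python
def fan_requirements(hazard: str, user_accessible: bool, user_services_fan: bool):
    """hazard: 'none', 'pain' or 'injury' on finger contact with the fan."""
    if not user_accessible:
        # Maintenance by qualified service personnel only: no guard required.
        return []
    required = []
    if hazard in ("pain", "injury"):
        required.append("warning label")
    if hazard == "injury":
        required.append("guard")
    if user_services_fan and required:
        # e.g. a sticker: "disconnect mains power before removing fan guard"
        required.append("labelled means of deactivating the fan")
    return required

print(fan_requirements("injury", user_accessible=True, user_services_fan=True))
# ['warning label', 'guard', 'labelled means of deactivating the fan']
```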
A number of the Japanese manufacturers use a similar system.
Toyota use a dual NFC (RFID) / "far-field" radio system. The same transponder in the fob is connected both to an NFC antenna and to a battery-powered MCU and RF power amp.
With a working battery, a button push on the fob will cause it to transmit an appropriate radio signal to the car. For keyless starting, the battery will power the RFID transponder and the RF amplifier to allow a successful authentication whenever the fob is in the interior of the vehicle.
In the event of a discharged or removed fob battery, there is a mechanical key concealed in the fob which can open the vehicle doors. By placing the fob directly on top of the "push-to-start" button, the transponder will be sufficiently energized by the car's antenna (which is concealed in the button) to complete an authentication transaction.
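As a highly simplified sketch of the idea: the same secret-holding transponder can complete a challenge-response either under its own battery power (far-field keyless entry/start) or when energised by the reader's field (fob held against the start button). The crypto and framing below are invented for illustration and bear no relation to Toyota's actual protocol.

```python
import hmac, hashlib, os

class Transponder:
    def __init__(self, shared_key: bytes):
        self.key = shared_key

    def respond(self, challenge: bytes, powered: bool):
        # 'powered' is True if either the fob battery or the reader's NFC field
        # supplies enough energy to run the transponder.
        if not powered:
            return None
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

class Car:
    def __init__(self, shared_key: bytes):
        self.key = shared_key

    def authenticate(self, fob: Transponder, fob_powered: bool) -> bool:
        challenge = os.urandom(16)
        response = fob.respond(challenge, fob_powered)
        expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
        return response is not None and hmac.compare_digest(response, expected)

key = os.urandom(32)
car, fob = Car(key), Transponder(key)
print(car.authenticate(fob, fob_powered=True))    # battery OK, or fob held on the button
print(car.authenticate(fob, fob_powered=False))   # flat battery, fob in pocket: fails
```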
That's not correct. The Data Protection Act allows disclosure, "on or by order of a court", for the purpose of "legal action", for "legal advice" or for "defence of any legally recognised right".
So, for example, if I enter into a contract with another party, even if I refuse consent for my personal information to be handed to a 3rd party; if I fail to pay a contractual obligation, the other party to the contract can pass my details on to a debt collection agency, as they are defending the legal right to collect monies owed.
No, the legal process of handling illegal parking has been delegated to councils and does not require police involvement.
However, more concerning is the fact that there are a lot of private parking enforcement contractors operating on private land. The DVLA also offers a service to these private companies, whereby it provides a driver's identity details from a plate number in exchange for a fee. Technically, this service is open to any party who can provide a legitimate reason for wanting it.
Hence, if I were to park in a supermarket car park and overstay the 2hr free-parking period, I might "implicitly agree to a contract where I pay £100 per 24 hours to park", as stated in the small print on a sign by the entry road. A private contractor can then contact the DVLA with my plate details, and the DVLA will provide my name, address, DOB and other details.
I recently tried to do the same, because a driver was repeatedly parking on my land and obstructing access to it by my own vehicles. He failed to respond to notes on the car, and he kept late hours, so I never saw him in person. I contacted the DVLA (and paid their fee) with the plate details and explained that I needed the details to send formal notice of impending legal action for trespass. The DVLA refused, stating that I did not have legitimate grounds to request this privileged personal information.
Probably magneto-optical disc, as those were widely used in medical imaging at that time. Although each generation of MO disc was supposedly backwards compatible, in general, the backwards compatibility was flaky as hell. So, although a 540 MB MO disc should be readable in a 5.2 GB drive - in practice, this often wouldn't work. Only a 540 MB drive could be used.
In general, the workstations were supplied as a complete package with an expensive support contract, so no hardware modifications were possible. As MO was the standard method of archiving medical data in the late 1990s/early 2000s, when this device was likely acquired, there may not have been any other type of drive attached to the workstation. So, while the image could be displayed on screen, it could not be copied to a new medium (like a CD).
Alternatively, it's possible that their last 540 MB drive died, and none of their existing drives could read it. I know at one hospital where I was doing some research on MRI scans, I needed to retrieve some historical scans which were on 540 MB MO discs. We couldn't read them on anything in the hospital, even though our modern drives were supposedly compatible (or the OS was incompatible, e.g. the discs were formatted in ext, but the drive was connected to a Windows box). In the end, I used some research funds to buy a refurb drive off eBay, and connected it to a Linux box which could copy the data onto a more practical format. I could get away with doing that myself in a research context - if a hospital had to get an IT consultant in to source the drive and do the format conversion, the bill could have been substantial.
In the UK, 7 years from last modification date is generally regarded as the minimum retention period. Up till now, paper records would be destroyed after this point, due to the cost and space constraints of maintaining them. However, some hospitals would have microfilmed them, or scanned them into a document management system prior to destruction, with retention of the microfilm or digital data for a longer period.
However, although 7 years is the "normal" retention time, there are lots and lots of exceptions: cancer cases, clinical trials, legal cases - 25 years after death; children - at least till age 25, or 7 years after death; the list goes on and on...
One of the things with digital data storage, especially server-based storage, is that it is now so cheap that there is much less pressure to destroy data. I was recently involved in purchasing a PACS system (a digital X-ray/CT/MRI storage/viewing solution). One of the things I asked the vendors was whether they offered a method to destroy old data to free up space on the discs (the previous system was subject to an insane markup on the cost of the SAN, and on top of that it didn't support tiered storage, so the only storage upgrade option the solution vendor would support was another EMC box of 15k drives with a 200% markup on top). Out of 8 vendors, 7 stated that they do not support automated data destruction; the answers basically came back as "we sell this software in 53 countries and have never had this request outside of the UK. Bearing in mind that we are only charging you $500/TB for archive storage on SATA arrays, realistically, why would you ever want to delete anything when the cost is that low, and only set to drop further if you purchase an upgrade at a later date?".
So, while current guidelines do recommend destroying data once it is sufficiently old, with the cost of storage continuing to drop, the general decision has been that it is better to hoard it just in case.
Those are the statutory maximums. However, there is a get-out clause: the data holder only needs to provide a copy of the data which can be accessed without "disproportionate effort".
In other words, your name might have been mentioned a couple of times in an e-mail conversation, and those mail spools have now been purged under retention policies. However, there might be a great-great-grandfather backup tape with a snapshot of the exchange server on it, and that might contain e-mails referring to you. However, the effort involved in creating a new exchange environment, restoring the snapshot to it, and running a search is not reasonable for a generic information request (but retrieval might have been appropriate in the context of court-ordered discovery).
If the data holder advises that the costs of data copying are disproportionately large, they can refuse to provide you with a copy. If you insist, then they are entitled to charge you their legitimate costs in making the copy.
The old system may not have been phased out completely - only phased out for new data. In fact, this is typically what happened with the older systems. Data was stored on MO discs, and stored on yards and yards of shelves. Although the data on the discs is in an open and standard format, the discs are an obscure and obsolete format.
When a new system was installed (which, after about 2000, would have been networked with data stored on a large server rather than on individual discs/tapes), it would have been too labour-intensive to convert the format - and indeed, the existing equipment may not have supported it, or if it did, it may have required expensive configuration on both the image acquisition device and on the server side. (Setting up a connection from e.g. a CT scanner to an image server is an expensive process: configuring the server's IP address in the "image destination" config on the scanner is typically a manufacturer service call-out - $4k+; and there must be a matching entry on the server with the scanner's IP address - again, a software-vendor-only setup plus a new image source IP address licence - $5k+.)
So, even though the old system has been decommissioned for new use, the discs may still be available and the workstation still functional, so the discs can be read and the study examined by a doctor who needs it. However, there may be no way to transfer the data to a new format; e.g. the workstations may not have been fitted with a CD writer, just the MO drive.
This means that there is no way for the hospital to get the data off an MO disc and onto a contemporary format (like CD or DVD). The only way to do it would be to acquire an old external SCSI CD writer compatible with the old workstation (which may be something obscure like a SPARCstation or an SGI Indigo2) from a specialist IT supplier - or to acquire an MO drive which can be connected to a modern workstation with a CD writer or network access. (In fact, even that isn't the end of the story: the old equipment may have been Unix/Linux based, which means the MO discs might be formatted in ext2 - an MO drive on a Windows workstation won't help with that.) It is entirely plausible that this is the first request they have had for the data to be migrated to a new format, and the equipment and configuration needed would have been expensive.
Unfortunately, this is a concentrated-light solution, which means that the efficiency figures quoted apply in direct sunlight. However, direct sunlight is only a proportion of the energy available to PV modules, hence the "efficacy", and therefore total energy production, of concentrated solar solutions is worse than that of unconcentrated modules.
The reason is diffuse sunlight - light that has been scattered by the atmosphere or by clouds. This typically accounts for about 10% of module illumination in direct sunlight, and much more in the presence of atmospheric haze/cloud; even in lightly overcast conditions, you can expect unconcentrated PV to yield approx 10-15% of its direct-illumination yield because of the diffuse illuminance.
Diffuse light cannot be concentrated by optics, thus concentrated solar PV modules cannot utilise the diffuse light (more precisely, they can utilise it, but not concentrate it - thus if the system uses a 10:1 concentration, then the energy yield from diffuse illumination falls from 10-15% to 1-1.5%).
A boost from 30 to 33% efficiency by switching to concentrating modules could be completely wiped out by the loss of diffuse yield, even in direct sunlight. In non-direct sunlight, hazy or cloudy conditions, the yield can be reduced much more severely; resulting in a net reduction in productivity, despite the higher nameplate efficiency.
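A quick sanity check of that arithmetic, using the rough figures quoted above (the insolation value and efficiencies are illustrative, not measured values):

```python
insolation = 1000.0        # W/m^2 total illumination, illustrative figure
diffuse_fraction = 0.12    # ~10-15% of illumination is diffuse, even in direct sun
concentration = 10         # 10:1 optical concentration

def yield_per_m2(efficiency, concentrated):
    direct = insolation * (1 - diffuse_fraction) * efficiency
    diffuse = insolation * diffuse_fraction * efficiency
    if concentrated:
        # Diffuse light can't be focused, so a concentrator only collects the
        # diffuse light falling on the small cell aperture: 1/concentration.
        diffuse /= concentration
    return direct + diffuse

flat = yield_per_m2(0.30, concentrated=False)   # 30% flat-plate module
cpv = yield_per_m2(0.33, concentrated=True)     # 33% concentrating module
print(f"flat plate: {flat:.0f} W/m^2, concentrator: {cpv:.0f} W/m^2")
# flat plate: 300 W/m^2, concentrator: 294 W/m^2 - the 3-point efficiency gain
# is already eaten by the lost diffuse yield, and the gap widens under haze/cloud.
```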
This technology is most suited to areas with the most intense direct illumination; e.g. dry areas, at low latitudes (where the role of diffuse light is diminished in proportion).
The original research cites a large number of studies with large numbers of children (hundreds or thousands). One of the major studies cited looks at different "types" of neglect, which they call "global neglect" and "chaotic neglect". These refer to multi-modal and single-modal sensory deprivation respectively; e.g. no exposure to speech, no exposure to physical experiences (for example, not being allowed out of bed), no exposure to cognitive stimuli, etc.
The research showed that for "chaotic neglect" (i.e. one aspect of stimulus missing), brain scans were usually normal, or only slightly abnormal (e.g. brain volume reduced). However, for "global neglect" (multiple aspects of stimulus missing), then nearly half the brain scans were abnormal, showing severely reduced brain volume.
Of course, there are other aspects to neglect, not just sensory and intellectual deprivation; but that was not what the image, or the description in the text was about; this review purely (as far as possible in an observational study) looked at the differences between partial and severe sensory/intellectual deprivation.