Comment This type of law isn't unique. (Score 3, Informative) 119

It's not much different in a number of other countries, notably the UK.

If a crime is committed over your internet connection, you are liable - unless you can provide proof of identity of the perpetrator. For a commercial ISP, this isn't too hard - they can tie a communication to an account, and the name of the account holder is good enough.

If you are offering wi-fi as part of a business (e.g. a coffee shop), then unless you keep some form of record of customer identity that lets you match a communication to a customer, you are on shaky ground. A common business practice is to outsource wi-fi provision to an ISP, where the customer has to provide their account credentials for that ISP, or otherwise provide some evidence of their identity (e.g. valid credit card details or, less invasively, an activation code sent by SMS to a phone number the customer provides).

An alternative, and increasingly common, approach is to heavily filter the wi-fi traffic. It's now routine to see free wi-fi locked down like a corporate network, with all manner of block lists and, more and more, blocked ports. I've come across a few public wi-fi services where only ports 80 and 443 are available and every other port is blocked. Such networks play havoc with smartphones, as the filtering breaks their e-mail, iMessage/FaceTime, etc. connectivity.
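As a rough illustration (the hostname and port list below are placeholders, and a probe like this is only meaningful against a server you control that actually listens on all of these ports), here is the kind of check that reveals such a walled-off network:

```python
import socket

# Hypothetical connectivity probe for a locked-down public wi-fi network.
# HOST must be a server you control that listens on all of these ports;
# the port list (web, mail, Apple push) is illustrative only.
HOST = "probe.example.com"
PORTS = [80, 443, 25, 110, 143, 465, 587, 993, 5223]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if port_open(HOST, port) else "blocked/filtered"
    print(f"TCP {port}: {state}")
```

On the networks described above, only 80 and 443 come back open - and the blocked mail and push-notification ports are exactly why smartphone e-mail and iMessage/FaceTime stop working.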

Comment What happens if there is gross negligence? (Score 3, Interesting) 550

Bugs and security vulns are almost unavoidable - but some are due to gross negligence, and gross negligence should always be open to litigation. To follow on from Microsoft's analogy: suppose a door manufacturer sold a high-security door system (let's assume the door includes the lock and hinges, which isn't normally the case) but had accidentally keyed all the doors to a single grand-master key. If you were burgled because a burglar happened to find out about this grand-master key, then you potentially have a claim.

I don't see why it should be any different in software development. A software vendor needs to bear some responsibility for good programming practice.

Bad software is everywhere; some is so bad that it borders on the grossly negligent.

As an example, I recently reverse engineered an "electronic patient record" system that was installed at a local hospital. This had a number of interesting design features:
1. Password security was via encryption rather than hashing. The encryption was a home-brewed, modified Vigenere cipher.
2. The database connection string was stored in the clear in a conf file in the user's home directory. Interestingly, the database connection used the "sa" user.
3. Presumably for performance reasons, certain database tables (notably "users") would be cached in plaintext to the user's home directory. This way, SQL joins could be avoided and done client-side instead.
4. The software ran an auto-updater that would automatically connect to a specified web site and download and run patches as admin - without any kind of signature verification.
5. All SQL queries were dynamically generated strings - no parameters, prepared statements or stored procedures - and not every user input was properly escaped. Entering a patient name containing an apostrophe would cause very peculiar behavior. In the end, regular memos had to go round to staff telling them under no circumstances to use apostrophes in patient names, and to avoid apostrophes wherever possible in free-text entries. (A sketch of the usual way of handling points 1 and 5 follows below.)
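For contrast, here is a minimal sketch of how points 1 and 5 are conventionally handled - salted password hashing and parameterized queries. The table and column names are invented for illustration; this is not the vendor's code.

```python
import hashlib
import hmac
import os
import sqlite3

# Point 1: store a salted, slow, one-way hash instead of a reversible cipher.
def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

# Point 5: parameterized queries, so apostrophes in names are just data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
name = "O'Brien"  # would break naive string concatenation
conn.execute("INSERT INTO patients (name) VALUES (?)", (name,))
print(conn.execute("SELECT id, name FROM patients WHERE name = ?", (name,)).fetchall())
```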

This is by no means all the security problems this software had, never mind the bugs - e.g. a race condition when synchronising with a second application, which would result in the two components opening different patients' charts.

Amazingly, there weren't any security breaches or significant medical errors as a result of this software - but I can't really conclude that its production was anything other than grossly negligent.

Comment Re:That Poster... (Score 4, Informative) 439

The lead is likely very effective at reducing recorded exposure - probably cutting it by 75-90%. Most of the radiation in a typical fission-product incident is beta radiation, which will be substantially attenuated by 1 mm of lead (the beta particles won't get through, though perhaps 1-2% of their energy may get through as bremsstrahlung X-rays). Gamma rays will also be attenuated, but only by a few percent: high-energy direct photons won't be significantly affected, whereas photons scattered from concrete, etc. will be of much lower energy and so will tend to be heavily attenuated.
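For a rough feel for the gamma figures, here is a back-of-the-envelope using the narrow-beam attenuation law. The attenuation coefficients are assumed, textbook-style values (roughly 1.2/cm for ~660 keV photons in lead, roughly 60/cm at ~100 keV), used only to illustrate the direct-versus-scattered contrast, not measurements from any particular incident.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

LEAD_THICKNESS_CM = 0.1  # 1 mm of lead

# Assumed illustrative coefficients for lead; real shielding work also needs
# build-up factors and the actual photon spectrum.
direct = transmitted_fraction(1.2, LEAD_THICKNESS_CM)      # ~660 keV direct gammas
scattered = transmitted_fraction(60.0, LEAD_THICKNESS_CM)  # ~100 keV scattered photons

print(f"Direct ~660 keV photons attenuated by about {1 - direct:.0%}")       # ~11%
print(f"Scattered ~100 keV photons attenuated by about {1 - scattered:.1%}")  # essentially all
```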

There are plenty of radiation suits that offer 0.1 or 0.2 mm lead-equivalent protection (they don't usually contain lead, for environmental reasons; bismuth is usually used instead). These are quite useful for protection against beta radiation, even if they do nothing for gamma. However, the sheer weight of even a 0.2 mm lead-equivalent suit makes it only barely practical (though I understand the US military have bought a lot of them).

However, lead boots are a sensible precaution - most of the radioactive contamination in a Fukushima-type incident is in the form of water-soluble or suspended particles, which pool in puddles on the floor. Severe radiation injury to the feet from beta emitters is possible - 1 mm lead-equivalent rubber boots are tolerable to wear and would offer substantial protection to the feet.

Comment Re:Remember the Kernel Backdoor (Score 3, Interesting) 194

I don't think Gibson found a kernel backdoor.

He did shout very loudly about a supposedly intentional backdoor in the Windows Metafile image handler, which would start executing native code when a callback command was included in the metafile. He made a large number of spurious arguments as to why this was clearly intentional, claiming that the vuln could only be triggered in very exceptional circumstances.

He was completely wrong about almost everything he said. The vuln was trivial to trigger, except when it was the last instruction in the script (which was the only way Gibson was testing). From the fact that he had great difficulty triggering it, requiring multiple parameters to be set to nonsense values, he concluded that this was clearly a deliberate backdoor.

It later came out from a number of MS insiders (incl. Mark Russinovich) that metafiles were a feature of Windows 3.x and were intended to be fully-trusted OS components for rapid image drawing, and therefore had privileged access to a variety of internal system calls - notably the ability to set callbacks. The functionality was greatly expanded in Win95 and later, with the original hand-written x86 assembly being ported directly rather than rewritten. In the mists of time, the assumption of full trust got lost.

Comment Why would you want to do this? (Score 1) 257

In most areas where condos exist, the commercial ISPs will offer adequate DSL or cable services. This way, individual residents can make a decision about what service they want, and purchase the DSL or cable service that suits them.

Simply lay down a few ground rules for condo residents: no externally mounted dishes, no new visible cabling in communal areas, etc.

The condo association may be willing to assist by installing ducting - this way, residents can choose fiber, cable, DSL, etc., and it can all go through existing ducting which only needs to be paid for once and won't require new building work when new technology X comes along in 5 years' time. In fact, probably the most sensible thing for a CA to do would be to install some cable ducts running from the basement cable entry point along corridors to individual apartments. That way, when a resident wants new cabling installed, all the contractor has to do is pull the cable through the ducts and the job is done. FTTP providers may even wish to fill up your ducts with sub-ducts. This is the maximum level of involvement that I would suggest.

By having the CA do the duct work, you keep control of the quality of workmanship. A particular problem with high-rise blocks is fire codes: often they require any hole into an individual apartment to be fully firestopped. This is not a job you should trust to individual network operators and their low-cost installers. Make a decision to install building-wide ducting, and get it properly installed, firestopped and certified.

Running an ISP is a difficult business, on both technical and customer service grounds. The network design is difficult and needs to be done by an expert. Further, how much time are you budgeting for fielding technical support queries, billing, DMCA requests, etc.? If you get a DMCA request which identifies your IP (or one of your IPs), how are you going to forward it to the alleged offender (you've got DHCP server logs, haven't you?)? If you can't forward it, because you don't know who it was, will you face criminal penalties yourself? You might not now, but laws change (in the UK, if you resell an internet service, and a criminal act is committed via it, and you don't keep information allowing you to link an identifiable person to the particular communication, YOU are personally liable).
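As a sketch of what honouring such a request actually involves, the following assumes a hypothetical CSV-style lease log (timestamp, ip, mac, lease_seconds); real DHCP servers log in their own formats and would need their own parsers.

```python
import csv
from datetime import datetime, timedelta
from typing import Optional

LOG_PATH = "dhcp_leases.csv"  # hypothetical log: timestamp,ip,mac,lease_seconds

def who_had_ip(ip: str, when: datetime, log_path: str = LOG_PATH) -> Optional[str]:
    """Return the MAC address that held `ip` at time `when`, if the log shows one."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            start = datetime.fromisoformat(row["timestamp"])
            end = start + timedelta(seconds=int(row["lease_seconds"]))
            if row["ip"] == ip and start <= when <= end:
                return row["mac"]
    return None

# Example: which device held 10.0.0.42 at the time named in the DMCA notice?
print(who_had_ip("10.0.0.42", datetime(2012, 6, 1, 23, 15)))
```

Even then, this only maps an IP to a MAC address at a given time, not to an identifiable person - which is precisely the record-keeping burden described above.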

Similarly, if running a communal ISP, how do the costs work if residents choose not to participate? HSPA+ and LTE dongles are on the market and, where I live, they are killing wired internet and WiFi. I now know many people whose only internet access is a smartphone/tablet with 3G - and that's all they use at home.

Yes, you may be able to get a better service for less money if all 80 residents participate. But what if only 40 participate? What happens when you start getting into legal problems (whether legitimate or not)?

Comment Brief description of what this crack entails (Score 5, Interesting) 270

FPGAs commonly protect user code with encryption: an encryption engine is included in the silicon, and the user gets limited access to the crypto keys with which to encrypt the code that is installed in ROM/flash.

A number of attacks are known against microcontrollers/FPGAs that secure code with encryption - notably differential power analysis (DPA), which works by connecting a current probe to the chip and collecting measurements of energy consumption as the device performs an authentication operation. By carefully measuring power traces over thousands of authentication operations, statistical analysis can reveal clues about the internal secret keys, potentially allowing recovery of the key within a useful period of time (minutes to hours).
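For a flavour of the statistics involved, here is a toy correlation-based sketch of the idea (a DPA variant usually called CPA), using synthetic traces, a Hamming-weight leakage model and a single key byte. A real attack works on measured traces and targets a real cipher's S-box output; everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weight of each byte value

# Synthetic device: leaks HW(plaintext_byte XOR key_byte) plus Gaussian noise.
true_key = 0x3C
n_traces = 2000
plaintexts = rng.integers(0, 256, n_traces)
traces = HW[plaintexts ^ true_key] + rng.normal(0, 2.0, n_traces)

# For each key guess, correlate the predicted leakage with the measured traces;
# the correct guess shows the strongest correlation.
scores = np.empty(256)
for guess in range(256):
    predicted = HW[plaintexts ^ guess]
    scores[guess] = abs(np.corrcoef(predicted, traces)[0, 1])

print(f"Best guess: 0x{int(np.argmax(scores)):02X} (true key: 0x{true_key:02X})")
```

The countermeasures described below are aimed at exactly this kind of analysis: they shrink the leakage signal and inflate the noise until the number of traces needed becomes impractical.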

These secure FPGAs contain a heavily obfuscated hardware crypto-engine, with lots of techniques to obstruct DPA (deliberately unstable clocks, heavy on-chip RC power filtering, random delay stages in the pipeline, multiple "dummy" circuits so that an operation which would normally require fewer transistors than an alternative, has its transistor count increased, etc.). The idea being that these countermeasures reduce the DPA signal and increase the amount of noise, making recovery of useful statistics impractical. In their papers, this group admit that the PA3 FPGAs are completely impervious to DPA, with no statistical clues obtained even after weeks of testing.

This group have developed a new, much more sensitive technique which they call PEA. It involves extracting the FPGA die and mapping the circuits on it - e.g. using high-resolution infra-red thermography during device operation to identify "interesting" parts of the die by the heat they produce under certain tasks - e.g. caches, crypto pipelines, etc. Having identified interesting areas of the die, an infra-red microscope with a photon counter is focused on the relevant circuit area. As it happens, transistors glow when switched, emitting approximately 0.001 photons per switching operation. The signal from the photon counter is therefore analogous to the DPA signal, but with a much, much stronger signal-to-noise ratio, allowing statistical analysis with far fewer tries. The group claim the ability to extract the keys from such a secure FPGA with a few minutes of probing with authentication requests.

The researchers claim to have found the backdoor by fuzzing the debug/programming interface and finding an undocumented command that appeared to trigger a cryptographic authentication. By using their PEA technique against this command, they were able to extract the authentication key, open the backdoor, and directly manipulate protected parameters of the chip.

Comment Re:Bad administration is a major problem with this (Score 1) 290

You assume that NTLMv2 or Kerberos is the default authentication method. The workstations ran, and still run to this day, XP SP1, as that is the most recent OS supported by the vendor of the software.

XP SP1 uses NTLMv1 as the default authentication method, which does not make use of the time during authentication.

Comment Bad administration is a major problem with this (Score 4, Informative) 290

This is often a case of poor administration, perhaps more frequently than poor design.

For example, I was recently tasked with reviewing the performance of several hospitals in the diagnosis and treatment of stroke. Under national guidelines (UK) a patient with suspected stroke must have had a CT scan within 30 minutes of arrival at hospital, with blood-thinning treatment administered within 60 minutes (if appropriate).

The problem was that the times on the CT scanners were discrepant from true time by +/- 45 minutes - so the images were tagged with the incorrect time. Further, the CT viewing workstations had times up to 2 hours out. The CT scanners ran Windows or Gentoo, depending on the manufacturer's preference; the CT workstations all ran Windows and were bound to the hospital domain.

The time discrepancies made my assessment very difficult - and I had to correct for each individual scanner, and assume that the clocks hadn't drifted over the 6 month period of the audit.

I also found several safety issues because of this - e.g. if a patient had a CT scan at 1 am, some workstations would be 2 hours slow and so would read 11 pm on the previous day. These workstations would refuse to load the CT scan, because the studies were filtered by "WHERE [StudyTime] <= NOW" and the scan appeared to lie in the future.

I raised a support issue with the workstation vendor, who simply said: "These are Windows workstations. You should ensure that they are appropriately bound to your domain, and configured to sync with your time server or domain controller." So I called IT to configure this, and got: "No way. These are medical devices, we can't change the configuration - and anyway, what will happen if the clock is fast and the sync pushes the clock back, so that there are two occurrences of the same time? That would cause chaos. Even if the manufacturer supports it, there's no way we'll set it up." Of course, that concern doesn't actually exist, because most time sync algorithms (even on Windows) are clever enough to avoid "double time".
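A toy numerical sketch of that last point: rather than stepping a fast clock backwards, sync daemons typically slew it, running the local clock slightly slow until the error is gone, so no timestamp ever occurs twice. The offset and slew rate below are invented for illustration; w32time and ntpd have their own thresholds and correction rates.

```python
# Toy clock-slew illustration: a clock 90 seconds fast is corrected by running
# 1% slow, so local time never moves backwards and no timestamp repeats.
offset = 90.0      # seconds fast relative to the reference clock
slew_rate = 0.01   # run the local clock 1% slower than real time
elapsed = 0.0

while offset > 0.01:
    offset -= slew_rate * 1.0  # each real second removes 10 ms of error
    elapsed += 1.0

print(f"Error removed after roughly {elapsed / 3600:.1f} hours, without stepping back")
```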

There was similar obstruction with the CT scanners. The vendors simply said: we support and encourage synchronisation with a time server. IT and the radiology administrators simply stonewalled the idea. They refused even to correct the clocks on the scanners - so the clocks are still wrong to this day (even more so, due to accumulated drift).

Of course, even if the time can be set right, there is disagreement over how daylight saving is managed. Some equipment, especially older embedded kit, isn't daylight-saving aware. Do you set it to summer time or winter time? In most hospitals I've been in, it's been an inconsistent mixture - often with lots of clock drift added, so you can't actually be sure.

Comment Re:Let go? (Score 1) 141

One of the difficulties is that priorities in government sector procurement are often biased towards senior management and what is seen to be politically good, rather than usability or manageability.

The difficulty with the govt tender process is that some vendors are unfamiliar with it and don't give the best answers to the questions asked in the initial tender documents.

E.g. I've just been involved with the procurement of a PACS system (digital X-ray archive), and a lot of the vendors simply scored 0 on a large number of points when they returned their responses to the original specification document.

For example (these are not verbatim examples, but fictional examples which I believe accurately depict the problem):
Tender question: Describe how the software ensures compliance with the Data Protection Act (DPA).

Typical bad answer: The software is compliant with the DPA.
(This is a totally meaningless answer - as a result the vendor scores 0 on this specification point).

Typical good answer: The software has features that assist the hospital in meeting the following aspects of the legislation: control of access, control of retention, prevention of disclosure, and assisting staff in the preparation of subject access requests.
Control of access: The software provides for password, certificate, hardware token or Active Directory authentication. There is a role-based permissions system of arbitrary complexity - for example, a nurse's login could be restricted to patients on their ward only. Permissions can be controlled at a role or user level, and can provide access control on any image, case-record metadata (including custom fields) or metadata available from a connected information system.

Control of retention: Data can be destroyed automatically when no longer needed. The period can be configured by the local administrator according to local policy. A rules engine is included which permits granular control of retention based on, for example, patient age (children's examinations can be kept until adulthood instead of for a fixed data age), type of examination (e.g. research studies may need longer retention), manual flags, any image metadata, or metadata from a connected information system.

Prevention of disclosure: All data stores are encrypted with 256-bit AES. Data transmission over the LAN or public networks is encrypted using TLS 1.1 with 256-bit AES. If data caching on client machines is permitted by the administrator and local policy, the cached data is encrypted using 256-bit AES. All system accesses are logged in an audit trail. Powerful analysis tools, including a rules engine, are provided to allow investigation of suspected abuse. If the system administrator permits images to be saved to teaching files/PowerPoint documents/etc., image metadata containing patient identifiers will be removed automatically. If the images contain patient identifiers in the pixel data, the images will be redacted automatically (subject to the availability of appropriate metadata in the original image files).

Subject access: The system can provide a full subject access report for both patients and users (staff). The report will include all data, including audit trails, together with a summary (the staff report will have patient data redacted automatically), and can be exported to optical disc or hard drive in a single operation.

With an answer like that, it has to score 10/10.

The problem is that most of the software vendors are not very good at understanding the questions, particularly where they relate to legislation. The big winners here tend to be the big contractors, often infamous in the national press for supplying poor quality solutions. They "get" what the questions are asking, so they score big - and this often makes up for less-than-stellar performance in the technical and usability sections of the scoring.

Comment Re:but... (Score 1) 141

They just put an old OS on. At the hospital I work at, there are a number of critical applications (like parts of the electronic patient record, and other custom-made apps) which only work on IE6.

That means the brand new workstations we took delivery of last month (dual quad-core Xeons, 4 GB RAM, FireGL Pro cards) have all been loaded with XP 32-bit SP1, in order to get IE6 and to avoid some features of SP2 which break a number of other apps.

To be honest, it's a miracle that they're stable, as I can't believe the drivers for the graphics cards, etc. are fully supported on this OS.

Comment Re:20 years later... (Score 1) 157

It's true that SMS "just works" now. But that is only a recent development.

Five or six years ago, SMS across a national boundary was a lottery. It might work, it might not, and you'd have no way of knowing when, or even if, your message was delivered. This was especially the case with the US, where SMS was particularly unreliable.

Even SMS between individual networks within a country wasn't always as reliable as you might have expected.

Comment Re:Like to see them in smaller sizes (Score 1) 529

The type of lamp you want exists, and is very widely used commercially for shop lighting: ceramic metal halide. A 20 W lamp produces about 2200 lumens from a very compact (3-4 mm) point source. When used with a reflector, they make excellent accent lighting, and there are plenty of commercial products that install them on tracks, etc. They have excellent color rendering (far better than the best LEDs and fluorescents) and are available in a variety of color temperatures. The lifetime is also long - typically 15,000-20,000 hours. They are even dimmable, although the dimming range is limited and the color shifts as the lamp dims.

The efficiency of ceramic metal halide - around 110 lumens per watt for the 20 W lamp above - is unmatched by any other commercially available technology (except for sodium lighting).

The problem is the price, which is enormous - partly because these are targeted at commercial use, where replacement labor and energy/cooling costs dominate, but also because it is inherently an expensive technology - not just in terms of the bulb, but also the ballast needed to power it. There are other problems too, such as a long warm-up time (around 60 seconds), the first 15-20 seconds of which produce virtually no light output, and a very long cool-down time (5-10 minutes) during which the lamp cannot be restarted.

Nevertheless, CMH is pretty much the leading lighting technology in high-end retail, for the above reasons. I picked up an old shop CMH unit off eBay and have it at home - and it is stunning: brilliant brightness, a tightly focused beam, very high color quality, flicker-free. Thankfully, replacement bulbs are available very cheaply on eBay - there's no way I'd pay the $50+ retail price for the bulbs.

Comment Re:Yes (Score 1) 138

Quite. A lot of our "medical devices" are actually software programs running on PCs. Many of them require a specific environment to run.

I can think of one package that will only run on Windows XP 32-bit (no service pack) and Java 1.4. It simply won't run on anything more recent (no idea why), the developer of this (very expensive) package has gone bust, and the product is no longer supported (but the finance department budgeted on a 10-year usable lifespan, so it's not getting replaced for 10 years following installation).

I've no idea of the total number of vulnerabilities in the combination of unpatched XP and Java 1.4 - but I suspect the number is substantial.
