Comment Transactional DDL (Score 2) 244

PostgreSQL supports transactional DDL statements (e.g. ALTER TABLE, CREATE TABLE inside a transaction). MySQL doesn't. If you run a poorly-written upgrade script against a MySQL database and something goes sideways, your only recourse is to restore from backups. For large or mission-critical deployments, that usually isn't an option, so any sane MySQL upgrade plan involves testing the upgrade on a replica of the database first.
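The difference is easy to demonstrate without standing up a server. Here's a minimal sketch using Python's sqlite3 module, since SQLite, like PostgreSQL, wraps DDL in transactions; the same BEGIN/ROLLBACK sequence against MySQL would leave the half-applied schema behind:

```python
import sqlite3

# SQLite, like PostgreSQL, supports transactional DDL.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we issue BEGIN ourselves
cur = conn.cursor()

cur.execute("BEGIN")
cur.execute("CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("ALTER TABLE pets ADD COLUMN species TEXT")
cur.execute("ROLLBACK")  # upgrade script went sideways: undo ALL of the DDL

tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # -> [] : no half-applied schema, no restore from backup
```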

If all you're looking for is a cheap DB to serve a Wordpress blog about your hamster, MySQL away. Otherwise, use PostgreSQL. You'll sleep better.

Comment Re:My sister is a nurse (Score 1) 232

Good certified professional coders working in a non-HMO setting can make north of $25 an hour if they are productive. They are also very difficult to find. Since the job that coders do determines whether or not your facility gets paid for a $20K procedure, most places tend to err on the side of caution and spend the extra cash on a good one.

The reason that Kaiser doesn't pay as much for their coders is because the only time Kaiser submits claims to an external entity is for certain subsets of Medicare patients. Otherwise, they are both the insurance company and the provider, which means that they don't have to spend lots of time and money convincing an insurance company to cough up the requisite payment. They will occasionally outsource care to a third party lab or surgery center, but that's the exception.

Comment Re:How Much? (Score 1) 232

Depends on who is calculating the costs.

Insurance companies profit every time you need healthcare and they can find a plausible reason to ration it or deny it completely. Most insurance companies now do pre-authorization for services based on the diagnosis codes. By making diagnosis coding more granular, they gain more opportunities to save money, either prior to dispensing care or by denying your claim after the fact with "You had diagnosis X, which our contract with your provider does not cover for procedure Y." Conservatives refer to this process as a death panel. Liberals refer to it as "bending the cost curve", which has a more pleasant ring to it, even if the net effect is that you don't get treatment in a timely manner, if at all. Please select your preferred side of the political spectrum and rage accordingly.

Providers' costs will probably be a wash. Most providers already have a Certified Professional Coder on staff, or they contract the work out to a third-party coding company. Strangely, these people refer to themselves as Coders, which tends to confuse the hell out of the IT staff. This individual is tasked with reading the pre- and post-procedure diagnosis and procedure report, and assigning a set of CPT and ICD codes to the encounter. These are, in theory, used for clinical purposes, but the main reason procedure and diagnosis codes exist is that they form the terms of the contract between payer and provider.

This coding process tends to have the same precision as a group of witches divining the future from cat entrails, because of a wonderful little concept called "unbundling". Let's say that an anesthesiologist is performing a spinal block in which they inject narcotics and steroids into multiple levels in your back. Some combinations of spinal blocks are issued their own procedure codes, whereas other spinal blocks can be billed for separately. If a coder fails to group the separate procedure codes into a proper bundle, or fails to justify the code with the correct diagnosis, CMS and an entire army of subpoena-wielding acronyms will crawl up their butt screaming "FRAUD!!!", because frequently the total of several individual codes will pay much more than a single bundled code.

With a more granular coding system, the coder may need to spend a bit more time per report finding the right level of detail, but in theory the number of unpaid visits and claim denials will go down, because they and the insurance company now have a more accurate agreement on what diagnosis will justify the treatment they are providing. ICD-10 covers the diagnosis portion of the coding system, but rest assured, the AMA will soon be confusing the hell out of us with an equally arcane set of new procedure codes.
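For the curious, the mechanical half of bundling looks something like this. The codes and the bundle table below are entirely made up for illustration; real CPT codes and CMS bundling edits are far messier:

```python
# Hypothetical codes only - real CPT codes and bundling rules are far messier.
BUNDLES = {
    frozenset({"11111", "22222"}): "33333",  # two component procedures -> one bundled code
}

def bundle(codes):
    """Collapse any group of individually-billed codes covered by a bundle
    into the single bundled code, as a payer's claim edit would demand."""
    remaining = set(codes)
    for parts, bundled_code in BUNDLES.items():
        if parts <= remaining:          # all components present: must bill the bundle
            remaining -= parts
            remaining.add(bundled_code)
    return sorted(remaining)

print(bundle(["11111", "22222", "44444"]))  # -> ['33333', '44444']
```

The hard (and audit-prone) part is everything this sketch leaves out: deciding which codes apply in the first place, and justifying each one with a diagnosis.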

Incidentally, the procedure codes can and do create some perverse incentives in healthcare. For example, if an orthopedic surgeon is performing a knee procedure with an MCL repair and doing it arthroscopically, she is actually paid MORE if she makes two incisions than if she makes one. Reason? There is a single code for "knee arthroscopy with MCL repair", and separate codes for the arthroscopy and the MCL repair as their own procedures. Medically, it's safer to do both procedures through a single incision because of lowered infection risk and less bleeding, but the difference between the bundled code and the unbundled codes is that second incision. So the coding system actually creates a financial incentive to perform a second incision. This is especially true if the procedure is done in an outpatient facility that the surgeon happens to have an ownership stake in.

Universities, research companies, EMR vendors, and biotech firms using big data analysis to perform studies will consider this granularity a godsend, because they won't have to squint as hard at the data to tease things out. Whether you, the patient, want your record to be teased without your knowledge or consent is a debate for the future, but the medical benefits of being able to data-mine everyone's record will likely far outweigh the privacy concerns. Without a centralized way of accessing medical records in an anonymized manner, this will not happen anytime soon.

EMR firms will also be big winners. As of today, electronic medical records are not directly processed by the government, because there is no federal standard for medical record interchange. Some states, notably Massachusetts with their HiWAY program, have either built or are planning to build a statewide medical record exchange, so that each EMR vendor will only have to deal with one format in and out, and the exchange will handle the details of transferring records between providers (e.g. you have a lab test done that your surgeon needs to look at, but both are separate business entities). Although it isn't on the legislative roadmap yet, sooner or later Congress critters will get it through their Kevlar-plated skulls that this might be a good thing for the feds to subsidize and/or standardize, so that EMR vendors have one standard to deal with, rather than 50. Having a reasonably granular set of codes means that describing an encounter will be easier to do in a standardized manner, which in turn should lead to more portability. Of course, with every seismic shift in healthcare IT comes a slightly delayed tsunami of billable hours from IT consultants "helping" you navigate the transition, so providers will lose as much as EMR vendors will win, if not more in terms of productivity losses associated with reimplementations.

Patients, as always, will lose money, because the gap between what the provider charges and what the insurance company is willing to pay will likely increase.

Comment No mention of Sonatype's business? (Score 5, Informative) 130

It should be noted that the company releasing this report, Sonatype, markets a product called Insight Application Health Check that scans your binaries for libraries with known vulnerabilities.

I have never used their service, and can offer no comments on its utility or value. However, it is a bit unseemly that TFA doesn't mention that the source of their information about this very real problem also sells a service that solves it. This is a knock on IT World, not Sonatype.

Comment Annoying, but workable (Score 1) 278

What the OP wants is perfectly possible. I'm typing this on an Ubuntu 12.04 box running the most recent Catalyst driver, and connected to three 1920x1080 monitors. Two are DVI, one is via a DisplayPort->DVI adapter. Video card is an older Radeon 6950. It works, more or less without issue, for what I do: coding in Eclipse, browsing the Internet, etc.

Using the open-source driver works for triple monitors, but its power management is not up to snuff, and the fan on the video card gets annoyingly loud after a few minutes. This is the only reason I run the closed-source driver. Strangely, video playback is smoother with the open-source driver in the triple-monitor scenario.

Contrary to popular myths, you do not have to edit a config file for either closed or open-source drivers to enable magical triple monitor goodness. Both were able to detect and orient the monitors using either the Ubuntu monitor control panel or the Catalyst Control Center.
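If you do want to script the layout instead of clicking through a control panel (say, to restore it after a driver update), xrandr can set it in one shot. Here's a small sketch that just builds the command; the output names are assumptions, so check yours with `xrandr --query` first:

```python
import shlex

# Assumed output names - run `xrandr --query` to see what your card reports.
monitors = ["DVI-0", "DVI-1", "DisplayPort-0"]

# One xrandr invocation placing three 1920x1080 monitors side by side, left to right.
args = ["xrandr"]
for i, output in enumerate(monitors):
    args += ["--output", output, "--mode", "1920x1080"]
    if i > 0:
        args += ["--right-of", monitors[i - 1]]

cmd = " ".join(shlex.quote(a) for a in args)
print(cmd)
```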

Things that don't work as well: video playback and 3D games. Video will get choppy full-screen if tear-free mode is enabled, and the tearing is intolerable when it's not. Likewise, performance for 3D games across 5760x1080 is iffy. I have a laptop for gaming and an HTPC for the video stuff, so it's not a deal-breaker for me. The OP did not specify what kind of engineering he/she does (circuit design? CAD? software?), so the 3D performance may well be an issue depending on the tools being used.

I have tried the Nvidia route several times, but always came away frustrated. AMD cards Just Worked for this application. Google 'Linus Torvalds middle finger' for a more complete technical discussion of why this is.

Getting a reliable triple-monitor setup on Windows or Mac is much easier than on Linux, but most of that experience can be chalked up to X. In theory, Wayland or Mir will handle this much better, but no stable distro uses them by default, and none of the high-level toolkits have mature support for them.

Comment Re:Um, wrong cause for the effect. (Score 3, Insightful) 530

Picture a desert island with two people. At first they both work all day long to survive. Later, they improve their lot, to where they each only have to work half the time to survive. The other half can be spent loafing, or working to get more comfortable. Is one of them entitled to relax and do nothing while the other needs to work all day long to support them? Of course not. Each person has the option of working full time to improve their position, part time to simply survive, or they may die. They aren't owed anything.

Your analogy is missing a third party: the absentee owner of the island. A more accurate analogy would be that, having developed a more efficient means of harvesting coconuts, one of the two island inhabitants receives a slightly larger number of coconuts than before, while the second fellow's previous coconut wages were instead diverted to the island owner's offshore pina colada factory, leaving the second fellow to eke out a decidedly calorie-free lifestyle.

This is, in the island owner's view, the proper order of things: he paid the fellow to develop a more efficient coconut harvesting strategy, and thus is entitled to a nice drink at the end of the day.

This is, in the first fellow's view, also the proper order of things: he developed the improved technique, and thus is entitled to a few extra coconuts.

In the second fellow's view, any discussion about the abstract problems of coconut division in an isolated island economy is pointless academic frippery because he is, at this point, starving to death on a fucking desert island.

Sooner or later, productivity gains will land us in a scenario where there isn't enough work to go around, and the jobs that do remain will require so much technical expertise as to render them unattainable for most people. For the remaining majority, the question is: what the fuck are we going to do in order to earn our daily coconuts?

Comment Re:PDF is fine (Score 1) 221

PDF is about the only format where you can mix a lot of elements, get it to look like you meant it to

Ah, but therein lies the problem...if you meant for the document to be printed on an 8.5x11" piece of paper, and I instead try and read it on my 7" Nook (rooted and running CyanogenMod, of course...), then we have a problem. Namely, that I need to zoom in and scroll around, when the text should just reflow.

Comment Re:You are right, and wrong (Score 1) 728

... away with the Costco truck containing said pallet of 65" TVs while the driver is out in front gorging himself on cheap pizza and $2 hot dogs. And make sure you leave your gun at home.

Plus, if someone tries to mug you in the parking lot, you now have a truck with which to run them over. It's a win-win for everybody, except the mugger. And Costco, obviously.

Comment Re:Dear aunt, (Score 2, Informative) 221

13 years ago, when I entered the medical transcription industry, the fellow who sold us our dictation system told me that he was a dead man walking: voice recognition was going to KILL the transcription industry, and he almost felt guilty selling us the system. When we mentioned we had looked at Dragon, he practically cried. 13 years later, that salesman is now deceased, and the transcription industry is larger than ever. Voice recognition in transcription is like Linux on the desktop: every year, articles pop up saying that THIS year will be the year medical transcription dies at the merciless hands of voice recognition.

For a guy in an industry that Netcraft has confirmed is deader than FreeBSD, I'm doing pretty well.

I now own a medical services company that does transcription, so my opinion is certainly biased here, but I fail to see the economic logic in turning a physician, who makes between $120K and $250K per year, into a clerical worker editing his own files. Especially when said clerical worker can be seated in India. Time is money, and the time of physicians and surgeons is one of the most expensive line items on your medical bill. Even with transcription prices as they are today, tacking 20 minutes of extra editing time onto a doctor's already long work day means that I can do it cheaper with manual labor. Voice recognition just means that I need one MT and a voice recognizer instead of one transcriptionist and a QA person.
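The back-of-the-envelope arithmetic, using the salary range and the 20-minute figure from above; the work year and the transcriptionist rate are my assumptions for illustration:

```python
# Salary range and editing time come from the post; the work year and the
# $25/hour transcriptionist rate are assumptions for illustration.
physician_salary = 180_000              # mid-range of $120K-250K
hours_per_year = 2_500                  # assumed: a long physician work year
physician_hourly = physician_salary / hours_per_year        # $72/hour

editing_minutes = 20                    # extra editing time per day
doctor_editing_cost = physician_hourly * editing_minutes / 60
mt_editing_cost = 25 * editing_minutes / 60                 # assumed MT rate

print(f"physician edits own notes: ${doctor_editing_cost:.2f}/day")   # $24.00/day
print(f"transcriptionist does it:  ${mt_editing_cost:.2f}/day")       # $8.33/day
```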

Internally, we use a batch speech recognizer based on Sphinx, as the Dragon source is too expensive to license at the volumes that we do. As one of the earlier posters said, the code is the easy part; it's generating the speech corpus that's the really expensive part. Developing ours was easily a seven-figure outlay in labor, which is why you don't see any usable medical speech corpora available for free* on the Internet. You'd think with all the federal money being thrown at making medical records electronic that they could spare a few million to develop an open-source speech corpus, but that would make too much sense.

As long as physicians and surgeons are better paid than the rest of us, someone will be doing transcription.


* If you know of one, post a link...believe me, we've looked.

Comment Linux users...screwed again (Score 5, Informative) 138

According to ATI, support for Eyefinity on Linux will be enabled by a 'future Catalyst release'. Three releases of the Catalyst driver have come and gone since I got my Radeon in February, and they still have zero support for Eyefinity on Linux. Which is irritating as hell, because the famed YouTube demo of Eyefinity running a flight sim on 24 screens was a Linux box.

Some days, it really sucks to be a Linux zealot. This is one of them.

Comment Re:Here we go again! (Score 0, Troll) 169

Sky+ has no issue with recording encrypted content (except for one-off pay-per-view purchases). It simply records the encrypted stream straight to the HDD and you can watch it whenever you want.

Your rights in the UK are to record something, watch it once, then delete/destroy it. This has been established since the VHS days. Services like Sky+ actually give you more rights than you legally have.

Comment Re:Ideology meet reality (Score 3, Insightful) 675

Sorry, back here in reality Theora's quality is at least on par with H.264 at the same file size. But thanks for your attempt at FUD, though.

The site hosting that comparison isn't exactly an unbiased source - maybe you could cite someone who doesn't have a vested interest in one codec or the other. If you actually look at the videos on the site, you'll see that Theora performs a fair bit better than H.263 (not surprising; most things do these days), but watching the 17MB files next to each other it's immediately apparent which is which. The colours in the Theora version are washed out and the details are fuzzy.

Now, if Flash would add support for Theora, then GooTube could easily ditch the H.263 versions and serve both Theora and H.264, rather than H.263 and H.264...

Comment Re:HTML5 allows multiple codecs to be specified (Score 1) 675

Having to deal with multiple video formats means either increased storage requirements or increased processing requirements. I believe the reason for trying to standardise on a limited selection of video formats is the same as for limiting the number of image formats officially supported by web pages: ensuring the content is viewable everywhere. If the specification said "do what you want", we would see half a dozen different formats, browsers supporting some of them, and users caught in the cross-fire.

Until an Ogg encoder/decoder is made available for things like Adobe Premiere, Final Cut, QuickTime and Windows Media Player, under a BSD-style license and with a focus on quality at a given bit rate, we aren't going to see widespread adoption.

While Ogg might be fine technically, it is not packaged as a solution suitable for commercial products. At the same time, the MP4/H.264 licensing means it is not suitable for open source. We have a clash of cultures, with each side standing in its ivory tower, unwilling to come down to Earth.
