Comment: Re:Epic systems is a load of crap. (Score 1) 228

by GPSguy (#42588455) Attached to: Health Care Providers Failing To Adopt e-Records, Says RAND

I recall reading sometime in the last 2-3 years, and my wife has often told me, that a solo practitioner has to see a minimum of 48 patients per day in clinic just to pay the bills. A few years ago I suspect that number included some salary for the practitioner; today I'd bet the patient count is higher and the practitioner's take smaller. My friends are leaving private practice in droves for hospital or large-clinic practices. That's how they can still earn a living.

Comment: Re:Why the switch? (Score 1) 228

by GPSguy (#42588443) Attached to: Health Care Providers Failing To Adopt e-Records, Says RAND

HIPAA was envisioned to protect you, the consumer, from data mining, especially by insurers who wanted to use those data for rate adjustments and denials. Or so the theory went. What HIPAA became was a behemoth whose implementation plan made data sharing well-nigh impossible, with costs to the health care provider, clinic and patient that were never anticipated.

I'll posit that a _GOOD_ implementation of an EMR, with a valid and robust data-exchange plan, and one that accounts for the human-factors needs of the physician, nurse, advanced practitioner, specialist, physical therapist and pharmacist, might reduce costs and provider errors. But of the ones I've seen, and from the colleagues I've talked to, it just doesn't exist yet.

Comment: Re:I think part of it (Score 1) 228

by GPSguy (#42588283) Attached to: Health Care Providers Failing To Adopt e-Records, Says RAND

Post-Sandy, NYU Medical Center recounted the problems caused by losing access to their EHR when their systems went down. Bad enough while patients were still in their own hospital, but very serious once patients were transferred elsewhere. The story is that staff physicians, nurses and residents traveled with patient cohorts to the receiving hospitals and served as verbal medical records to get their patients settled as best they could.

Well-crafted database and server replication might help in a scenario like this, but so much of the infrastructure in NYC was broken that I doubt it would have.

This is an IT problem, but it extends beyond that simple statement. It requires human-factors work so that medical personnel can use the system readily. It requires that common situations be handled (e.g., in obstetrics it should calculate the EDC, the projected due date, from the LMP). I'll accept having separate adult, pediatric and neonate modules to help with dose calculations; that's not too bad, and almost everyone's smartphone can do those calculations close to automagically now. It needs customizable checklists for common procedures, AND the ability to step outside the checklist for issues and complications. It needs a good problem-list generator, plus a tracking system so that a repeat visit recognizes a problem-list entry and brings it up at the next visit... or triggers a phone call home sooner if need be.
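Just to make the "common situations" point concrete, here's a minimal sketch of the two calculations I mean: a due date via Naegele's rule and a simple weight-based pediatric dose. All the numbers in the example are hypothetical, and this is obviously not dosing guidance:

from datetime import date, timedelta

def edc_from_lmp(lmp):
    # Naegele's rule: EDC = first day of last menstrual period + 280 days (40 weeks)
    return lmp + timedelta(days=280)

def weight_based_dose_mg(weight_kg, mg_per_kg, adult_max_mg):
    # Simple per-kg pediatric dose, capped at the adult maximum
    return min(weight_kg * mg_per_kg, adult_max_mg)

print(edc_from_lmp(date(2013, 1, 15)))           # 2013-10-22
print(weight_based_dose_mg(18.0, 15.0, 1000.0))  # 270.0 (hypothetical drug numbers)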

And did I mention it needs a data exchange format that really works? Recent experience: I had to see someone in a new city for care. My primary care physician's clinic (using a large EMR system they're abandoning in favor of Epic) printed and faxed the whole chart to the doctor's office in the other city. And when I asked that doctor to send things back to my PCP? Yep. They faxed it all back (except the important stuff, which didn't get sent at all).

EMR's something I've looked at for over 20 years and played with off and on. I was playing with it back when the best way to automate was to build a lab-reporting system on DEC PDP-8s and DEC terminals. Expensive? Slow? Yes, but with a little screen-building and database work it was useful. I've watched HL7 and its predecessors over the years, and they continue to get more robust, so getting the infrastructure standards in place isn't too hard.
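For the curious, the HL7 v2 wire format is just carriage-return-separated segments with pipe-delimited fields. Here's a toy parser with a made-up lab-result fragment; a real implementation has to handle the ^~\& encoding characters, repeats and escapes properly, which this deliberately skips:

def parse_hl7_v2(message):
    # Map segment ID -> list of field lists. Top-level structure only:
    # no component (^), repetition (~) or escape-sequence handling.
    segments = {}
    for seg in message.strip().split("\r"):
        fields = seg.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = ("MSH|^~\\&|LAB|CLINIC|EMR|HOSP|201301150830||ORU^R01|00001|P|2.3\r"
       "OBX|1|NM|GLU^Glucose||98|mg/dL|70-110|N")
parsed = parse_hl7_v2(msg)
print(parsed["OBX"][0][4])  # '98', the observation value (OBX-5)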

What's hard is getting the INDUSTRY to stop being greedy and decide to interoperate. And to respond to the primary users, who are the medical professionals who have to hammer on the damned systems daily.

Comment: Re:Livescribe (Score 1) 300

by GPSguy (#39546401) Attached to: Ask Slashdot: What Is the Best Note-Taking Device For Conferences?

Several years ago, I had a netbook (before netbooks were cool) made by Compaq. As this was in the days before every kid had a computer, and before, well, wifi, and before Facebook, I didn't succumb to today's general distractions. I took abbreviated notes in class, often using vi. I'd find a quiet spot later (the keyboard was too loud to do it in the library), reorganize the notes, rewrite them in complete sentences, add equations via an equation editor, and generally make 'em useful. These served me well. I've gotta say, though, that the "improvements" to Notepad, Office, OpenOffice, etc., and the advent of tablets overall, have made it HARDER to use a computational platform to do what I did, rather than easier. I'm going back to pencil and paper.

Comment: Re:Orbital Junkyards (Score 4, Interesting) 186

by GPSguy (#37803280) Attached to: DARPA Proposes Ripping Up Dead Satellites To Make New Ones

Beat me to it...

There's a tendency now to use more common components in new satellites, especially meteorology birds. While there's always new science, adapting existing hardware to do the work means you might fly a couple of instruments on different spaceframes without it costing as much as the gee-whiz one-offs. Someone already mentioned that R&D, testing, SRM&QA and launch services cost a bunch. If we COULD accomplish this, restoring capabilities on-orbit would be great.

NASA had a "Flight Telerobotic Servicer" project in the early '90s. I don't know where it went, but it got a fair bit of support, and a lot of good engineering talent was pointed at it. From my interactions with DARPA projects in the past, there's a fair chance something useful will come out of this, even if the whole program is over-ambitious.

Comment: The idea's been around for a while (Score 2) 182

by GPSguy (#36688096) Attached to: NASA's New Bag Turns Urine Into Sports Drink

If not the exact technology, the concept was first bandied about in the early days of Space Station Freedom design and development. Among other things, Space Station was supposed to lead to a Closed Environmental Life Support System that included reprocessing urine, atmospheric condensate and, well, yeah, fecal water into water of sufficient quality for drinking and even medical uses. A lot of work by quality scientists and engineers went into this. In 1992, an experiment flew in Spacelab on STS-47 that demonstrated taking Kennedy Space Center tap water, storing it in a closed container for 90 days, and running it through a process/apparatus called SWIS (Sterile Water for Injection System) to create water that was demonstrably "ultra-pure water for injection" per the US Pharmacopeia. Oh, and it worked, too. Making waste water into something drinkable is considerably simpler.

A poster commented on the potential for cross-transfer of large-molecular-weight compounds across the ultrafiltration membrane... Unlikely unless the membrane has holes, and those would become obvious from the "filtration" rate.
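The back-of-the-envelope version, assuming laminar flow through a roughly cylindrical pore (Hagen-Poiseuille):

Q = \pi r^4 \Delta P / (8 \mu L)

Volumetric flow scales with the fourth power of pore radius, so a tear even ten times the nominal pore size passes ten thousand times the flow per opening. A breached membrane announces itself in the flux long before you'd catch it in the chemistry.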

Comment: Re:Submitter here (Score 1) 264

by GPSguy (#36262606) Attached to: Ask Slashdot: Best Linux Distro For Computational Cluster?

As pointed out earlier in the thread, you're not defining requirements well.

I think you're going to want to consider setting up a cluster front-end. You generally do not want to run X on all the nodes: let them run the Monte Carlo sims, and don't waste memory or resources letting users hammer each node. Or allow it now and regret it later when performance plummets.
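To make that concrete, the kind of job the compute nodes should be chewing on is embarrassingly parallel, like this little Monte Carlo pi estimate standing in for your sims (the worker and trial counts are placeholders). You'd launch it through whatever batch scheduler you pick, not an interactive X session:

import random
from multiprocessing import Pool

def count_hits(n):
    # Count random points in the unit square that land inside the quarter
    # circle; each worker is independent, so this scales across cores
    # (and across nodes, with MPI or a batch array job)
    rng = random.Random()
    return sum(1 for _ in range(n) if rng.random()**2 + rng.random()**2 <= 1.0)

if __name__ == "__main__":
    workers, trials = 8, 1_000_000            # placeholder sizes
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [trials] * workers))
    print(4.0 * hits / (workers * trials))    # approaches pi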

Consider GPGPU (nVidia Tesla; AMD/ATI have GPGPU options too, but I'm not versed in them yet) for improved performance in calculations.

Have you looked around your university? Is there anyone else running clusters you could partner with? My group does exactly that: I'm a numerical modeler, but we also provision and operate a cluster that serves users in agriculture, nuclear engineering, petroleum engineering, atmospheric sciences, HEP, chemistry and the social sciences. Your questions suggest to me that your time is better spent as a researcher than as a system administrator.

And while we're here... One of my pet peeves is when a professor takes a grad student who came into a program to get a degree in, say, nuclear physics, and turns them into the group's system administrator and user-support person. Either instead of, or in addition to, their scientific work, they have to manage the computing resources and learn how all the software works. In my experience, if they're good, conscientious graduate students, they will do a great job but will not get the education they came for. They may get the degree, but they are likely doomed to supporting other users who got a better one. They're still good, in fact indispensable, to a research program, but they were sacrificed with little say in their future. Better, if that's what you need, to actively recruit someone who wants to learn the field and become a computational expert with a discipline track in your area, nurture them, and, if they're deserving, provide said terminal degree. I really don't like sacrificing an unsuspecting graduate student to the HPC gods for a faculty member's benefit.

Comment: Re:X11 ...server? (Score 1) 264

by GPSguy (#36262230) Attached to: Ask Slashdot: Best Linux Distro For Computational Cluster?

Generally, your head node will need an X client, but NOT the compute nodes. You won't have to log into them per se, but the head node, where you submit your jobs, does have to reach them. In general, the compute nodes in an HPC environment are hidden away on a private network and don't see the outside world; and, for that matter, they shouldn't (let's not talk about OSG requirements, or the things ATLAS and CMS are promulgating).

Another consideration is cluster-local visualization: as datasets grow, it becomes less practical to pull whole datasets back to your desk and then process them for a quick look at results. Instead, the initial, and perhaps all, analysis should be done on the cluster. That argues for an X installation and GPU-acceleration hardware on at least the head node, on dedicated graphics/analysis node(s), or perhaps across the whole cluster.

And, so far, no one has spoken of favorite compilers. gcc's not bad, but it's not stellar for a lot of HPC uses. Portland Group and Intel have done good things in the compiler world, IMNSHO, and PGI is starting to incorporate nVidia GPU compatibility in their stuff.

Say "twenty-three-skiddoo" to logout.

Working...