My experience studying Medical Informatics is that they had no idea how to create an ecosystem. Firstly, they were wrongly insistent on the need for everything to be coded. Take a look at things like SNOMED and LOINC as examples.
HL7 is a completely over-engineered mess: a standards process driven by too many doctors and other health professionals and far too few computer scientists. It tries to capture the process of health care as a protocol. Completely wrongheaded. By the way, I worked on the UML 2.0 standards committee, which I think is reasonable by comparison to HL7, which is a major user of UML. Let that sink in.
HIPAA's requirements are also completely outdated and overly complex. It was well intended, but it needs replacement. The law standardized technology rather than requirements, and that's a mistake.
Epic is a total mess. A local hospital system in my state adopted it and (surprise) it was horribly over budget, and there are still issues. And it's legacy code out of the box: it's all based on MUMPS with bits and pieces hacked on top.
Overall, the main problem is insisting that the problem be solved all at once rather than step by step. Step one: establish a system of identification for health providers and patients. This includes a way to establish a patient's identity from known data while providing a high level of confidence that the requestor of information is a health provider. Solve this, and then you can start talking about interchange. And start simple. Forget highly coded documents. Exchange vital history, procedure history, problem list and notes. That's it. Then move forward based on actual user demands.
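To make the "start simple" point concrete, here's a minimal sketch of what that interchange record could look like: plain-text fields, no coding system required. Every name here (the class, the fields, the JSON wire format) is my own illustration, not any existing standard.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical minimal exchange record. Fields mirror the four things
# proposed above: vitals, procedures, problem list, and free-text notes.
@dataclass
class PatientSummary:
    patient_id: str                  # issued by the identity system in step one
    vital_history: list = field(default_factory=list)      # e.g. {"date": ..., "bp": "120/80"}
    procedure_history: list = field(default_factory=list)  # free-text procedure entries
    problem_list: list = field(default_factory=list)       # free-text problems, no SNOMED codes
    notes: list = field(default_factory=list)              # free-text clinical notes

def to_wire(summary: PatientSummary) -> str:
    """Serialize for exchange; plain JSON keeps it simple and inspectable."""
    return json.dumps(asdict(summary))

def from_wire(payload: str) -> PatientSummary:
    """Rebuild the record on the receiving end."""
    return PatientSummary(**json.loads(payload))

summary = PatientSummary(
    patient_id="example-123",
    problem_list=["hypertension", "type 2 diabetes"],
    notes=["Patient reports improved energy since last visit."],
)
round_tripped = from_wire(to_wire(summary))
```

The point isn't the format; it's that a receiving system can do something useful with this on day one, and coding can be layered on later if real demand shows up.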
Frankly, Clinton had the right idea with the national health id. If we could create an ID that everybody had that was only used for medical identification, that'd be great. But I doubt that'll happen, so we will be stuck with a huge data deduplication problem.
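Without a single ID, the deduplication problem means linking records on noisy demographics. A toy sketch of what that looks like in practice (the fields and the 0.85 threshold are arbitrary assumptions for illustration; real record linkage is far more involved):

```python
from difflib import SequenceMatcher

def likely_same_patient(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Guess whether two demographic records refer to the same person.

    Requires an exact date-of-birth match, then fuzzy-matches the name.
    Both the rule and the threshold are illustrative, not production logic.
    """
    if a["dob"] != b["dob"]:
        return False
    name_a = (a["first"] + " " + a["last"]).lower()
    name_b = (b["first"] + " " + b["last"]).lower()
    return SequenceMatcher(None, name_a, name_b).ratio() >= threshold

rec1 = {"first": "Jon",  "last": "Smith", "dob": "1970-01-02"}
rec2 = {"first": "John", "last": "Smith", "dob": "1970-01-02"}
rec3 = {"first": "Jane", "last": "Doe",   "dob": "1984-05-06"}
```

Here rec1 and rec2 link (a likely typo in the first name), while rec3 doesn't. The hard part is everything this skips: maiden names, transposed birth dates, twins, and the cost of a false merge in a medical record, which is exactly why a single ID would be so much cleaner.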
It's not easy, but it's more doable than people think. And heck, open source as a means of standardization is a fine part of this equation that is completely ignored.
A fair point. I was thinking the Android price point was more around $69-79. Clearly I haven't been shopping extensively.
Sure, it's fine to be skeptical, but it's easy to verify (or not). You don't think Windows has a big enough market that people will analyze every bit of traffic that comes out of the next OS?
Plenty of programs have had that customer experience improvement program opt-in for a while. I haven't seen anything suggesting that you can't really opt out of it, i.e., that data is sent anyway. I'm sure that if somebody found evidence of that, we'd hear about it instantly.
Sure, it may be required as part of installing the technical previews (though even that's not clear). How it works in the release, who knows. I agree that the best move would be not to have it at all in the RC or RTM builds, and that's not impossible or even unlikely.
This isn't a port. It's streaming the application. It is actually running on their cloud, so you could do the same on Linux, Windows, whatever.
This is just another part of them moving to a cloud-based model. No big deal.
I guess Microsoft's plan to charge nothing for small-screen form factors is having a bit of an effect. Even 20 bucks of licensing would have a significant impact on that price. At that price, there'd be enough people trying to see if you can get a Linux distro on it, and it's close enough to cheap Android levels.
For me, it's cool, because I'm more versed in Windows development and since it's full Windows, I can easily install whatever the heck I want on it (no developer unlock, etc, etc). Save up, get a few and just have them around the house.
The way the rule is stated and repeated in modern culture is a vast oversimplification, and so a critique is fine. As some have noted, the argument was also about the "ability and drive" to put in the 10,000 hours. Certainly, individual factors do play a role. The only reason this is controversial is when people try to apply it to certain populations, where there is no evidence for that at all (in fact, plenty to the contrary). The article itself notes this.
But it does raise a question: are there skills that require innate abilities to truly master, and if so, what are they and how do they differ from those that don't? There is evidence to suggest that the former is true.
This rule is often linked to how to be successful, but the studies have all been on skills that have no direct links to financial success. Brilliant musicians don't get paid well by default. Chess players aren't sport stars. Artists struggle.
I am curious whether programming is a skill that requires an innate mindset to truly master (I do believe such skills exist), or if it is just a skill that demands disciplined practice. I've seen no evidence either way, so anything would be speculation on my part.
Yeah, that was great, but Redis actually did turn into a useful tool compared to Memcached.
I've conducted a lot of interviews (in an academic setting in the humanities), and I can say that it's risky to guess what exactly the interviewer is trying to accomplish with a question. Sometimes a question is asked neither to see if someone knows the answer nor to see the content of the interviewee's answer, but to see how the person handles being asked such a question. I could see someone deliberately asking a question that he knows the candidate doesn't know the answer to just for that purpose, though personally I would avoid doing it, as it's neither nice nor useful to stress out the interviewee even more (but I might do it in a mock interview preparing someone for a real one).
So the interviewer might be interested to see if the interviewee honestly, humbly and politely says: "Would you like me to tell you the container classes I use the most? The others I have to look up when I need them", or if the person pretends to know the answer, or rudely bristles, or tries to weasel out of the question by changing the topic (of course it might be a bonus if the interviewee actually has a great memory and knows all the container classes; but then another question might need to be asked to gauge character).
There's a sad lack of proper work for PhDs in our field. I'm in the same boat, but I am working now as a contractor.
Sure, people say that there is a glut on the market, but nobody notes that this is due to drastic cuts in research funding at all levels. Maybe that'll change and there will be more research and academic positions.
As a practical matter, I disagree with leaving your PhD off your resume. You'll have a large gap to explain (what did you do all those years?), and it's not hard to find out that you do have a doctorate.
The best thing to do is explain that a PhD is one of the best demonstrations that you are self-motivated and able to work on a problem diligently and independently, and that is valuable to any employer. Then get out there and try to find an employer that gets that (in other words, one worth working for). That's hard, but that's what it'll have to be.
I'm seriously considering a hefty pay cut and trying to get a postdoc, because I do miss working on actual interesting problems. Don't discount this either.
Real Programmers don't eat quiche. They eat Twinkies and Szechwan food.