Thus the use of "not obscure" in the quoted description. Thank you for your support.
So, you mean the fact that I wrote a C-INTERCAL parser that used obscure opcodes to actually perform the interleave, AND, OR, and XOR operations isn't a good thing to put on my resume?
Eep, I have offended someone with actual skills! The horror.
Putting it on your resume is one thing... heck, I'd hire someone who had legitimate INTERCAL experience on principle.
Still, I ran a few job searches and couldn't find a match... not a single job looking for INTERCAL experience. What has the world come to? You may get more luck on masochism personals (Ashley Madison anyone?): "gwc, into whips, chains, and being forced to code complex algorithms in INTERCAL". Hmmm.
Off to google LIRL now!
R is also just one of several languages in that domain; Julia and Stan are even more obscure... is Maple still a thing? Less obscure are MATLAB and Mathematica (all platforms as well as languages); they've all got their special strengths, as usual.
Swift is more popular than R, yet still obscure compared to the top 10 or so. I don't know how ABAP is still alive.
Prolog, Scheme, Groovy, Scala... there are lots. Even LISP shows up below R in some lists.
SQL is similarly not obscure in its own area, and it's worth learning, but you rarely see it in a list of general-purpose programming languages (because it isn't one). The commercial vendors all ship SQL with procedural variants that extend the language with general-purpose constructs like looping. I speak of PL/SQL, T-SQL, and their ilk, which all have a touch of obscurity in the same way R does.
I might recommend targeting obscure libraries or platforms also. CUDA isn't a language so much as an architecture; OpenCV is interesting.
If you're looking for jobs, take those, plug them into a job search engine and see what interests you. Languages tend to correlate with industries fairly well. If you want to work on Genomics, you'll see different languages at the top than if you want to work on Wall Street.
Avoid INTERCAL job postings at all costs.
I can't speak for Cuban, but I was describing taking the 20-40 regular blood tests plus whatever someone may be interested in for more personal reasons, more regularly. Not necessarily adding breadth to the data, but regularity. Yes, a genetic test that comes back negative is unlikely to change.
Of course, we think it won't change based on a belief about how genetics works that is rarely tested and poorly researched. Some genetic changes that can happen during a lifetime weren't really accepted until 2008, and we don't have a good understanding of their impact yet. Another example I can give you is chimerism -- the possibility that someone has two sets of DNA working in their body, sometimes with different chromosome types. This is the sort of thing that tends to be diagnosed by accident, sometimes during transplant typing. It simply wouldn't be uncovered during a single genetic test, but multiple tests over time would make it readily apparent.
I think that's a bit extreme, though... simpler examples are borderline cases of routine blood work. The de-facto standard is to compare blood work to general population ranges; we don't adjust for variation in individuals. A white blood cell count of 11 is borderline, and if I came to a doctor with symptoms and an 11 WBC, an antibiotic might be prescribed, since "normal" tops out just below that. If, on the other hand, I had years of healthy measurements where my WBC was normally 10-11, we'd realize that I had a naturally high count and probably conclude that whatever is causing this symptom is not affecting my WBC. Alternatively, years of high tests might be a symptom of something else, but that is still different from a single spike in a person with a proven healthy average of 4-5. If I only ever have the test performed when I'm sick, then all you can compare me to is the standard population. That's bad statistics, bad science, and it ought to be bad medicine.
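The baseline argument can be sketched as a toy calculation (all numbers below are hypothetical, in thousands of cells per microliter): the same borderline reading of 11 is unremarkable for someone whose healthy history runs 10-11, but a dramatic spike for someone whose history runs 4-5.

```python
import statistics

def personal_z(history, reading):
    """Z-score of a new reading against an individual's own healthy baseline."""
    return (reading - statistics.mean(history)) / statistics.stdev(history)

# Hypothetical WBC histories from years of healthy labs (made-up numbers).
high_baseline = [10.2, 10.8, 10.5, 11.0, 10.4, 10.7]   # naturally runs high
typical_baseline = [4.4, 4.6, 4.5, 4.7, 4.3, 4.5]      # near the population mean

reading = 11.0  # borderline against the usual 4-11 population range

print(f"naturally-high patient: z = {personal_z(high_baseline, reading):.1f}")
print(f"typical patient:        z = {personal_z(typical_baseline, reading):.1f}")
```

Against a population range both patients look identical; against their own baselines, one reading is about one standard deviation from normal and the other is dozens. That's the information a "sick visits only" testing pattern throws away.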
Now, I've also heard from doctors who said they wouldn't treat such a patient differently because it goes against their training. So, yes, I agree the US has a ways to go, and I agree we need a culture change. I don't think the way to make that change is to shoot down people who are willing to be a part of it and promote it. Even if you dislike the precise mechanism Cuban recommended, it still sounds like a step towards the openness that we seem to agree is superior in your Scandinavian example, even if the path isn't direct. We won't change overnight, but we won't change at all if we stop people from trying.
The downside of the information collection you mention is an invasion of privacy. Some people think it truly is worth the tradeoff, and others do not. I personally agree that it is not.
The downside of more individuals collecting their own personal medical information is NOT an invasion of privacy; not if the information is freely collected by the individuals themselves, as Cuban suggested. If there is a problem with how labs or doctors retain or use that data, that is an important but separate argument; a red herring relative to what I'm talking about.
The argument was that the downside is over-diagnosis -- that more routine testing will lead to more worrisome results and more invasive/expensive/unnecessary follow-up tests; that's a different haystack. I agree that this is a worrisome trend in the way medical tests are currently performed; research supports this. Where I differ is that I think more frequent (voluntary, healthy, not interpreted in isolation) tests could provide a better "big-data" baseline. Research has not, I believe, thoroughly investigated this one way or the other. While I'm open to being proved incorrect, I don't think I've seen a solid rebuttal. I have enough of a background in data mining and medical data that I feel qualified to take that stance, but I certainly disagree that privacy is the central issue here.
To elaborate, in reply to parent, I do get the opposing point of view, from the perspective of how medicine is currently practiced. But we're in a much more data-driven world where the "quantified self" is much more viable. If it takes decades of peer-reviewed research for the medical industry to catch up and make real and helpful use of the wealth of more easily captured data, then that's what has to happen.
I'm not comparing someone going to a doctor more frequently to the status quo. I'm comparing going to a doctor when you feel sick and getting labs then and only then, vs. going to a doctor with a stack of labs from when you were healthy and sick in the past. If that additional data can't help inform the doctor's decision then it's more research, not less testing, that is required.
This is interesting, given a conversation between Mark Cuban and some doctors/researchers yesterday:
Cuban was advocating for regular baseline lab tests so that doctors would have a trend analysis available to them when a patient gets sick. He got pretty thoroughly attacked by Forbes: http://www.forbes.com/sites/dandiamond/2015/04/02/mark-cuban-doesnt-understand-health-care/
My opinion was that Forbes misrepresented things, but, related to this Slashdot post, there seems to be an interesting resistance to this sort of data-driven diagnosis. Forbes would argue that lots of tests will lead to false positives; I would argue that the more data you have, the more confident you can become about the difference between a false positive and a real one -- it seems like basic statistics to me, but we need to get the research and the doctors on board with a more data-driven approach, rather than the knee-jerk approach used in diagnosis now.
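The "basic statistics" point can be illustrated with a toy Bayes calculation (all rates below are hypothetical): a single positive test for a rare condition is probably a false positive, but several independent positives over time quickly change the picture, which is exactly what a trend gives you that a one-off test can't.

```python
def posterior(prior, sensitivity, false_pos_rate, n_positives):
    """P(condition | n independent positive tests), by Bayes' theorem."""
    p_data_if_sick = sensitivity ** n_positives
    p_data_if_healthy = false_pos_rate ** n_positives
    numerator = prior * p_data_if_sick
    return numerator / (numerator + (1 - prior) * p_data_if_healthy)

# Hypothetical rates: 1% base rate, 90% sensitivity, 5% false-positive rate.
for n in (1, 2, 3):
    print(f"{n} positive test(s): P(condition) = {posterior(0.01, 0.9, 0.05, n):.2f}")
```

With these made-up numbers, one positive leaves the condition still unlikely (around 15%), while three consistent positives push it above 95%. More data points don't create more false alarms so much as they separate the false alarms from the real signal.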
I don't think this is true. There's no requirement that the parody or satire adhere to specific conventions; what people find funny or ironic could be debated, I suppose, if someone wanted to push the legal boundaries, but just because you don't consider being "serious yet thought-provoking" capable of ALSO being satire doesn't mean you're legally correct.
"Tongue-in-cheek" humor goes out of its way to appear serious, but is intentionally satirical. I think this could easily classify. In fact, the reference to the Power Rangers is what pushes it over the line for me. I see only two reasons to use the Rangers imagery -- a non-satirical reason which would imply that the producers really believed they were generating a standalone, serious piece of work that borrowed from the Power Rangers' mythology... or a satirical reason where the producers believed that making a high-production-value version of a campy old action TV show was so ironic as to be funny, both to themselves enough to create it, and to the internet enough to appreciate it (and I'd have hoped, to the original Power Rangers as a parodical homage).
I would choose to believe the latter, and I would argue that even if you believe the former, as you seem to, there's enough of an argument to qualify for satire exceptions to copyright.
This might be a valid point if you weren't being a jerk and hiding behind an AC post to ask it. I have a whole rant about who should and shouldn't be considered a scientist, and it's really a spectrum, not a binary thing, complicated by things like specialties and whatnot. I don't wear a lab coat, sorry, but yes, I have nice degrees with "science" in them and some credentials to back it up. If you were to rank everyone by their job and their credentials on how "sciency" they are, I wouldn't be anywhere near the top, but I think if there were a line in the sand, I'd make the cut.
Of course, "we" in the sense I used it could also comprise everyone who wants to convince people to put more faith in science than in whatever else they make decisions by, at least on these wide consensus issues.
And, once again, I've responded to an anonymous troll. *sigh*... no more internet for me today. Back to vacation.
It was absolutely the best science that the 1970s had to offer. The fact that it turned out to be wrong was due to a large number of factors, but not that it wasn't "science". One good article of many is http://www.wsj.com/articles/SB10001424052702303678404579533760760481486, which references a number of large controlled scientific studies that, yes, had issues, but were still the best of the time. There were ALSO studies that came to other conclusions, but remember that there are real studies by real scientists (by any useful definition) that come to all sorts of wrong conclusions. There will always be someone to say "told you so", no matter how ludicrous their position seemed to the majority at the time -- even if that majority included most of the scientists, and those scientists were later proved wrong.
People equate science with truth, and that's simply wrong. Science is a process, a mechanism to expand our knowledge, but it's fallible, and rarely results in absolute truths. As the linked Scott Adams article says, science is about nudging us towards improvement, and I agree. The public face of science is, unfortunately at times, journalism, government, and other equally human, equally (if not more) fallible entities -- but those people did listen to scientists; they didn't just make stuff up (most of the time).
Science has an image problem, though, and it IS self-inflicted. We're coming across as arrogant to the scientifically illiterate, rather than nurturing, and it's turning people away. We label people "deniers" when they're genuinely curious, they get defensive, and it's all downhill from there. We get combative and then pretend it was someone else's misunderstanding when our consensus is wrong. Science is the right approach, but when it loses a popularity contest, particularly in a democracy, it can get pretty bleak for a while. There's no reason that needs to happen, but denying the problem isn't the answer. We should embrace the dialogue that Adams is a part of here.
I love my 2002 WRX, but it is ridiculously more difficult to repair than my 1965 Ford. I just rebuilt the Ford's carburetor; there's hardly anything similar I can do that's nearly as simple to improve the performance of the WRX's computerized fuel injection system (beyond replacing spark plugs, hoses, and simple parts that are common to both generations of automobile -- headlights fall into that category, I think; they're a bad benchmark for overall repair effort). I LOVE the car, don't get me wrong, but ease of repair is not a key selling point. In exchange, of course, we get much better gas mileage, performance, comfort, and safety, to name a few -- I'm fine with the tradeoff, but it is real.
The counterpoint I have to the main article is that while people may not walk around with the ability to repair things, they do walk around with access to an insane amount of information to help them along. The YouTube videos, bits of advice from message boards, and random web articles are really what keep my Ford humming (and, to a lesser extent, the WRX), not to mention access to cheap parts shipped immediately. I think this generation will manage.
This is a wildly nontrivial question. Volumes have been written about building data warehouses, and there's a lot to consider. In a large, complicated environment, you could spend weeks doing comparisons (some people spend years, but that seems extreme), and some of the decisions are worth weighing carefully.
The first question is what capability you are looking for -- why are you sure one of these vendors is correct, and have you truly explored your options? If you want a place to capture and gather lots of near-real-time sensor data, then Hadoop might be good; if you want a more traditional Kimball- or Inmon-style warehouse for a small or mid-size amount of data, then Microsoft, Oracle, Teradata, IBM, MySQL, and others have decades of experience that is, in fact, useful. But that's just a single-source vendor, and your question is focused on database vendors. Asking what "capability" you need includes ETL, reporting, metadata, master data, data quality, user interaction, training, methodology... if you're going to in-house all of that, or spread those things across multiple vendors, then your answers will be different.
All of those lead to follow-on questions. Where does cost play a role? Watch your up front costs vs long-term TCO. Do you have a development team with any expertise that may make it easier to in-house decisions and developments for one platform over another? Is your corporate buy-in strong so you can weather people second-guessing your decision? There are technical issues, personnel issues, cost issues...
The first ANSWER is really that any vendor will work, and every vendor will have different headaches. Older vendors have very specific ways of doing things, but that can make developers less expensive and more uniformly capable (although you'll always find extremes). Asking several Oracle DBAs to question each other and report back on each other's competencies is rather easy. With newer offerings like Amazon, Google, and other cloud big-data vendors, the landscape is newer, people are using different approaches (each of which may be valid), and it's not clear which are going to survive long enough to have the richest ecosystems. But again, these systems came into being for a reason -- Hadoop and NoSQL databases can outperform older databases, at lower cost, in raw throughput, unstructured data, or other areas, but they sacrifice different things: ACID compliance, strong typing and data models, or what have you.
Some of it just depends on taste. Some people avoid a single provider "lock-in" and pick and choose different ETL tools (see Informatica), Reporting Tools (Cognos, Microstrategy, Tableau, Jasper, Pentaho), and other tools (Talend DQ/MDM comes to mind... there are many), while some people prefer single vendors due to massive integration (particularly Microsoft if you're a Windows farm). If you're Gmail based, then Google's apps have good integration; if you have an Oracle ERP then several tools speak nice to it.
I'm generalizing a lot of examples that don't always apply, to keep things shortish, but the bottom line is that every option has strengths and weaknesses. I wish it were easier.
Nice to have a first-hand opinion. Thanks Yaz. Someone else mentioned the corporate holding aspect, but then pointed out that they'd see it as deceitful if they found out a job applicant was sole owner of a company that held their IP, although the licensing benefits of that approach seem to make it worthwhile.
Would I be right in assuming that the patents on your resume make you feel MORE marketable, rather than less? Some people have mentioned that it could lead employers to fear conflicts of interest and that is an aspect I hadn't considered.
Awesome. I have to enumerate my choices now... I think a d-4 or d-6 can handle it... great advice!
Hell, if I found out you owned the holding company with the patent, I'd probably not hire you for the same reason.
Precisely one of my concerns, and similar to the sentiment others have expressed merely for having an industry patent, much less a holding company. I'm not trying to be dishonest, and frankly I wouldn't have minded if more people had recommended granting a limited non-exclusive license to an employer; really, I was hoping that a few inventions would look good on a resume. I'm rather taken aback by the number of people who actually perceive it as a negative. Thanks for the feedback; I don't want a holding company to be a negative any more than some IP. More to consider, thank you.
What is now proved was once only imagin'd. -- William Blake