
Comment Kaggle (Score 1) 123

The single most motivating thing for me, personally, was to find real problems to solve and real examples and help on how to solve them. Bonus points for variety and competition and even prizes.

Enter Kaggle -- data mining competitions with an absurd amount of examples, datasets, community posts, forums, curated examples. I really cannot emphasize how much I've learned in this community. Join and try one of the example competitions -- the Titanic one is popular, follow the getting started guides and go from there.
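If it helps to see how small a first attempt can be, here's a rough sketch of the classic "gender baseline" from the Titanic getting-started guides. The column names match the competition's train.csv, but the rows below are made up for illustration:

```python
import csv
import io

# Toy stand-in for the competition's train.csv (invented rows;
# the real file uses these same column names).
train_csv = """PassengerId,Survived,Pclass,Sex,Age
1,0,3,male,22
2,1,1,female,38
3,1,3,female,26
4,0,3,male,35
"""

rows = list(csv.DictReader(io.StringIO(train_csv)))

# The classic getting-started baseline: predict that women
# survived and men did not.
def predict(row):
    return 1 if row["Sex"] == "female" else 0

correct = sum(predict(r) == int(r["Survived"]) for r in rows)
accuracy = correct / len(rows)
print(f"baseline accuracy on toy data: {accuracy:.2f}")
```

On the real training data this baseline lands somewhere in the high 70s percent, which is exactly the kind of starting point the community forums then help you improve on.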

I'm sure there are many other ways, and it may not be for everyone, but this has really been a great resource for me.

Comment One Book vs. Three Books (Score 4, Insightful) 175

I didn't think there was much of a question; The Lord of the Rings simply had far more material fully assembled by the original author than The Hobbit did. The Hobbit was one book, with a scattering of notes and addenda, that got stitched and stretched into three epic movies.

It's interesting that they're admitting directorial mayhem at this point, but the direction taken from the outset was overkill and greedy. I'm sure it could have been better, but still, it took a lot to make this mess.

Of course, I'm still going to watch them again. Someday.

Comment JuiceSSH is a nice terminal app (Score 4, Interesting) 352

I use JuiceSSH on my phone, which is amazingly useful far more often than it should have to be. It ranks fairly low on that link, for some reason, so maybe I should check the others out.

PuTTY on Windows.

Otherwise I'm connected directly to a Linux box and just SSH out from a native command line. I don't tend to boot into X unless it's really necessary, and then I'm normally stuck with xterm until I can get out of it.

And I can't remember the last time I had to use a terminal from an Apple product, so I don't even have an answer for that one.

Comment "Software Engineer" != "Programmer" (Score 5, Insightful) 568

When a building gets built, or a dam, or a pipeline, there's engineers, architects, brick layers, welders, and all sorts of other people involved.

When you do software, "programmer" is just a catch-all name that could cover several of those archetypes -- the only thing you have to do to be a programmer is actually write code. But there certainly ARE software "Engineers" -- people whose job it is to make sure that everything the designers and programmers are individually putting together actually _works_, doesn't fall apart, and survives in the real ecosystem the code is released into.

Just as a good mechanical engineer shouldn't be above picking up a shovel or an acetylene torch now and again if needed, even if it's no longer their forte, a good software engineer should probably be able to code -- but that's not really their core purpose.

And as for "Engineering claims an explicit responsibility to public safety and reliability, even if it doesn’t always deliver"... next time someone fails to steal your identity on the internet, thank a programmer, a software engineer, and probably a computer scientist among others. Next time I don't die driving over a bridge I'll thank a few mechanical and civil engineers, as well as the workers that did their job putting it together.

Comment $20 is probably impossible (Score 5, Informative) 508

The best thing out there, designed specifically to address your concern, is the XO laptop from the "One Laptop Per Child" project.

Their price is $35 per unit, and they take significant cuts (and use some creative solutions, to be sure) to get there. They're not exactly readily available, particularly for US schools, but it may be worth talking to them. It's possible these would be enough for you, if you could get hold of them, but I'd consider them pretty underpowered for middle school or higher, where there are other options.

The XO is a good data point for what you sacrifice going below an entry-level Chromebook or HP 11-style laptop, or even an Android tablet with a keyboard. It also sets the bar at $35, so your hopeful target of $20 seems unlikely; the XO has been around for years and they probably can't go much lower, and you're not likely to see many people competing in this space, at least not for profit. The DIY kits (e.g. Raspberry Pi) you've already addressed, and those are even more expensive. The idea of hooking into an existing TV (with an Android stick) may have merit, but there's still the price of a mouse, a keyboard, and a capable TV in the first place, so the real price is higher.

Anyway, I think you're going to be hard pressed to find better solutions. It's a noble goal, but the industry just isn't there yet, despite good examples of people trying. Hope this helps.

Comment Re:Avoid INTERCAL (Score 1) 429

So, you mean the fact that I wrote a C-INTERCAL parser that used obscure opcodes to actually perform the interleave, AND, OR, and XOR isn't a good thing to put on my resume?

Eep, I have offended someone with actual skills! The horror.

Putting it on your resume is one thing... heck, I'd hire someone who had legitimate INTERCAL experience on principle.

Still, I ran a few job searches and couldn't find a match... not a single job looking for INTERCAL experience. What has the world come to? You might have better luck in the masochism personals (Ashley Madison, anyone?): "gwc, into whips, chains, and being forced to code complex algorithms in INTERCAL". Hmmm.

Off to google LIRL now!

Comment Re:Avoid INTERCAL (Score 3, Interesting) 429

R is also only one of several obscure languages in that domain, some even more obscure, including Julia and Stan... is Maple still a thing? Less obscure are MATLAB and Mathematica... (all platforms as well as languages) and they've all got their special strengths, as usual.

Swift is more popular than R, yet still obscure compared to the top 10 or so. I don't know how ABAP is still alive.

Prolog, Scheme, Groovy, Scala... there are lots. Even Lisp shows up below R in some lists.

SQL is similarly not obscure in its own area, and worth learning, but you rarely see it in a list of general-purpose programming languages (because it isn't one). The commercial vendors all ship their SQL with variants that extend the language with common programming-language features like looping. I speak of PL/SQL, T-SQL, and their ilk, which all have a touch of obscurity in the same way R does.

I might also recommend targeting obscure libraries or platforms. CUDA isn't a language so much as an architecture; OpenCV is interesting.

If you're looking for jobs, plug those into a job search engine and see what interests you. Languages tend to correlate fairly well with industries. If you want to work in genomics, you'll see different languages at the top than if you want to work on Wall Street.

Avoid INTERCAL job postings at all costs.

Comment Re:On Yesterday's Medical Data Topic with Mark Cub (Score 1) 96

I can't speak for Cuban, but I was describing taking the 20-40 regular blood tests plus whatever someone may be interested in for more personal reasons, more regularly. Not necessarily adding breadth to the data, but regularity. Yes, a genetic test that comes back negative is unlikely to change.

Of course, we think it won't change based on a belief about how genetics works that is rarely tested and poorly researched. Some genetic changes that can happen during a lifetime weren't really accepted until 2008, and we don't have a good understanding of their impact yet. Another example is chimerism -- the possibility that someone has two sets of DNA working in their body, sometimes with different chromosome types. This is the sort of thing that tends to be diagnosed by accident, sometimes during transplant typing. It simply wouldn't be uncovered by a single genetic test, but multiple tests over time would make it readily apparent.

I think that's a bit extreme, though... simpler examples are borderline cases in routine blood work. The de facto standard is to compare blood work to general population norms; we don't adjust for variation between individuals. A white blood cell count of 11 is borderline, but if I came to a doctor with symptoms and a WBC of 11, an antibiotic might be prescribed, since "normal" tops out just below that. If, on the other hand, I had years of "healthy" measurements where my WBC was normally 10-11, we'd realize that I have a naturally high count and probably conclude that whatever is causing the symptom is not affecting my WBC. Alternatively, years of high readings might themselves be a symptom of something else -- but still different from a single spike in a person with a proven healthy average of 4-5. If I only ever have the test performed when I'm sick, then all you can compare me to is the general population. That's bad statistics, bad science, and it ought to be considered bad medicine.
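To put rough numbers on that (all values invented for illustration): the same reading can be a multi-sigma outlier against the general population while being completely unremarkable against a personal baseline.

```python
from statistics import mean, stdev

def z_score(value, history):
    """How many standard deviations `value` sits from the mean of `history`."""
    return (value - mean(history)) / stdev(history)

# Invented numbers for illustration only.
population = [4.5, 5.0, 6.2, 7.1, 5.5, 8.0, 6.8, 4.9]  # typical WBC counts
my_history = [10.2, 10.8, 10.5, 11.0, 10.4, 10.9]      # my own "healthy" counts

today = 11.0
print(z_score(today, population))  # several sigma above the population norm
print(z_score(today, my_history))  # unremarkable against my own baseline
```

Same measurement, two very different conclusions, purely because one comparison has a personal baseline to work from.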

Now, I've also heard from doctors who said they wouldn't treat such a patient differently, because it goes against their training. So, yes, I agree the US has a ways to go, and I agree we need a culture change. I don't think the way to make that change is to shoot down people who are willing to be a part of it and promote it. Even if you dislike the precise mechanism Cuban recommended, it still sounds like a step toward the openness that we both seem to agree is superior in your Scandinavian example, even if the path isn't direct. We won't change overnight, but we won't change at all if we stop people from trying.

Comment Re:On Yesterday's Medical Data Topic with Mark Cub (Score 1) 96

The downside of the information collection you mention is an invasion of privacy. Some people think it truly is worth the tradeoff, and others do not. I personally agree that it is not.

The downside of more individuals collecting their own personal medical information is NOT an invasion of privacy -- not if the information is freely collected by the individuals themselves, as Cuban suggested. If there is a problem of data retention or misuse by labs or doctors, that is an important but separate argument, and a red herring to what I'm talking about.

The argument was that the downside is overdiagnosis -- that more routine testing will lead to more worrisome results and more invasive/expensive/unnecessary follow-up tests; that's a different haystack. I agree that this is a worrisome trend in the way medical tests are currently performed; research supports this. Where I differ is that I think more frequent (voluntary, healthy, not interpreted in isolation) tests could provide a better "big-data" baseline. Research has not, I believe, thoroughly investigated this one way or the other. While I'm open to being proved incorrect, I don't think I've seen a solid rebuttal. I have enough of a background in data mining and medical data to feel qualified to take that stance, but I certainly disagree that privacy is the central issue here.

Comment Re:On Yesterday's Medical Data Topic with Mark Cub (Score 1) 96

Well said.

To elaborate, in reply to the parent: I do get the opposing point of view, from the perspective of how medicine is currently practiced. But we're in a much more data-driven world, where the "quantified self" is much more viable. If it takes decades of peer-reviewed research for the medical industry to catch up and make real, helpful use of this wealth of more easily captured data, then that's what has to happen.

I'm not comparing someone going to a doctor more frequently against the status quo. I'm comparing going to a doctor when you feel sick and getting labs then and only then, versus going to a doctor with a stack of labs from times you were healthy and sick in the past. If that additional data can't help inform the doctor's decision, then it's more research, not less testing, that is required.

Comment On Yesterday's Medical Data Topic with Mark Cuban (Score 1) 96

This is interesting, given a conversation between Mark Cuban and some doctors/researchers yesterday.

Cuban was advocating regular baseline lab tests, so that doctors would have a trend analysis available when he gets sick. He got pretty thoroughly attacked for it by Forbes.

My opinion was that Forbes misrepresented things, but, related to this Slashdot post, there seems to be an interesting resistance to this sort of data-driven diagnosis. Forbes would argue that lots of tests will lead to false positives; I would argue that the more data you have, the more confident you can become in distinguishing a false positive from a real one -- it seems like basic statistics to me, but we need to get the research and the doctors on board with a more data-driven approach, rather than the kneejerk approach used in diagnosis now.

Comment Re:Parody (Score 1) 255

I don't think this is true. There's no requirement that parody or satire adhere to specific conventions; what people find funny or ironic could be debated, I suppose, if someone wanted to push the legal boundaries, but just because you don't consider something "serious yet thought-provoking" capable of ALSO being satire doesn't mean you're legally correct.

"Tongue-in-cheek" humor goes out of its way to appear serious but is intentionally satirical, and I think this could easily qualify. In fact, the reference to the Power Rangers is what pushes it over the line for me. I see only two reasons to use the Rangers imagery: a non-satirical one, which would imply the producers really believed they were creating a standalone, serious piece of work that borrowed the Power Rangers' mythology; or a satirical one, where the producers believed that making a high-production-value version of a campy old action TV show was so ironic as to be funny -- to themselves, enough to create it; to the internet, enough to appreciate it; and, I'd have hoped, to the original Power Rangers as a parodic homage.

I would choose to believe the latter, and I would argue that even if you believe the former, as you seem to, there's enough of an argument to qualify for satire exceptions to copyright.

Comment Re: Not the fault of science (Score 1) 958

This might be a valid point if you weren't being a jerk and hiding behind an AC post to ask it. I have a whole rant about who should and shouldn't be considered a scientist; it's really a spectrum, not a binary thing, complicated by things like specialties and whatnot. I don't wear a lab coat, sorry, but yes, I have nice degrees with "science" in them and some credentials to back them up. If you ranked everyone by job and credentials on how "sciency" they are, I wouldn't be anywhere near the top, but I think if there were a line in the sand, I'd make the cut.

Of course, "we" in the sense I used it could also comprise everyone who wants to convince people to put more faith in science than in whatever else they make decisions by, at least on these wide consensus issues.

And, once again, I've responded to an anonymous troll. *sigh*... no more internet for me today. Back to vacation.

Comment Re:Not the fault of science (Score 5, Insightful) 958

It was absolutely the best science the 1970s had to offer. The fact that it turned out to be wrong was due to a large number of factors, but not because it wasn't "science". One good article among many references a number of large, controlled scientific studies that, yes, had issues, but were still the best of their time. There were ALSO studies that came to other conclusions, but remember that there are real studies by real scientists (by any useful definition) that come to all sorts of wrong conclusions. There will always be someone to say "told you so", no matter how ludicrous their position seemed to the majority at the time -- even if that majority included most of the scientists, if those scientists are later proved wrong.

People equate science with truth, and that's simply wrong. Science is a process, a mechanism for expanding our knowledge, but it's fallible and rarely produces absolute truths. As the linked Scott Adams article says, science is about nudging us toward improvement, and I agree. The public face of science is, unfortunately, at times journalism, government, and other equally human, equally (if not more) fallible entities -- but those people did listen to scientists; they didn't just make things up (most of the time).

Science has an image problem, though, and it IS self-inflicted. We come across as arrogant rather than nurturing to the scientifically illiterate, and it turns people away. We label people "deniers" when they're genuinely curious; they get defensive, and it's all downhill from there. We get combative and then pretend it was someone else's misunderstanding when our consensus turns out to be wrong. Science is the right approach, but when it loses a popularity contest, particularly in a democracy, things can get pretty bleak for a while. There's no reason that needs to happen, but denying the problem isn't the answer. We should embrace the dialogue that Adams is a part of here.
