
Comment Re:BIG data? (Score 3, Insightful) 155

Ah, good, I'm glad someone already mentioned this. "Big" didn't deserve to be italicized there, not only because 260 million minutes of video isn't "that much" (!) in internet streaming terms, but also because the statistics aren't really based on the number of minutes of video analyzed... the main statistics are about viewership and certain events (video startup, video freezing), which could be surrounded by hours of uninteresting video time that didn't really contribute to some of the metrics.

Netflix has, what, 20+ million individual viewers per month? 10 hours apiece isn't hard to imagine. As the parent pointed out, YouTube is much larger than that.
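A quick back-of-the-envelope calculation with those rough numbers (both are guesses on my part, not figures from the article) shows just how small 260 million minutes is by comparison:

    # Back-of-the-envelope: compare the 260M minutes in the study to a
    # rough month of Netflix viewing. Both inputs are guesses, not data.
    viewers = 20_000_000                  # ~20M viewers per month
    minutes_each = 10 * 60                # ~10 hours apiece, in minutes
    netflix_minutes = viewers * minutes_each
    print(netflix_minutes)                # 12,000,000,000 minutes/month
    print(260_000_000 / netflix_minutes)  # ~0.022 -- about 2% of one month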

It's still very interesting analytics. It's not always the size that matters with "big" data. But let's not get carried away with the italics now, people... that way madness lies.

Comment Re:Ouch. (Score 5, Informative) 362

To my dismay, I worked on this project. The project started with controversy -- the Oracle bid that beat out SAP some seven years ago was surrounded by complaints. The article skips some details. CSC (Computer Sciences Corp, who is quoted) was the main driver of about $800 million of that spending. It is accurate to say that this change didn't affect them, but that's because hundreds of people had already been laid off or moved off the project between last September and last March.

There's enough blame to go around. Years were spent on requirements that were never turned into code; time was spent passing blame back and forth across development teams that were so large and segregated that they rarely communicated properly -- within the Air Force, within CSC, and between the other teams. At its peak I believe the project had roughly 800 people on it. I don't know what the maximum size of a development project should be, but it's got to be smaller than that. That number includes everyone -- trainers, managers, and some key initial users and testers -- but it's still a very high number.

The Air Force tried several times to realign the project, but there were contractual disputes and then, once those were over, difficulty deciding what to keep and what to scrap, which led to a death spiral where everything went back to the drawing board, and I think ultimately leadership just lost hope.

It wasn't a complete loss, though. A few small teams, including the one I was previously on, have survived. We built a robust data quality system and are working on some enterprise data dictionary and master data tools, which will help the systems that are left behind. With hundreds of systems supporting half a million users, $1 billion probably isn't off the charts -- at least it wouldn't have been had the project succeeded. The worst part is that there's still much work to be done, and now someone will have to start over... again.

Comment Re:Slackware on floppies (Score 1) 867

I was a fan of the InfoMagic packages, which IIRC included Slackware's entire distro and a huge number of other files. Does anyone else remember Elfos fondly? I have a picture of the CD around here somewhere... I even had a T-shirt.

I was Slackware -> RedHat -> Ubuntu for my primary, but I tried CentOS, Fedora, and Knoppix in there for non-trivial amounts of time. Does Android count? ;-)

Also, I played with FreeBSD but that doesn't count for multiple reasons.

Comment I freakin' love Kaggle (Score 3, Interesting) 19

I've been working on the Heritage Health Prize, which Kaggle is running, for over a year now. It's a fantastic way to learn data science and tackle real-world problems with real data and a co-op-etitive spirit. The forums and winning solutions are great for learning the art, and if you've never used R, it's a great opportunity to learn it and talk to people who have a ton of experience in the area.

Comment Re:Important question (Score 1) 118

Someone with more recent experience should answer this since, as I said somewhere up above, I haven't used one of these in a while. That said, several of the USB microscopes I've used work as fairly standards-compliant webcams, particularly (IIRC) with GSPCA and V4L support.

What you won't get on Linux is support for computer-controlled focus, turning lights on and off, or any other extra features the microscopes have, unless they come bundled with explicit Linux tools. Some may, but I haven't worked with any, so I can't say specifically.
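If the scope does show up as a standard webcam, grabbing a frame is simple. Here's a minimal sketch using OpenCV, assuming the microscope enumerates as /dev/video0 (device index 0); the index may differ on your system:

    # Minimal frame grab from a scope that acts as a UVC/V4L2 webcam.
    # Assumes it enumerates as device 0; adjust the index if needed.
    import cv2

    cap = cv2.VideoCapture(0)      # open /dev/video0
    ok, frame = cap.read()         # grab a single frame
    if ok:
        cv2.imwrite("specimen.png", frame)
    else:
        print("Couldn't read from device 0 -- wrong index?")
    cap.release()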

Comment Re:Intel QX3 (Score 2) 118

Hear, hear. I had one of the old Intel QX microscopes and loved it. Any USB microscope will let a child see things on a nice big screen, and many of them, like the QX3, are ruggedized for kids. It's pricey compared to other desktop USB microscopes, of which there are many, as other posters have mentioned (search Amazon), but the design should make it worthwhile. I haven't used the software in a while; it was a bit buggy the last time I played with it, but hopefully it's improved.

If you really want eyepieces, there are some that do double duty -- Celestron makes entry-level microscopes with replaceable eyepieces (so you can use one like a traditional microscope or mount a camera on it, but not both at the same time), as well as models with LCD screens built in instead of traditional viewfinders.

As you near the $250 barrier you can get scopes that do double duty, and your options increase. These are more lab-ready and may need more oversight for a 7-year-old, and frankly I find them harder for kids to use, but the ability to look through an eyepiece to set up and focus a shot and then see the results on a computer makes for good two-person work in parent-child fashion.

Comment Re:The reason you haven't heard about it (Score 3, Interesting) 207

Out of mod points, or I'd mod you up for mentioning Future Crew. I still have a 3.5" floppy floating around with some great old demos from that era.

I'm not surprised that people haven't heard of the demoscene any more, but it's easy to describe: fit the most impressive graphics and sound you can into a very small memory footprint -- 64 KB is the common limit now, apparently, but I seem to recall some good "anything you can fit on a disk" rules and some impressive 4 KB demos.

Other than raw coolness, the point is to "do more with less" -- to push creativity, efficiency, and algorithm design by artificially limiting resources. It's very impressive stuff, and having been out of touch for a long time, I'm AMAZED at the quality of the videos these groups are putting out. Has anyone been able to run the .exe's to verify? My rig keeps failing on them -- I imagine they require specific hardware and software versions.

Comment While 70% isn't a failing grade exactly... (Score 1) 371

I love science. From the article:

"a projection from 1981 for rising temperatures [...] has been found to agree well with the observations since then, underestimating the observed trend by about 30%."

Most predictions that are off by 30% aren't really considered "remarkably accurate".

I'm uncertain about this one (i.e., I'm not trying to troll). It's nice that the prediction does seem to chart well, and since the weather can't be accurately predicted even 7 days in advance, any climate model that outperforms naive ones over 30 years is an achievement. Still, the recent author's point is clear -- they're trying to emphasize that a warming trend is occurring and that people predicted it 30 years ago using "science" that is still sound -- but at least some discussion of why a 30% error is acceptable should take place.

Comment You've come to the right place (Score 5, Insightful) 635

Well, you certainly won't find a shortage of opinions on Slashdot. :-)

If you think the software is good enough, then a non-commercial version with limited registration information (e-mail, name) and some very privacy-conscious reporting (maybe just enough to ensure that each registered serial number is only being used by one machine at a time) should only be a good thing. Getting your software into the hands of the people who might buy it will get them used to it and relying on it, and eventually make them customers. But (as others here have posted) don't abuse the "spying"... if you start to make money by pilfering the free registrations for ancillary information, you're just going to annoy your users, and they'll be more apt to pirate the software or use fake registration information. Giving them something in return, like forum access for very limited support, helps.
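For that one-machine-at-a-time check, a common approach is to derive an activation code from the serial number plus a stable machine identifier, so the same serial active on two machines stands out server-side. A minimal sketch of the idea -- the secret, names, and scheme are all hypothetical, not any particular product's licensing API:

    # Sketch: bind a serial to one machine by HMAC-ing it with a stable
    # machine ID. The secret and all names here are hypothetical.
    import hashlib, hmac, uuid

    VENDOR_SECRET = b"replace-with-a-real-secret"

    def machine_id() -> str:
        # uuid.getnode() returns the MAC address as an int; any stable
        # per-machine identifier would do.
        return str(uuid.getnode())

    def activation_code(serial: str) -> str:
        msg = (serial + ":" + machine_id()).encode()
        return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).hexdigest()[:16]

    # Server side: the same serial reporting two different codes at the
    # same time means it's active on two machines at once.
    print(activation_code("ABCD-1234"))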

Other possible models include giving the software away for free and charging for support -- nearly all profitable open source companies do this, and even if you leave the source closed the business model isn't terribly different. You could publish a "crippleware" version, which I find rather annoying, unless the limits are such that home and non-commercial users' needs are really satisfied and the only people who need to pay $10k for the software are those to whom it's worth it. I'll give a nice shout-out to Andrea Mosaic for doing this correctly (at a lower price point).

Lastly, an option you may have missed: ignore it, because it may not be a problem. A pirated copy in the hands of a customer who wouldn't have paid anyway probably doesn't hurt you. A pirated copy in the hands of a customer who would have paid may actually turn into a sale if they need assistance. When you upgrade, if the pirates liked it, they'll want the next version, so they may buy. It may be pirated by employees or students who, years later, may remember it and decide to buy it. You never can tell.

In those cases, you're getting your software out there and used; you could take an "all exposure is good exposure" attitude. The fact that you didn't list the name of your software in the original post here means that you may not think that way, or you may outright disagree.

Still, piracy is going to happen. At least you're asking the right questions. Don't let yourself get dragged into a fight with the anonymous masses on the internet, though -- you'll probably lose.

Comment Doorstops (Score 5, Insightful) 228

Robert Jordan's books redefined the level of crazy I will accept from an author. The writing is fantastic -- a wonderful, deep, involved storyline -- but come ON, the length is way too self-indulgent and unnecessary. The story is nowhere near as complicated (or worthy) as, say, FOUR Lord of the Rings trilogies, but it's substantially longer. The sad part is that it's comparably well written -- length notwithstanding.

I'm currently using four of the books as monitor stands (I actually won't go so far as to use them as doorstops).

More importantly, though, this has changed the way I'll read connected books or watch TV shows. I fear the abandoned story line too much now, and I blame Robert Jordan. "Heroes", the TV show, was a similar letdown... I waited until "Lost" was finished, for fear of it falling into the same pit as "Heroes", and nearly did the same thing with "Battlestar Galactica".

Is there a name for this? Can we call it the "Robert Jordan" effect? -- the situation where you get too involved with an author or storyline and they just go on forever or (no disrespect) die?

And the expanding-storyline trend is everywhere. Eight Harry Potter movies? Really? Five Twilight movies? I love a good trilogy, and (the quality of the prequels aside) I appreciate that the Star Wars trilogies are built so that you can watch the original without needing the rest to complete the story. Many authors have interwoven stories and worlds... How many books did Terry Pratchett write? Many of them reference one another, but at least each has an individual story arc. The Ender's Game series is similar... Terry Brooks' series can be read in myriad configurations of trilogies and tetralogies.

Ugh... the Jordan series is fantastic in many ways and I'm very glad to see it completed -- I hope the finale lives up to the series -- but please, no one, ever do this again, or at least give fair warning so that we can avoid going down the path until it's complete.

Books

Submission + - The Downfall of Book Burning (bbc.co.uk)

ZahrGnosis writes: "In 1953, Ray Bradbury's book Fahrenheit 451 described 'a dystopian future in which the US has outlawed reading and firemen burn books'. Book burning has long been a symbol of censorship or protest, and Bradbury's book was a great science fiction introduction to the topic for many readers alive today.

How should we feel, then, that Fahrenheit 451 is becoming an e-book despite its author's feelings? Am I the only one who finds it ironic that a seminal book about book burning soon can't be burnt? In many ways electronic media has put a serious dent in censorship, so perhaps this is a fitting conclusion -- even the author may not have the ability to stem the flow of information in the digital age: the opposite problem of censorship. As a book lover and a technophile (who has yet to make the e-book transition), I find this story oddly interesting and wondered what the Slashdot crowd would think."

Comment All those in favor of the "weird" GIMP UI... (Score 1) 403

I like the UI the way it has been. For one thing, it works better with multiple monitors IMO -- I can pull the toolbars off of the main editing window and give the image the full screen, unobstructed. Further, I like being able to move things wherever I want and still see whatever I put behind them; the workspace can be spread out without all the grey area taken up by the useless space between MDI child windows and toolbars and such. I get that people like uniform UIs, but I never really understood why people hated the GIMP's so much, except that it wasn't familiar.

Does anyone remember Fractal Painter's old UI? No one liked that either. *sigh* Of course, I think it had things like icons that would slowly fade and disappear if you never used them... or was that some other piece of software? Brilliant idea in any case -- we should get the GIMP to do that.

Comment Re:This isn't an obviously easy question (Score 2) 785

Agreed. Training on new technology is both the employer's and the employee's responsibility, and neither is fulfilling their end of the social contract if they don't keep up.

The last point, though, is far more important. A good developer is way more than just a coder who knows a particular technology or tool.

Paying a premium might make sense for someone who has experience with the new hotness, but only if the rest of their experience is commensurate. Knowing how to communicate well, cooperate on requirements, think efficiently, provide useful documentation, manage bugs, manage time, maintain client relationships, and just work well as part of a team is at least as important as any specific tool. A college grad may or may not have those skills, and even good interviews can't tell you everything about how a new hire will interact with a team or a client. If you've got a developer you know and trust, who has done good work for a long time, then it's unlikely the new-hot-skill is worth a premium over years of experience. A new grad isn't likely to have any depth of experience, even in a new technology, so it seems like simple math to justify training up a known, skilled, competent asset rather than spending a premium on an unknown quantity.

If we're comparing against a developer that hasn't kept up, and isn't doing those things well, then that's another story.

Still, each case is unique, and the ability to negotiate a salary is a skill almost unrelated to your actual competence as a developer.

Comment Safeguards (Score 1) 433

I think giving multiple people root access is the opposite of the direction the original question intended. The idea is that you don't want ANY one person to have the ability to bring down, corrupt, or steal from a single system. Considering the recent madness around congressional data, passwords, credit card databases, and thousands of other examples, this sort of security seems more and more reasonable.

I work on some unclassified Department of Defense machines, and even to get root access to those machines our admins have to get Top Secret clearances. Of course, this does nothing to ward off incompetence and is only useful for providing increased trust in your single point of failure (but that trust is important). I assume you're looking for assurances beyond this sort of social-engineering aspect.

The two-key (nuclear) option mentioned is probably workable for a reasonably administered production system. There are several ways to implement a two-person system that could work.

First, routine tasks, or tasks that can be pre-planned (i.e., non-emergency situations), could be scripted on backup environments without the sensitivity or criticality of the production system. This should be standard practice anyway. Once a working, tested script is prepared (a script in the programming sense, a la .ksh/.perl, or a script in the user-documentation sense), the production system can require dual authentication at logon before the script can be executed. In a fully automated setting this is a completely technological solution (although I admit I don't know of a publicly available working implementation), but even if some manual steps are required, holding both key-holders responsible and requiring at least over-the-shoulder confirmation is a useful policy.
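As a toy illustration of that dual-authentication gate -- a minimal sketch only, with hypothetical names, paths, and credential handling, nothing like a hardened implementation -- the idea is simply to refuse to run the prepared script until two distinct operators have authenticated:

    # Sketch of a two-person gate: the prepared script runs only after
    # two DIFFERENT operators authenticate. All names, hashes, and the
    # script path are hypothetical; a real system would use PAM,
    # hardware tokens, audit logging, etc.
    import getpass, hashlib, subprocess

    OPERATORS = {  # username -> sha256(password), demo values only
        "alice": hashlib.sha256(b"alice-pass").hexdigest(),
        "bob":   hashlib.sha256(b"bob-pass").hexdigest(),
    }

    def authenticate(prompt: str) -> str:
        user = input(prompt + " username: ")
        pw = getpass.getpass(prompt + " password: ")
        if OPERATORS.get(user) == hashlib.sha256(pw.encode()).hexdigest():
            return user
        raise SystemExit("authentication failed")

    first = authenticate("First key-holder")
    second = authenticate("Second key-holder")
    if first == second:
        raise SystemExit("two distinct operators are required")
    subprocess.run(["/path/to/tested_script.ksh"], check=True)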

For more complex tasks, such as recovering physically corrupted data, which really can't be simulated and scripted, dividing responsibilities and ensuring multiple people are in attendance at all times can significantly reduce risk exposure with minimal impact (it doubles your staffing requirement, but only for the hopefully brief periods when this is required). Again, this is not a technical solution, other than having lock boxes or rooms with multiple keys.

It's important to realize, of course, that you'd have to apply this solution not just to your sysadmin role, but to your physical infrastructure (no one person can reach the power button, the fuse box, etc. by themselves) and to each application running on the system, at least for any role with access to protected areas. That's nigh impossible in a usable system, so the problem gets pretty complicated, but if you really need the security it may be worth the effort.

First Person Shooters (Games)

Combat Vets On CoD: Black Ops, Medal of Honor Taliban 93

An anonymous reader writes "Thom 'SSGTRAN' Tran, seen in the Call of Duty: Black Ops live action trailer and in the game as the NVA multiplayer character, gets interviewed and talks about Medal of Honor's Taliban drama. '... to me, it's a non-issue. This is Hollywood. This is entertainment. There has to be a bad guy if there's going to be a good guy. It's that simple. Regardless of whether you call them — "Taliban" or "Op For" — you're looking at the same thing. They're the bad guys.'" Gamasutra published a related story about military simulation games from the perspective of black ops veteran and awesome-name-contest winner Wolfgang Hammersmith. "In his view, all gunfights are a series of ordered and logical decisions; when he explains it to me, I can sense him performing mental math, brain exercise, the kind that appeals to gamers and game designers. Precise skill, calculated reaction. Combat operations and pistolcraft are the man's life's work."
