Comment Re:what? (Score 1) 229

You're forgetting the 3rd option:

Horribly insecure code that's too complex (or obfuscated or just plain badly written and possibly poorly commented) for most people to bother looking at, much less fixing & for those that DO bother, they submit a fix/patch which goes ignored or rejected by the maintainer. This, of course, followed by no one bothering to fork the project b/c no one has time for that. This is where most open-source users whine and complain about features, design flaws, and bugs while devs and fanboys tell them "If you don't like it, fork it and do it YOUR way." as if that were a trivial thing just anyone can do in their spare time... b/c we all have such amazing coding skills and free time to take on such an enormous effort by ourselves.

That's the same regardless of whether it's open source or not. So, no - I'm not forgetting. Been there, done that.

Comment Re:what? (Score 1) 229

Something being open source has never, ever meant that it is more secure. That is a myth propagated by open source zealots. Open source only means that the source can be viewed, and most likely changed, by anyone. Open source zealots assume that means it is rigorously vetted by security experts to find and fix any flaws, which is a huge assumption that most likely is not true for most projects.

While I agree it is a myth, I don't think it's the zealots that really pushed it, but those who didn't really understand the message that open source has the *potential* to be more secure *because* of the many-eyeballs effect. That doesn't mean it *will* be, just that it has the *potential* to be.

Open source zealots typically won't talk about security; they'll talk about bug fixes, and maybe equate that to security, since more bugs fixed typically means less potential for exploits - which is true unless there are fundamental flaws in the code related to security.

At worst, an open source project has the same security profile as a closed source project - only the people who started the project do anything on it.
At best, a large community builds around it, the many-eyeballs effect takes hold, and bugs get found and fixed (and security thereby improved) at a rate magnitudes higher than a closed-source project of the same initial size.

Comment Moot point... (Score 1) 200

...unless you're also going to halt *all* AI development because any automated weapon without an AI can be controlled by any sufficiently advanced AI.

For instance, all the UAVs could be controlled by an AI, thereby taking a non-AI weapon and making it an autonomous weapon. As long as you have remotely controlled weaponry and AI development - however disconnected they may be - you have the potential for an autonomous weapon that could be outside of human control.

That said, a claymore is a very simple autonomous weapon - albeit one that can be easily disabled, but it's autonomous nonetheless.

Comment Re:Turing Evolved (Score 1) 200

The other important point to make is that when nations sign up to a treaty that bans something, then they will be very reluctant to ignore the ban (openly, at least), and of course, if they don't sign up to it, that tells us something as well. These things can have significant repercussions for the reputation of those countries.

So if the U.S. signed a treaty that banned all guns from citizens, it wouldn't be enforceable, as the U.S. Constitution's 2nd Amendment would trump the treaty. Many other countries may not have that issue, but a treaty can only go so far, and a country can only go so far before it risks revolt by its citizens and exiting the treaty anyway.

Comment Re:Uh... let me think about it (Score 1) 530

Sorry - should have explained better - the blue ball in my system is where I am now, and red is the goal. It shows both well enough that I can see if it has got the now and goal right.

The point still stands - the user still has to ensure that the ball (red/blue/whatever) for the destination is in the right place. Failure to do that is the primary failure; all the other failures of logic/common sense/etc. only make the impact of that first failure worse.

Comment Re:Uh... let me think about it (Score 1) 530

Agreed, I almost always have my GPS muted, just using it as a moving map with live traffic information (Google Maps FTW) and ETA. And I look at the ETA and journey time before I start to see if it looks reasonable.

That said, the Belgian woman was lying and using "GPS made me do it" as cover. No one is that stupid; for one thing, you can't drive for two days straight without breaks and rest, which would be a dead giveaway to anyone with enough cognitive function to actually be able to drive. Not to mention signposts in several different languages along the way.


I've known a couple of people who drove 35-42 hours by themselves in a car without rest, stopping just to grab food (which they then ate in the car) and for restroom breaks (Grand Rapids, Michigan to Seattle, Washington; and Grand Rapids, Michigan to somewhere in Arkansas). So yeah, I can believe it, but it's probably more a PEBCAK error than anything else - she probably never verified that the address the GPS was taking her to was the address she *thought* she entered; she probably mistyped it.

Comment Re:Uh... let me think about it (Score 1) 530

The over the top version: If she or anyone uses the voice, they're using it wrong.

The more nuanced version: If you use the map the GPS presents, you'll have a fair idea of how where you are relates to where you want to go. I never listen to the audio or use the turn by turn instructions. I don't need them. I don't trust them. The little blue ball on the map tells me all I usually want.

I have had a GPS try to send me off onto a defunct logging road, a once usable dirt wagon path that has become filled with trees. It was, in honesty, a more direct route, but only by foot.

I think, more importantly, you need to (a) ensure the little blue ball is where you *actually* want to go, (b) check the route to make sure it's not doing something stupid - like taking you around the block when you could have gone a little further and made one turn instead of 3 - and (c) keep your maps up-to-date.

Doing those 3 things will probably keep you from the stupid examples listed - because that lady who drove 2 days probably put in a wrong address that was very close but just slightly off, and didn't check that the GPS was taking her to the right address. It probably did *exactly* what she told it to do - a simple PEBCAK error, but a costly one at that (both time and money).

Comment Re:Simon Seems Off The Mark (Score 1) 89

Some licenses allow anyone to create derivative works that build on the original product, while others reserve that right only for the owners of the original product.

It's pretty clear they're referring to the ability to make commercial works, not downstream OS projects.

That, and the fact that the more copyleft licenses, e.g. the GPL, tend to come with the expectation of providing information upstream when asked, even though the license itself doesn't directly state it. So it's hard to create even a downstream open source project or derived project for such projects.

Those biases seem to arise from an outdated view of the market for open source software. Students of history know that pioneers of new markets are able to command profit margins approaching 100 percent as long as they can behave as monopolists. As their markets becomes subject to fair competition, margins fall. Expecting 90 percent margins is probably not realistic, yet the authors clearly do:

He seems to be ignoring his own point from the next paragraph: most VC ventures fail. In order for them to see high returns they need the huge home run; if a business bunts into first and barely covers the investment, they're still in the hole for the other 5 ventures that failed.

Agreed. Lots of respect for Simon, but he seems to miss some of the concerns VCs tend to have - namely, how they can recoup *their* money from their investment. Or...

But as Red Hat employee Harish Pillay pointed out, “RHEL has no Enterprise-only features. What is in RHEL is in Fedora.”

Well, RHEL by itself may be the same as Fedora, but if you are using RHEL you are probably also using RHN, which provides a lot of stuff that isn't in Fedora - it's their big value-add.

Comment Re:Hipster software is the real problem. (Score 1) 86

NoSQL is shit

Hardly. NoSQL is glorious at what it does. What it does has nothing to do with replacing SQL, however, and if you pull that shit, it will fall flat on its face, because that's not what it's fucking for.

If your dataset is truly unstructured, then yes, NoSQL databases work great. But odds are your data is actually structured and should be in an RDBMS. I worked on one project that insisted on using Cassandra despite the fact that the data was highly structured and fit far better, far more easily into an RDBMS. The cost of Cassandra was one of the reasons (though not the only reason) the project got scrapped. Why did it cost so much? Because some of the queries we had to do were extremely costly, so we had to have a massive cluster to make up for the performance in order to hit the numbers needed for peak usage. Why Cassandra? For the ease of replication.

So yes, it works great for what it works for, but that stuff is typically niche and abnormal. That said, most RDBMSes have incorporated similar functionality, so you can keep using the RDBMS and still get the NoSQL-like features. PostgreSQL, for example, can now replicate nicely across the cloud.
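To make "highly structured data fits an RDBMS" concrete, here's a minimal, hypothetical sketch - schema, table names, and numbers are all invented for illustration, with SQLite standing in for a full RDBMS. The point is the join: one line of SQL here, versus hand-rolled joins in application code on a wide-column store like Cassandra.

```python
import sqlite3

# Invented example schema: users and their orders, with amounts in cents.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER NOT NULL REFERENCES users(id),
                         total_cents INTEGER NOT NULL);
""")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 999), (2, 1, 450), (3, 2, 2000)])

# The relational engine does the join and aggregation for you; in a
# key-value/wide-column store you'd fetch both datasets and join by hand.
rows = conn.execute("""
    SELECT u.name, SUM(o.total_cents)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name ORDER BY u.name
""").fetchall()
print(rows)  # [('alice', 1449), ('bob', 2000)]
```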

Comment Re:My Very Original Thoughts on the Subject (Score 1) 108

Nutrition is a subject for which everybody should understand the basics. Unfortunately, this is hard. Not only is there a ton of conflicting research about how to properly fuel your body, there's a multi-billion-dollar industry with financial incentive to muddy the waters. Further, one of the most basic concepts for how we evaluate food — the calorie — is incredibly imprecise. "Wilbur Atwater, a Department of Agriculture scientist, began by measuring the calories contained in more than 4,000 foods. Then he fed those foods to volunteers and collected their faeces, which he incinerated in a bomb calorimeter. After subtracting the energy measured in the faeces from that in the food, he arrived at the Atwater values, numbers that represent the available energy in each gram of protein, carbohydrate and fat. These century-old figures remain the basis for today's standards."

In addition to the measuring system being outdated, the amount of calories taken from a meal can vary from person to person. Differences in metabolism and digestive efficiency add sizable error bars. Then there are issues with serving sizes and preparation methods. Research is now underway to find a better measure of food intake than the calorie. One possibility for the future is mapping your internal chemistry and having it analyzed with a massive database to see what foods work best for you. Another may involve tweaking your gut microbiome to change how you extract energy from certain foods.


What I'm curious about is how he determined that said fecal matter was from the specific food he set out to test. It's not like you can fully predict when any one person (or test subject) will produce fecal matter based on the input foods.

Comment Re:Clever PHBs... (Score 1) 186

"sitting around"? You mean, thinking about the problem being solved, the ways to solve it, the design of the system, the ways to test the code, the edge cases, the potential issues, the possible code approaches and also keeping an eye on the code being written to prevent bugs, bad implementation, laziness or poor practice?

Yeah, sitting around.

I was just quoting TFA. According to TFA, pair programming should allow both people to use the computer, etc.; however, what typically ended up happening was that only one person would use the computer - the more experienced one dominating the system usage - while the other would sit there. Yes, they may be contributing to the discussion, but they won't be gaining the hands-on experience necessary to advance.

Comment Re:Relevence of this organization? (Score 4, Informative) 129

Why is this organization even relevant? Which persons involved with the Linux kernel asked for such a foundation, and what was their justification for it?

The Linux Foundation does several things for the community:
1. Pays Linus Torvalds to work on the Linux Kernel. He initially worked for Transmeta, but when they let him go he was quickly put on the payroll by OSDL (now the Linux Foundation) in order to help keep him vendor neutral and allow him to focus solely on the Linux Kernel. (While at Transmeta he had some other responsibilities if I'm not mistaken, so most but not all of his time was on the Linux Kernel.)
2. Helps protect the Linux trademark that Linus officially owns. Linus did not originally trademark the term "Linux"; then someone else did and brought a suit against him, so the community (and corporations) stood up, defended it, and then trademarked it, officially giving Linus the ownership. However, Linus is in no way financially capable of defending it against sufficiently funded groups, so having an organization like the Linux Foundation help in that respect is very good.
3. Helps show sponsorship of the Linux Kernel. Companies - especially big companies - like to get tax write-offs. By donating to the Linux Foundation (a charity) they get write-offs and they get to build some good will by having their name publicized as a sponsor.
4. Training and support - the Linux Foundation officially does some training and support. For example, they help companies get into the kernel development process, providing access to key developers and mentoring on how to get contributions accepted. Greg Kroah-Hartman has been quite helpful to a number of companies in that respect; that doesn't mean they get a straight line into having their patches accepted, but they get mentored on what to do so the patches are *likely* to be accepted - and thus more hardware and features are supported by the Linux Kernel.

There's more they do as well, but those are the biggies.

Comment Re:Clever PHBs... (Score 1) 186

Yeah but there is a definite downside - half the time your partner will be coding...

If you read TFA, pair programming typically got skewed so that one person was using the computer - typically the more senior. So... not really half, and not sure which person, but one of you will be sitting around while the other does the work.

Comment Re:How many subscribers would we lose if... (Score 1) 302

Ratings of subscription television answer the following questions: Which programs bring in the most subscription revenue? Which programs would make end users more likely to cancel subscriptions if they were canceled?

So Netflix already has a built-in rating system for users, and can determine with 100% accuracy how many users watch any given video, because in both cases they actually *have* that information.

That's a far cry from the Nielsen rating system, which uses statistics to guess at how many people actually watch something by extrapolating from a limited sample set.

If anything, Netflix would simply have to have a third party audit their numbers and collection process if they really cared about competing with broadcast and something like the Nielsen rating system. Same for Hulu, Amazon, Pureflix, and others that are selling directly to the customer. After all, which boat would you rather be in or sell to? Someone who *knew* they had 15,000 viewers? Or someone who *thought*, but could not confirm, that they had 15,000,000 viewers? One may be cheaper (the 15,000,000), but the other (the 15,000) would likely produce better results.
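The difference between counting every play and extrapolating from a panel can be sketched in a few lines of Python. All the numbers here are invented for illustration; the point is only that the streaming service's answer is a count, while the Nielsen-style answer is a statistical estimate with sampling error.

```python
import random

random.seed(42)
population = 1_000_000  # pretend total subscriber base
# Pretend 150,000 of them actually watched a given show.
watched = set(random.sample(range(population), 150_000))

# Netflix-style answer: just count the plays you logged. Exact.
exact = len(watched)
print(exact)  # 150000

# Nielsen-style answer: poll a 1,000-household panel and scale up.
panel = random.sample(range(population), 1_000)
in_panel = sum(1 for p in panel if p in watched)
estimate = in_panel / len(panel) * population
print(round(estimate))  # close to 150,000, but only a statistical guess
```

Auditing the first number is a matter of verifying the logging pipeline; the second number carries irreducible sampling error no audit can remove.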
