Comment: Re:The Canadian middle class is dying out. (Score 1) 172

by bmo (#49366127) Attached to: Best Buy Kills Off Future Shop

You blame the union members and the unions.

You blame them when the decision to sell shit products and ignore quality issues was an upper management problem, and remains an upper management problem to this day.

Because if that responsibility doesn't lie with upper management, then why do they get paid fucking rockstar salaries? What do they do all day, financial masturbation?

--
BMO

Comment: Re:Legal (Score 2) 168

by PopeRatzo (#49362345) Attached to: Commercial Flamethrower Successfully Crowdfunded

Is anything legal in California these days?

Medical marijuana, gay marriage, concealed carry.

Say, you're not really pissed that fucking flamethrowers aren't legal there, are you? I don't know if you've gotten a look at your fellow man in the United States lately, but are these really people you want to have flamethrowers? Geez Louise, there can't be more than maybe 1 in 100 that I think should be allowed to drive a car. Maybe 1 in 10 should be allowed to have shoelaces, for chrissake.

Although I'm sure we can find someone reading this that believes "More flamethrowers = Less crime".

Comment: Re:OMG america is stupid (Score 1, Insightful) 168

by PCM2 (#49361967) Attached to: Commercial Flamethrower Successfully Crowdfunded

If ever there was a weapon that would be classified as only a weapon of terror with no practical application beyond fear.

Well, fear and burning people to death so they're no longer a threat. Not very efficient, but effective.

And I guess the "practical applications" of your guns, if they don't involve fear, involve gunning people down, right? Don't bother with scaring them off, just kill them.

Between you and me, it seems like the practical application of creating fear is working just great on you, quick-draw.

Comment: Re:N4N? (Score 2, Informative) 309

tech how?

It's not, but Friday night is #GamerGate and MRA night on Slashdot, when 8chan empties out and all the manbabies meet here to cry about how the feminazis are taking away their games and comics and action figures.

Look back a few months. It happens every Friday. There is a story about gender or sexual orientation or something that can be construed as violating the natural order of the primacy of white men. Then, the tears start to flow and it all ends in the gators and the MRAs in one big group hug.

It's harmless, really. If it keeps them off the streets, I'm all for them having their own neckbeard hugbox.

Comment: Re:Wrong target (Score 2) 56

by Just Some Guy (#49358493) Attached to: Google Loses Ruling In Safari Tracking Case

The target should be Apple not Google.

That's a stupendous way to end software development overnight. Yes, Apple had a bug. All software has bugs. They clearly intended for a different outcome and surely never expected Google to actively attack it.

Of the two, Apple made a mistake but acted with good intentions (at least on the surface, but there's no point going full tinfoil because then there's no point having a conversation about it). Google acted maliciously, and if someone's going to be held accountable for this then it should be them.

In before "lol fanboy": I would say exactly the opposite if, say, iCloud.com exploited a bug (not a feature: a bug) in Chrome to do the same thing. In this specific case, Apple seems to have acted honorably and Google dishonorably.

Comment: Re:Morality Framework UNNEEDED (Score 1) 176

by DumbSwede (#49349635) Attached to: German Auto Firms Face Roadblock In Testing Driverless Car Software

Ahhh, but you are looking at the one situation in isolation. The moral thing to do is for everyone to hand over driving to the machines, as that will save the greatest number of lives in the long run. By being unwilling to hand the decision to a machine, you are choosing to kill more humans on average in practice, just so you can exercise moral judgment in some outlier case. If self-driving cars were only as good as us at driving, or even just a little better, I might side with you, but they will likely be orders of magnitude better.

BTW I meant “former” not “latter” in my first post.

Comment: Morality Framework UNNEEDED (Score 1) 176

by DumbSwede (#49348401) Attached to: German Auto Firms Face Roadblock In Testing Driverless Car Software

Why this obsession with moral reasoning on the part of the car? If self-driving cars are in 10x fewer accidents than human-driven cars, why the requirement to act morally in the few accidents they do have? And it isn't as if the morality is completely missing; it is implicit in not trying to hit objects, be they human or otherwise. Sure, try to detect which objects are human and avoid them at great cost, but deciding which human to hit in highly unlikely situations seems unneeded, and perhaps even unethical in a fashion.

As it is now, who gets hit in these unlikely scenarios is random, akin to an act of God. Once you start programming in morality, you're open to criticism of why you chose the priorities you did. Selfishly, I would have my car hit the pedestrian instead of another car, if hitting the other car were more likely to kill me. No need to ascertain the number of occupants in the other car. Instinctively this is what we humans do already: try not to hit anything, but save ourselves as a first priority. In the few near misses I've had, I never found myself counting the occupants of the other car as I made my driving decisions.
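
A back-of-the-envelope sketch of that expected-harm argument, with made-up numbers: the 10x reduction is the premise above, and the fraction of accidents that are genuine "choose who to hit" dilemmas is a placeholder assumption, not a real statistic.

    # Illustrative numbers only: the 10x reduction is the comment's premise;
    # dilemma_fraction is a made-up placeholder for how often a genuine
    # "choose who to hit" situation actually occurs.
    human_rate = 1.00          # normalized fatal-accident rate, human drivers
    robot_rate = 0.10          # 10x fewer accidents for self-driving cars
    dilemma_fraction = 0.001   # share of robot accidents that are true dilemmas

    # Even if the car resolved every dilemma in the worst possible way
    # (say, doubling the harm in those cases), the total barely moves:
    worst_case = (robot_rate * (1 - dilemma_fraction)
                  + robot_rate * dilemma_fraction * 2)
    print(human_rate, robot_rate, round(worst_case, 4))  # 1.0 0.1 0.1001

However the rare dilemmas are resolved, the total is dominated by the accident-rate reduction, which is the point being made.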

Comment: Re:You should title this "Patriot act to be repeal (Score 1) 185

by PopeRatzo (#49341795) Attached to: New Bill Would Repeal Patriot Act

I don't honestly see Jeb as having that much a chance, at least not if it were done today.

He's winning the money primary, which is the only one that really matters.

I also think there are a lot of folks in the US that just do not want another dynasty name in there, no more Clintons or Bushes.

Well, there's the problem, isn't it? It just doesn't matter what folks in the US think when it comes to US elections. The decisions are always made for us long before election day.

Comment: Re:python and java (Score 1) 482

by Just Some Guy (#49338871) Attached to: No, It's Not Always Quicker To Do Things In Memory

Python's string library isn't remotely what I'd call "overweight", but its strings are immutable. Some algorithms that are quick in other languages are slow in Python, and some operations that are risky in other languages (like using strings for hash keys) are trivial (and threadsafe) in Python. But regardless of the language involved, it's always a good idea to have a bare minimum of knowledge about it before you do something completely stupid.
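
To make both points concrete, here's a minimal sketch (loop sizes are arbitrary, and CPython sometimes special-cases += on strings, so treat the quadratic cost as the general expectation for immutable strings):

    # Because Python strings are immutable, "slow += str(i)" builds a
    # brand-new string each pass: roughly O(n^2) overall, where a language
    # with mutable string buffers would be O(n).
    slow = ""
    for i in range(10_000):
        slow += str(i)

    # The idiomatic fix: accumulate pieces in a list and join once. O(n).
    parts = []
    for i in range(10_000):
        parts.append(str(i))
    fast = "".join(parts)
    assert fast == slow

    # The flip side of immutability: a string key can never change out from
    # under the dict using it, which is what makes string keys trivially safe.
    counts = {}
    for word in "the quick brown fox jumps over the lazy dog the".split():
        counts[word] = counts.get(word, 0) + 1
    print(counts["the"])  # -> 3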

Comment: No anger, just thought exercises (Score 1) 129

by DumbSwede (#49338055) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

I'm not angry, far from it. This is a fun and thought-provoking thread. I hope I haven't ruffled your feathers. My last post was a little dark. I am merely suggesting that we must look past mankind's interests as the final arbiter of what is best in the universe. Perhaps what comes after us will be a better world, even if we have a diminished (if any) place in it.

If robots become truly sentient (and not mere automatons), then what we can ethically do to/with them becomes questionable. Likely there will be castes of robots: those that are self-aware, who should be considered full citizens, and those that (from their inception) are not self-aware, who can be treated as automatons without ethical dilemma. Likely the self-aware robots will employ non-self-aware robots to do their bidding as well.

If mankind wishes to stay in control and maintain the moral high ground, then we probably should not incorporate self-awareness into AI (if we would only then treat them as slaves). Of course, failing to create self-aware intellects may itself be morally questionable if we have the power to do so.

I'm not sure what to make of the golden retriever comment. Was it moral to breed dogs that look to us as their masters? It is a thought worth considering. Or will we be the golden retrievers to our new robot overlords? We have a pet dog and it seems a good bargain for him and us. Certainly he would not be able to make his way in the world without us, so our demands on him are probably a fair exchange.
