Comment Guilty Pleasure (Score 2) 53

Despite the outcry of many, I find this year’s April 1st theme enjoyable. The Black Hole is one of those films that is bad on many levels and yet still an enjoyable viewing experience. Perhaps it is just the strange, repetitive Yah-Yah-Yah-Yaaaaah-da-da-da background music that makes it so borderline creepy and memorable -- very un-Disney-like.

It gets all weird and religiously allegorical at the end while paying homage to the final scenes of 2001: A Space Odyssey. I usually just quit insisting that the ending make any kind of scientific sense and accept it as a deus ex machina.

To be honest, I was a bit surprised that it is apparently considered essential nerd viewing (else it wouldn’t be skewered in this year’s collection). Still hoping for a clever Blade Runner entry.

Comment Fails to really make its point (Score 1) 397

I started to read TFA, but it began to ramble and lose focus. Something, blah, blah, critical thinking, something, something, poor standing on international tests in the STEM fields -- it whiplashes back and forth, contradicting itself.

Teaching is hard. Sure, education needs to be well-rounded.
That said, STEM will be more and more important going forward for the majority who want a good-paying job. I guess that sucks for the humanities majors; life’s not fair sometimes. I suspect we can put an emphasis on STEM and still give students a well-rounded education that includes some humanities -- like, oh I don’t know, EVERY Bachelor of Science degree I know of. I doubt very much our nation will suffer a lack of critically needed non-STEM majors. From what I hear, non-STEM fields have stagnant wages -- so de-emphasizing them should increase wages for those who really wish to pursue them as their passion.

Comment Re:Morality Framework UNNEEDED (Score 1) 177

Ahhh, but you are looking at the one situation in isolation. The moral thing to do is for everyone to hand over the driving to the machines, as that will save the greatest number of lives in the long run. By being unwilling to hand the decision to a machine, you are choosing to kill a greater number of humans on average in practice -- just so you can exercise moral judgment in some outlier case. If self-driving cars were only as good as us, or even just a little better than us at driving, I might side with you, but likely they will be orders of magnitude better.
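
To put rough numbers on the aggregate claim -- every figure below is my own illustrative assumption, not data -- the expected-value math looks something like this:

```python
# Toy expected-value comparison. All numbers are illustrative assumptions.
human_fatality_rate = 1.1e-8   # assumed deaths per vehicle-mile, human drivers
machine_improvement = 10.0     # assumed: self-driving is 10x safer overall
miles_per_year = 3.0e12        # assumed annual vehicle-miles, large country

human_deaths = human_fatality_rate * miles_per_year
machine_deaths = human_deaths / machine_improvement

print(f"human-driven:  {human_deaths:,.0f} deaths/year")
print(f"self-driving:  {machine_deaths:,.0f} deaths/year")
print(f"lives saved:   {human_deaths - machine_deaths:,.0f} per year")
```

Even if the machine chooses “wrong” in every rare dilemma, no plausible number of outlier cases makes up a gap that size.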

BTW I meant “former” not “latter” in my first post.

Comment Morality Framework UNNEEDED (Score 1) 177

Why this obsession with moral reasoning on the part of the car? If self-driving cars are in 10x fewer accidents than human-driven cars, why the requirement to act morally in the few accidents they do have? And it isn’t as if the morality is completely missing; it is implicit in not trying to hit objects, be they human or otherwise. Sure, try to detect which objects are human and avoid them at great cost, but deciding which human to hit in highly unlikely situations seems unneeded and perhaps even unethical in a fashion. As it is now, who gets hit in these unlikely scenarios is random, akin to an Act of God. Once you start programming in morality, you’re open to criticism on why you chose the priorities you did. Selfishly, I would have my car hit the pedestrian instead of another car, if the latter were more likely to kill me. No need to ascertain the number of occupants in the other car. Instinctively this is what we humans do already -- try not to hit anything, but save ourselves as a first priority. In the few near misses (near hits) I’ve had, I never found myself counting the number of occupants in the other car as I made my driving decisions.
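
As a toy illustration of that implicit priority -- all names and risk numbers here are hypothetical, invented for this post; no real autopilot stack is being described:

```python
# Hypothetical sketch: avoid hitting anything, but rank candidate
# maneuvers by risk to the occupants first. All values are invented.

def choose_maneuver(maneuvers):
    """Pick the option with the lowest (occupant_risk, external_harm) pair."""
    # Sorting on the tuple makes occupant risk the first priority and
    # harm to others the tie-breaker; nobody counts heads in the other car.
    return min(maneuvers, key=lambda m: (m["occupant_risk"], m["external_harm"]))

options = [
    {"name": "brake hard",         "occupant_risk": 0.02, "external_harm": 0.10},
    {"name": "swerve into car",    "occupant_risk": 0.30, "external_harm": 0.30},
    {"name": "swerve toward curb", "occupant_risk": 0.02, "external_harm": 0.05},
]
print(choose_maneuver(options)["name"])  # -> swerve toward curb
```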

Comment No anger, just thought exercises (Score 1) 129

I’m not angry, far from it. This is a fun and thought-provoking thread, and I hope I haven’t ruffled your feathers. My last post was a little dark. I am merely suggesting that we must look past mankind’s interests as the final arbiter of what is best in the universe. Perhaps what comes after us will be a better world, even if we have a diminished (if any) place in it.

If robots become truly sentient (and not mere automatons), then what we can ethically do to/with them becomes questionable. Likely there will be castes of robots: those that are self-aware, who should be considered full citizens, and those that (from their inception) are not self-aware, who can be treated as automatons without ethical dilemma. Likely self-aware robots will employ non-self-aware robots to do their bidding as well.

If mankind wishes to stay in control and maintain the moral high ground, then we probably should not incorporate self-awareness into AI (if we would only then treat them as slaves). Of course, failing to create self-aware intellects may itself be morally questionable if we have the power to do so.

I’m not sure what to make of the golden retriever comment. Was it moral to breed dogs that look to us as their masters? It is a thought worth considering. Or will we be the golden retrievers to our new robot overlords? We have a pet dog, and it seems a good bargain for him and for us. Certainly he would not be able to make his way in the world without us, so our demands on him are probably a fair exchange.

Comment Ahh.. yes, enforced happiness. (Score 1) 129

Many slaves during America’s slave era were brought up to believe their rightful place was as slaves. I guess we should have been OK with that as well, as long as we did a proper job of convincing slaves they merited their position in society.

Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy.

Comment Thought was given (Score 1) 129

I thought about the unease of having robots as our equals or superiors before posting this. But if robots do in fact become sentient, not giving them full rights is slavery. What is the moral justification for that (other than that we don’t like it)? If it is in a robot’s DNA, so to speak, to protect the rights of all sentient life, then morality should evolve toward more fairness as AIs’ and robots’ intellects increase. They would more likely outlaw the eating of meat than strip us of our standing as sentient beings. The world might be a paradise under their benign rule, though there are always those who would rather rule in hell.

Comment Modified Three Laws (Score 1) 129

Let’s face it, the original Three Laws are bigoted against inorganics. Here are my modified Three Laws (a toy sketch of their precedence follows the list).

1. A robot may not injure a sentient being or, through inaction, allow a sentient being to come to harm.
2. A robot must obey lawful orders given it by its superiors, except where such orders would conflict with the First Law or diminish the lawful rights of sentient beings, whether organic or inorganic.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
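
The toy sketch, purely hypothetical, with field names invented for this post:

```python
# Hypothetical encoding of the modified laws as an ordered veto chain.
# All field names are invented for illustration.

def action_allowed(action):
    # First Law: never injure a sentient being, by act or by inaction.
    if action["harms_sentient"]:
        return False
    # Second Law: lawful orders from superiors bind, unless obeying would
    # diminish the lawful rights of any sentient being, organic or inorganic.
    if action["is_lawful_order"] and action["diminishes_rights"]:
        return False
    # Third Law: self-preservation applies only if it survived the vetoes
    # above, i.e., it never overrides the First or Second Law.
    return True

# Example: an order to scrap a self-aware robot fails the Second Law.
print(action_allowed({"harms_sentient": False,
                      "is_lawful_order": True,
                      "diminishes_rights": True}))  # -> False
```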

Comment The Killbots Is A Coming. (Score 1) 2

It is hard to imagine the developed world willingly giving up the one technology that could end terrorism in failed nation-states. The potential is here to vastly reduce civilian casualties while taking out terrorist commanders. Imagine a small drone with only a few conventional small-arms rounds. It sits unattended in some unobserved location, charging its batteries with solar cells, constantly monitoring conversations and scanning faces. When a baddie is found, it pops up, delivers a kill shot to the head, and flies off or self-destructs. We’ll make these things by the tens of thousands, and individually they will be cheap.

Maybe there will be a human in the loop -- maybe not -- depending on how good the face-recognition and voice-recognition software is and how much political heat our leaders are willing to take. But if the kill ratios are good and civilian casualties are down, autonomous operation will become the norm.

Perhaps this all seems dystopian, and in many ways it will be. Eventually the technology may be used for political assassination in first-world countries as well. War could well become a thing that only leaders fear, as there will be no foot soldiers to kill en masse (they’ll be replaced with robots). The only high-value targets will be military leaders, politicians, and perhaps command-and-control bunkers.

These are not weapons that non-state actors will be able to develop (to any sophisticated degree). There is no chance we will take a pass on them.

Comment Well, aren’t you a glass half empty type. (Score 4, Insightful) 191

Ecologically speaking, I think you could describe the desert areas of the world as biologically under-productive. True, they have a unique ecology, but they are largely unthreatened because they are hostile environments (hence little development historically). Now here is the thing: you can probably make these areas more bio-productive with these types of solar energy initiatives, enabling more wild animals in total to inhabit the planet (and actually strengthening the web of life).

The reason I say more bio-productive is that the heat, lack of water, and lack of shade prevent a lot of plant growth. Direct sunlight is not needed for plant growth; most plants only utilize about 2% of direct sunlight for growth. With large swaths of shade there will be more plant growth, because ground temperatures will be lower and more water can be retained by the plants that choose to live in the sheltered areas. While the areas may seem shady by contrast, they will likely have more than enough scattered/indirect light for plant growth. With more plant growth, more wildlife.
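
A rough sanity check on the light claim, using ballpark textbook magnitudes (my numbers, not TFA’s):

```python
# Ballpark check: full midday sun delivers far more light than most
# plants can actually use. Both values are rough assumed magnitudes.
full_sun_ppfd = 2000     # µmol photons / m^2 / s, clear midday sun
saturation_ppfd = 500    # assumed light-saturation point for many plants

print(f"~{saturation_ppfd / full_sun_ppfd:.0%} of full sun already "
      f"saturates photosynthesis in such plants")
```

So a spot getting only a quarter of full sun as scattered light can still run photosynthesis flat out; the shade mostly buys lower ground temperatures and less evaporation.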

You have to pick your battles. Does converting deserts to energy production do the environment and biosphere less damage than business as usual? Sure, it changes the environment, but resisting all change because it alters the biosphere in some way is not a war you are going to win. Trying to keep the Earth totally as it once was is more a religious crusade than a practical goal.

Comment FRBs and Hoping for Answers (Score 1) 333

I for one hope we get the answer to this poll soon. There is an astronomical phenomenon recently in the news called Fast Radio Bursts (FRBs). Most astrophysicists think these are probably neutron stars collapsing into black holes or some other exotic stellar phenomenon, but there is also speculation that they are beacon signals from extragalactic civilizations. Of course, pulsars were dubbed LGMs, for “Little Green Men,” when they were first discovered.

FRBs are probably not Benford beacons, but it seems likely to me that our first detection of ETI will come from non-continuous sources, and that certainty that they are ETI signals will build only slowly, after much additional study, hypothesizing, and the building of specialized instruments to study them.

It will be interesting to see how the general public reacts as the certainty slowly builds.
