
Comment: Re:Not pointless... (Score 1) 284

by TapeCutter (#49771811) Attached to: D.C. Police Detonate Man's 'Suspicious' Pressure Cooker

It's not a crime to have your car parked somewhere if you have a suspended/revoked license

I don't see anyone claiming that it is a crime. What I do see is a lot of slashdotters ignoring the fact that the location of the parking spot aroused legitimate suspicion from the police; likewise, a pressure cooker in that location will legitimately raise their eyebrows even further.

This is how it operated in London and Paris when the IRA were being bastards. Sure, with 20/20 hindsight, an unlicensed dickhead with a dead car is not a perfect outcome, but it's a much better outcome than a false negative.

Comment: Re:Photo? (Score 1) 86

Someone with a Facebook page?

Well, people used to joke that cancer was killing the internet; now it appears that people are promoting it. It's okay for some, I guess; there's always the possibility of mutant powers from it.

Where do you think they should have been posted?

Oh I don't know, how about the tour operator's website?

Comment: Re:More than PR (Score 1) 362

by khallow (#49771727) Attached to: What Was the Effect of Rand Paul's 10-Hour "Filibuster"?

Since they were long dead, most definitely not.

Except, of course, for the ones who were still alive. The US had quite an interesting mix of immigrants from Russia when the Wall came down. Meeting some of the brilliant mathematicians who emigrated from Russia and the Eastern Bloc at that time helped solidify my resolve to get an advanced degree in math, but not to become an academician.

Comment: Re:Germany should pay war reparations for WWII (Score 1) 621

If the economic modelling you use does not have a recession as one of its possible states, it is not a model of the economy.

If your model of how individual agents interact is not consistent with the rules of double-entry bookkeeping, you do not have a model of the economy.

If your model of a firm's profits doesn't line up with empirical evidence... you get the idea.

When you impose that model on an actual economy and it fails to follow your expectations, it isn't the real world that is at fault.
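To make the double-entry point concrete, here is a minimal sketch in Python (the sector names and amounts are purely illustrative assumptions): every flow must debit one account and credit another, so however the agents interact, the balances across all sectors sum to zero. A model whose transactions cannot be written this way is creating or destroying money unaccounted.

from collections import defaultdict

def post(ledger, debit, credit, amount):
    """Record one flow: the payer's outflow is the payee's inflow."""
    ledger[debit] -= amount
    ledger[credit] += amount

ledger = defaultdict(float)
post(ledger, "households", "firms", 90.0)       # consumption spending
post(ledger, "firms", "households", 80.0)       # wages
post(ledger, "firms", "banks", 5.0)             # interest on loans
post(ledger, "government", "households", 15.0)  # net transfers (a deficit)

# Stock-flow consistency: no flow appears from nowhere or vanishes,
# so the sector balances always cancel out exactly.
assert abs(sum(ledger.values())) < 1e-9
print(dict(ledger))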

Paraphrasing Hyman Minsky: the natural instability of capitalism is upwards. When firms take small risks and they pay off, they learn to take bigger and bigger risks. Bankers have an incentive to fund larger and larger risks. Asset prices climb. It becomes profitable to speculate on assets without having the income to cover your interest payments, until the debt level peaks and the whole process works in reverse. A boom becomes a slump. There's a period of pain, when bankers and firms reduce their willingness to take risks. Then the economy recovers, firms take small risks, they pay off...

But everyone starts the next cycle still carrying some of their debts from the previous one. If there's high inflation, who cares? You can easily pay off your debts with your increased income. But when the mountain of debt in the system gets too large, inflation is impossible.

Once inflation turns to deflation, the cycle is broken. What starts as a period of tranquility, a "Great Moderation" if you will, suddenly turns into a crisis. Debtors go bankrupt, money is destroyed. Distressed sellers discover the market is much smaller than they thought it was. Even low-risk projects fail as the economy suddenly shrinks.
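That boom-and-bust dynamic is easy to caricature in code. The following is a toy simulation in Python, a minimal sketch rather than a calibrated model; every parameter, threshold, and starting value is an illustrative assumption. Risk appetite compounds while debt service stays manageable; once interest payments cross a threshold, the whole process runs in reverse, and the next cycle starts from a higher debt load than the last.

def minsky_cycle(periods=300, rate=0.05):
    """Toy boom-bust dynamic: success breeds risk until debt service bites."""
    income, debt, appetite = 100.0, 20.0, 0.02
    history = []
    for t in range(periods):
        service = rate * debt
        if service < 0.15 * income:
            # Tranquil times: bets pay off, so firms and bankers leverage up.
            debt += appetite * income
            income *= 1.01 + 0.2 * appetite  # borrowed investment boosts growth
            appetite *= 1.10                 # each payoff breeds bigger risks
        else:
            # The Minsky moment: the whole process works in reverse.
            appetite = 0.01                  # willingness to take risks collapses
            debt *= 0.70                     # defaults destroy debt (and money)
            income *= 0.95                   # distressed sellers shrink the market
        history.append((t, income, debt, debt / income))
    return history

for t, income, debt, ratio in minsky_cycle()[::25]:
    print(f"t={t:3d}  income={income:12.1f}  debt={debt:12.1f}  debt/income={ratio:4.2f}")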

But if the government sector taxes more in the good times and runs a deficit during a slump, it can dampen the cycle and lessen the pain of the inevitable crash. But do you really think the people will allow high taxation during a long period with very little trouble?

Do you really think the government caused this? A government run by economists who haven't learnt the right lessons from history; economists who misunderstand and ignore the role of money and credit; economists who codified their model of a perfect economy into law, a model which has nothing to do with how the real economy actually works.

Comment: Re:Already there (Score 1) 373

by khallow (#49771657) Attached to: What AI Experts Think About the Existential Risk of AI
Excellent. That's exactly the message I wanted you to receive. You should be bothered by it, just like I was when durrr asserted, without justification, the argument I quoted. My argument is the minimum rebuttal needed to deflate that assertion. It's not convincing or substantial because it doesn't need to be.

The reason I keep saying these things (and most likely will continue to say them) is because so frequently, we project our hopes and beliefs without even minimum justification for them. I understand why and I occasionally get caught doing it as well, but wishful thinking is harmful thinking.

I think in the long run, AI will be one of the most challenging and dangerous things we ever do. It also has the potential for being one of the most noble things we ever do.

Perhaps, this is just a Western bias, but I think it's not enough that I have a place in humanity, but that I should strive to improve myself as I see fit even to the moment of my death - not just for my own benefit but for those around me. I believe AI could be a crucial stepping stone to new ways for humanity to improve itself.

Comment: Karma is a bitch (Score 1) 284

by TapeCutter (#49771627) Attached to: D.C. Police Detonate Man's 'Suspicious' Pressure Cooker
Unattended, and records show the (absent) owner has no license, so it could be stolen. Pressure cooker: improvised bombs and pressure cookers go together like ham and cheese; the Boston Marathon was a recent example.

The cops did their job and rightly erred on the side of caution. The only "injustice" is that the guy will not be compensated for the damage to his car; neither the cops nor his insurance company is liable. OTOH, he had no license; his car should have been parked at his home, where it would have aroused far less official suspicion.

Comment: Re:Well... (Score 1) 373

by khallow (#49771533) Attached to: What AI Experts Think About the Existential Risk of AI

Do the world a favour: kill yourself now. Get it over with. No need to wait for the mythical AI. While you're at it, kill all your offspring. You wouldn't want them to suffer the future of your diseased mind, would you?

How about instead of being a dumbshit, you read what I wrote and think a bit? I didn't say that AI would be bad, I merely deflated some ridiculous expectations. For example, it's ridiculous to assume that AI won't have certain broad motivations because those motivations have human cooties.

Consider our origins. After all, we are descended from a sea of animals, a billion or more years deep, whose highest thought, for the ones who could think, was getting the next meal or breeding. To go from that to an animal capable of making something smarter than itself, and of speculating on what that smarter thing will be like, is astounding, and it indicates a fundamental change in our thought and behavior beyond our less developed ancestors. We aren't just smart animals; something else is going on.

That intellectual chasm between what we were and what we are leads me to believe that a lot of high-level human behavior, thought, and motivation which we consider "anthropogenic" is rather intelligence-, sentience-, or sapience-based. And we should expect to see some manifestation of many of these behaviors, thoughts, and motivations in our AIs, minus the human cooties, of course.

Comment: Re:Anthropomorphizing (Score 1) 373

by khallow (#49771409) Attached to: What AI Experts Think About the Existential Risk of AI

It's a non sequitur - we're talking about hypotheticals which feature entirely different physical structures, or similar physical structures composed of physically distinct sets of atoms, not single spatiotemporally connected sets of atoms. We are talking about instance identity (the "same" mind), not categorization.

Of course, it's not a non sequitur. We already know that the human brain changes substantially and structurally over time (and that we can change it further by meddling). Similarly, experiences and connections with other people radically change the human mind. Meanwhile there is considerable flow of atoms in and out of the brain just due to normal biological processes. I believe the mind and brain are just an example of the Ship of Theseus (a mythical ship which was supposedly kept sea-worthy over many centuries by replacing it piece by piece so that at some point, it no longer had any piece of the original ship in it).

The brain and mind change; hence the example is relevant, especially in a thread on humanity's future capabilities in AI, to how far we can push that ability to change in order to improve the current versions of intelligence.

Also, it's worth noting that if one is to speculate about future human or AI capabilities or traits, it is very natural and useful to speak of hypothetical situations, not because they are likely to occur, but because they illuminate possible general concepts, outcomes, or problems. Sure, this particular hypothetical might be unlikely to occur, but I believe sooner or later we will be speaking of actual transformations of the human brain and mind rather than hypothetical ones. And I believe such transformations may become quite radical. So it is interesting to consider just how much you can change the brain without changing the mind it implements.

Moving on, "instance identity" is a categorization by you. In fact, categorization is by definition a coarse identification, which, when applied to instances or representations of some abstract thing, becomes by definition an instance identity. Sure, normally we think of identity as the minimum unit of distinguishability. But we can distinguish bodies, brains, and minds even over the course of minutes. By reading this post, you have a different brain and mind than you did before you read it (should I apologize for that?).

You are begging the question, by simply assuming that human mental processes are exactly representable in entirely different physical structures.

Which is not a serious problem here. After all, we already have a working instance of human mental processes, the human brain, with no obvious dependence on what materials the underlying machinery is composed of. It's like claiming that a car won't drive if we make it out of aluminum instead of steel, or its wheels of wood instead of rubber. Sure, if a ridiculous amount of failed effort has been put into the problem of changing the structure of the brain and mind by some very distant future date, then maybe you're right. But I don't think that will happen (especially given how easy it is to change the human mind now with education and experience).

Rather, I think the real difficulty will be that the human body, due to its evolved nature, is extremely difficult to reverse engineer, and that a key direction of effort will be refactoring the structure of the body and mind along somewhat more manageable lines.

Comment: Re:Funny, that spin... (Score 1) 373

by khallow (#49771159) Attached to: What AI Experts Think About the Existential Risk of AI

Morality cannot be defined as a list of dos and don'ts that are mechanically obeyed, precisely because it has a myriad of "edge cases" that require human interpretation.

Then why do you do that for the Three Laws example? Note that Asimov got around that problem by having the robots and their makers interpret those edge cases, with the whole rules situation getting more flexible over time. It's also worth noting that the Three Laws never resulted in a grave situation for humanity (rather, considerable effort had to be undertaken to circumvent those rules in order to generate most of the existential threats posed by robots). The rules worked for most of the large-scale problems that they were partially intended to address.

The worst problem implied to be directly attached to the Three Laws was the notable absence of intelligent alien species. I believe it was implied at several points in Asimov's later books that extremely advanced robots, once they had decided to leave humanity to its own devices, had some very exotic capabilities to retroactively and non-violently shape the past of the galaxy so that intelligent alien rivals never evolved. The reason was rather simple: those potential alien species would not have been recognized as human and hence would not have had the protections of the Three Laws applied to them. And any such intelligence would be deemed a serious long-term threat by the robots.

Also notice that when the zeroth law was added, it just made matters worse, because more laws allow for more contradictions, loopholes, and paradoxes, exactly like the evolved tax code of any nation you care to name.

The zeroth law wasn't added; it was implied by the other three laws. And as I recall, it actually simplified the situation, since it allowed the robots to act to reduce the long-term harm caused by their interactions with humanity.

Ultimately, the human-robot relationship was deemed a failure by the robots, not because of some failure of the Three Laws or their application, but rather because the prevalence of robots (and having them do everything) was harmful to humanity in the long run. In that case, the Three Laws provided impetus for robots to stop the harm they were causing to humans.

The treachery of science fiction is that things wouldn't necessarily go that way. You are typically presented with a contrived situation which may be not only impractical, but physically impossible to set up in real life. We don't know if it really would be possible to create rules such as the Three Laws which are that difficult to circumvent and yet flexible enough to last something like ten to twenty thousand years.

"You're a creature of the night, Michael. Wait'll Mom hears about this." -- from the movie "The Lost Boys"

Working...