
Comment Re:Hobbit (Score 1) 278

There's still a big killer lurking out in space that can't be easily avoided: radiation.

Except underground, which is the obvious solution but people are too fixated on making housing above the ground.

Except the article was talking about getting killed by the radiation exposure during the trip.

Presumably you aren't suggesting flying to Mars in a hobbit-hole. (Though if you could sneak a couple of tokes on Gandalf's pipe you might experience a good simulation.)

Comment Re:Comedy gold (Score 2) 445

4300 years ago...

I guess the Sixth Dynasty of Old Kingdom Egypt didn't notice they got washed away, and went on building their pyramids like nothing had happened.

And Sargon must have clung to the side of the ark - or snuck on disguised as a dinosaur - so he could get back to building his empire as soon as the ground dried out.

I reckon the author is better at manipulating reality than he is at manipulating search results.

Comment Re:It has always been that way (Score 1) 444

From what I've read, the lack of respect for negative results ties into both the leadership responsible for study funding and the less informed people outside the scientific community who often approve the funding.

The person in charge of a larger scientific entity may have even more invested in the "right" conclusion in terms of their leadership potential and may not want to fund or advance studies which could threaten their larger position on the issue.

And people outside the scientific community may wrongly see negative outcomes as "failed" science -- why, look, you couldn't even prove your theory. As you point out, this is mistaken, but I think these people view it a bit like a failed business venture: if Joe Scientist's theory is disproven, he must be an incompetent idiot and we should disown him because clearly he's going down the wrong path.

Comment Re:RTFM..? (Score 1) 106

Do SAN vendors intentionally mix production runs of drives when they ship them?

I would kind of expect them to, and it might explain why I've never seen a group of drives bought at the same time (installed in a server or SAN) fail as a group.

Although I would expect some logistical challenges if I were a SAN vendor trying to keep inventories of multiple production runs in stock for populating new SANs, especially when a single unit can ship with as many as 24 drives. Keeping a half-dozen unique batches on site for populating systems, sure, but 24? I would think some SANs would have to go out with multiple drives from the same production run, and the logistics only get more complicated with mismatched supply/demand/production curves.
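
To make the inventory problem concrete, here's a toy Python sketch (entirely my own invention, not anything a real vendor does) that round-robins a new unit's drive slots across whatever production batches happen to be in stock. With only three batches on hand, a 24-drive chassis still ends up with eight drives from each run:

    from collections import deque

    def allocate_drives(inventory, drives_needed):
        """Toy allocator: spread a new unit's drive slots across as many
        production batches as the current inventory allows.

        inventory: dict mapping batch_id -> number of drives on hand
        drives_needed: slots to fill in the new SAN (e.g. 24)
        Returns a list of batch_ids, one per slot.
        """
        if sum(inventory.values()) < drives_needed:
            raise ValueError("not enough drives in stock")

        # Round-robin over batches so no single production run dominates the chassis.
        stock = deque((batch, count) for batch, count in inventory.items() if count > 0)
        allocation = []
        while len(allocation) < drives_needed:
            batch, count = stock.popleft()
            allocation.append(batch)
            if count > 1:
                stock.append((batch, count - 1))
        return allocation

    # Three hypothetical batches, 24 slots: each run contributes 8 drives.
    print(allocate_drives({"batchA": 10, "batchB": 10, "batchC": 10}, 24))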

Comment Re:It has always been that way (Score 2) 444

I think there are two other interrelated things that contribute to this.

"Big Science" these days, especially in healthcare, often involves long-term, expensive studies that take years to perform. People who commit to this mode of science make a commitment not just to the field, but often to the hypothesis being tested.

Getting the study funded basically requires betting your career on the validity, or at least the likely validity, of the hypothesis.

So, if I've bought into the hypothesis that dietary cholesterol influences serum cholesterol, and it takes 10 years to design, fund, and run the study, what happens to my career when the results turn out negative? I've invested a good chunk of it basically being wrong.

And I think a fair number of the people involved in these big theories aren't just scientifically interested in them; they are invested in them in terms of scientific reputation, since they more or less have to be to get them funded. They often become advocates for the theory before it's proven, and if the study doesn't sustain it, there's the risk of looking foolish for having been wrong.

So between personal reputations, career commitment, and the size of the science involved, people have a big personal stake in seeing their hypothesis validated.

Comment Re:E-mail client? (Score 2) 85

I see this at two clients with POS systems. They don't handle any cash or credit card transactions (everything is billed to internal accounts), but they still want to use some of the terminals for productivity software: the POS systems are underutilized as POS systems, and they lack the space for additional productivity PCs and don't want to spend money on them anyway.

I advised against it on principle, but as a matter of practicality, since they're not handling real money or credit card information, the risk is a lot lower.

Comment Pay phones! (Score 3, Interesting) 69

In the late 1970s in junior high we would ride the bus and get off at random stops and write down pay phone numbers. Then when we got home we would call the numbers and do all sorts of gags.

The one that inexplicably worked well was telling people that they had won money from a radio station. Why they believed an 8th grader sounded like a disk jockey is still beyond me.

It's almost sad that kids today can't get that experience. There are very few pay phones left, and I bet none of them accept incoming calls. It was also pretty safe from a getting-in-trouble perspective: call logging and tracing would have been a huge endeavor, and we never called any one pay phone more than a few times or suggested anything violent or even all that ribald.

Comment Re:it's not "slow and calculated torture" (Score 1) 743

They don't pay it off now by printing money because other people keep buying the debt.

The dollar represents about two-thirds of the world's reserve currency holdings. The rest of the reserve currencies combined aren't enough to replace it.

Hyperinflation probably isn't a guaranteed outcome anyway; I would wager that political pressure not to manipulate monetary and fiscal policy that much is a bigger reason.

Comment Re:Need to understand it before it exists (Score 1) 421

One other interesting takeaway for me was the range of what it might mean to be a superintelligence. The author being interviewed said there are various dimensions to superintelligence, such as speed of processing, complexity of processing, size of "memory" or available database of information, and concurrency (the ability to process independent events simultaneously).

Not all superintelligences may have all of these qualitative dimensions maximized, either, which can be part of the problem: we may fail to recognize that one has been created, and fail to see its potential, because it doesn't seem omniscient.

I think it's also interesting how we default to science fiction ideas like Terminator or other "machines run amok" scenarios where the outcome is physical violence against humans.

Some of the outcomes could be more subtle, and some of the biases could be built in by humans rather than part of some kind of warped machine volition or intuition.

One everyday example might be the advanced software behind bank finance, linking program trading, risk and portfolio analysis, markets, etc. The amount of information big banks have to process daily is massive, and while humans make the important decisions, they rely heavily on machine analytics, suggested actions, and modeled outcomes to make them.

The system may make money, but is it biased only toward firm profit, or could it have other, unintended capital effects? Is it possible that, even though each big bank has its own unique system, these systems share so much data (prices, market activity, known holdings by others, common risk models, etc.) that a feedback loop could form among them that actually drives markets? Could this unintentional "network" of like systems be something like a superintelligence?
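
To make the feedback idea concrete, here's a toy Python sketch (purely my own invention, nothing like a real trading system): a handful of independent "banks" each trade off the same shared momentum signal, and their collective order flow ends up steering the very prices the signal is computed from.

    import random

    def shared_signal(price_history):
        """Toy momentum signal every "bank" in this sketch happens to use:
        buy pressure when recent prices trend up, sell pressure when down."""
        return price_history[-1] - price_history[-5]

    def simulate(num_banks=5, steps=200, seed=1):
        random.seed(seed)
        prices = [100.0] * 5
        for _ in range(steps):
            signal = shared_signal(prices)
            # Each bank decides independently, but on the same shared data,
            # so their orders tend to point the same way.
            net_order_flow = sum(
                (1 if signal > 0 else -1) * random.uniform(0.5, 1.5)
                for _ in range(num_banks)
            )
            noise = random.gauss(0, 0.5)
            prices.append(prices[-1] + 0.05 * net_order_flow + noise)
        return prices

    prices = simulate()
    # The drift from start to end comes mostly from the shared feedback, not the noise.
    print(f"start={prices[5]:.2f} end={prices[-1]:.2f}")

No single "bank" in the sketch intends to move the market, but because they all key off the same data, the aggregate behaves like one system with its own bias.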

One question I sometimes ask myself -- what if wealth inequality weren't a conspiracy of some kind (by the rich, the politicians, a combination, etc.) but instead something of a "defect" in this higher order of financial-system intelligence? Or maybe not even a defect, but a kind of designed-in bias in the systems' base instructions (e.g., make the bank profitable) that results in financial outcomes which tend to make the rich richer? What if the natural outcome of markets were greater wealth equality, but because they are heavily influenced by a primitive machine intelligence we get inequality? How could we know this isn't true?

I think these are the more interesting challenges of machine superintelligence, because they grow out of the things we already rely on current (and limited) machine intelligence to do for us. Will we even recognize when these systems get it wrong, and how will we know?
