
Comment: Re:It has always been that way (Score 1) 359

by swb (#49777439) Attached to: Can Bad Scientific Practice Be Fixed?

From what I've read, the lack of respect for negative results ties into both the leadership responsible for study funding and the less informed people outside the scientific community who often approve that funding.

The person in charge of a larger scientific entity may have even more invested in the "right" conclusion in terms of their leadership potential and may not want to fund or advance studies which could threaten their larger position on the issue.

And people from outside the scientific community may wrongly see negative outcomes as "failed" science -- why look, you couldn't even prove your theory. As you point out, this is wrong, but I think these people look at it kind of like a failed business venture: if Joe Scientist's theory is disproven, he must be an incompetent idiot and we should disown him, because clearly he's going down the wrong path.

Comment: Re:RTFM..? (Score 1) 106

by swb (#49775869) Attached to: No, Your SSD Won't Quickly Lose Data While Powered Down

Do SAN vendors intentionally mix production runs of drives when they ship them?

I would kind of expect them to, and it might explain why I've never seen a group of drives bought at the same time (installed in a server or SAN) fail as a group.

Although I would expect some logistical challenges for a SAN vendor trying to keep inventory from multiple production runs in stock for populating new systems, especially when a single unit can ship with as many as 24 drives. Keeping a half-dozen unique batches on site for populating systems, sure, but 24? I would think some SANs would have to go out with drives from the same production run, and the logistics only get more complicated with mismatched supply, demand, and production curves.
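
As a toy illustration of why mixing runs matters, here's a minimal Monte Carlo sketch (pure Python; every rate is invented for illustration, not vendor data). The model assumes failures from a defective run cluster in time, so only same-run failures land in the same rebuild window:

```python
import random

P_BAD_BATCH = 0.02   # chance a production run has a latent defect (invented)
P_KILL      = 0.90   # chance the defect kills a given drive from that run
PARITY      = 2      # RAID-6: survives 2 concurrent failures

def p_catastrophe(n_drives, n_batches, trials=50_000):
    """P(some single run kills more drives at once than parity covers)."""
    per_batch = n_drives // n_batches
    hits = 0
    for _ in range(trials):
        for _ in range(n_batches):
            if random.random() < P_BAD_BATCH:
                killed = sum(random.random() < P_KILL for _ in range(per_batch))
                if killed > PARITY:
                    hits += 1
                    break
    return hits / trials

print("24 drives from 1 run  :", p_catastrophe(24, 1))
print("24 drives from 6 runs :", p_catastrophe(24, 6))
print("24 drives from 12 runs:", p_catastrophe(24, 12))
```

Interestingly, in this toy model partial mixing can even look worse (more runs means more chances to include a bad one); the win only shows up once no single run contributes more drives than the parity can absorb.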

Comment: Re:It has always been that way (Score 2) 359

by swb (#49774725) Attached to: Can Bad Scientific Practice Be Fixed?

I think there are two other interrelated things that contribute to this.

"Big Science" these days, especially in healthcare, often involves long-term, expensive studies which take years to perform. People who commit to this mode of science make both a commitment to the field, but often to the hypothesis being tested.

Getting the study funded basically requires betting your career on the validity, or at least the likely validity, of the hypothesis.

So if I've bought into the hypothesis that dietary cholesterol influences serum cholesterol, and it takes 10 years to design, fund, and run the study, what happens to my career when the results turn out negative? I've invested a good chunk of it in basically being wrong.

And I think a fair number of the people involved in these big theories aren't just scientifically interested in them; they're invested in them in terms of scientific reputation, since they kind of have to be to get them funded. They often become advocates for the theory before it's proven, and if the study doesn't sustain it, they risk looking foolish for having been wrong.

So between personal reputation, career commitment, and the size of the science involved, people have a lot at stake in seeing their hypothesis validated.

Comment: Re:Isn't the phrase "kicked upstairs"? (Score 2) 142

by swb (#49774359) Attached to: Apple Design Guru Jony Ive Named Chief Design Officer

I think it's more the Peter Principle -- people get promoted for success in their current position and stop getting promoted once they become ineffective.

I think the last "kick upstairs" is reserved for employees who are no longer effective but are too loyal or valuable to have working elsewhere.

Comment: Re:E-mail client? (Score 2) 81

by swb (#49773837) Attached to: Attackers Use Email Spam To Infect Point-of-Sale Terminals

I see this at two clients with POS systems. They don't handle any cash or credit card transactions -- everything is billed to internal accounts -- but they still want to use some of the terminals for productivity software: the POS systems are underutilized as POS systems, and the clients lack the space for additional productivity PCs and don't want to spend money on them anyway.

In my advisory role I opposed it on principle, but as a matter of practicality, since they're not handling real money or credit card information, the risk is a lot lower.

Comment: Pay phones! (Score 3, Interesting) 69

In the late 1970s, in junior high, we would ride the bus, get off at random stops, and write down pay phone numbers. Then when we got home we would call the numbers and pull all sorts of gags.

The one that inexplicably worked well was telling people they had won money from a radio station. Why they believed an 8th grader sounded like a disc jockey is still beyond me.

It's almost kind of sad that kids today can't have that experience. There are very few pay phones left, and I bet none of them accept incoming calls. It was also pretty safe from a getting-in-trouble perspective: call logging and tracing would have been a huge endeavor, and we never called any one pay phone more than a few times or suggested anything violent or even all that ribald.

Comment: Re:it's not "slow and calculated torture" (Score 1) 727

by swb (#49770745) Attached to: Greece Is Running Out of Money, Cannot Make June IMF Repayment

They don't pay it off now by printing money because other people keep buying the debt.

The dollar represents about two-thirds of the world's reserve currency holdings. The rest of the reserve currencies combined aren't enough to replace it.

Hyperinflation probably isn't a guaranteed outcome in any case; I would wager political pressure not to manipulate monetary and fiscal policy that much is the bigger reason.

Comment: Re:Need to understand it before it exists (Score 1) 405

by swb (#49770709) Attached to: What AI Experts Think About the Existential Risk of AI

One other interesting takeaway for me was the range of what it might mean to be a superintelligence. The author being interviewed said there are various dimensions to superintelligence, such as speed of processing, complexity of processing, size of "memory" or available database of information, and concurrency (the ability to process independent events simultaneously).

Not all superintelligences will have every one of these dimensions maximized, either, which is part of the problem of recognizing when one has been created: we may fail to see its potential because it doesn't seem omniscient.

I think it's also interesting how we default to science fiction ideas like Terminator and other "machines run amok" scenarios where the outcome is physical violence against humans.

Some of the outcomes could be more subtle, and some of the biases could be built in by humans rather than the product of some kind of warped machine volition or intuition.

An everyday example might be the advanced software that runs bank finances, linking program trading, risk and portfolio analysis, markets, and so on. The amount of information big banks have to process on a daily basis is massive, and while humans make the important decisions, they rely heavily on machine analytics and suggested actions (and modeled outcomes) to make them.

The system may make money, but is it biased only toward firm profit, or could it have other, unintended capital effects? Each big bank may have its own unique system, but because all these systems share a lot of data (prices, market activity, known holdings of others, common risk models, etc.), could they form a feedback loop among themselves that actually drives markets? Could this unintentional "network" of similar systems be something like a superintelligence?
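
To make that feedback-loop idea concrete, here's a toy sketch (invented parameters; not a model of any real trading system): several "firms" all trade on the same shared signal -- the last aggregate move -- and their combined order flow becomes the next move. Coupling them through the shared signal amplifies swings:

```python
import random

def simulate(herding, n_firms=10, steps=1000, seed=1):
    """Toy market: each firm trades on the same shared signal (the last
    return) plus its own private noise; aggregate order flow sets the
    next return. `herding` is the weight on the shared signal."""
    rng = random.Random(seed)
    last_return, log_price, lo, hi = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        flow = sum(herding * last_return + rng.gauss(0, 0.01)
                   for _ in range(n_firms)) / n_firms
        last_return = flow              # today's flow becomes tomorrow's signal
        log_price += last_return
        lo, hi = min(lo, log_price), max(hi, log_price)
    return hi - lo                      # range of log-price: crude volatility

print("uncoupled firms    :", round(simulate(herding=0.0), 3))
print("shared-signal firms:", round(simulate(herding=0.9), 3))
```

With herding at 0.9, the same noise gets amplified roughly tenfold -- not because any single system misbehaves, but because they all lean on the same signal.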

One question I sometimes ask myself: what if wealth inequality wasn't a conspiracy of some kind (by the rich, the politicians, a combination, etc.) but instead something of a "defect" in this higher order of financial system intelligence? Or maybe not even a defect, but a kind of designed-in bias in the system's base instructions (i.e., make the bank profitable) that results in financial outcomes which tend to make the rich richer? What if the natural outcome of markets was greater wealth equality, but because they are heavily influenced by a primitive machine intelligence we get inequality? How could we know this isn't true?

I think these are the more interesting challenges of machine superintelligence, because they grow out of the things we already rely on current (and limited) machine intelligence to do for us. Will we even recognize when these systems get it wrong, and how will we know?

Comment: Re:it's not "slow and calculated torture" (Score 2) 727

by swb (#49767507) Attached to: Greece Is Running Out of Money, Cannot Make June IMF Repayment

Because US debt is denominated in US dollars, we could pay it off tomorrow with a spreadsheet entry at the Federal Reserve creating $16.39 trillion by fiat. There may even be some non-inflationary gimmick that could be employed: pay off existing debt out of normal government revenue and sell treasuries only to the Fed, which would never resell them.

And because the dollar is the dominant world reserve currency (around 65%), Congress could just vote to nullify a huge portion of that debt tomorrow. There's no short- or even medium-term replacement for the dollar as a trading currency, so not only would it suck for the nations holding that debt, the world would have to keep using the dollar or go broke.

The latter is the existential threat to the Chinese economy. With the stroke of a pen, the Chinese could see $1.2 trillion wiped off their balance sheet.

When you both control the printing of your money and issue debt in that same money, anything is possible.

Comment: Need to understand it before it exists (Score 1) 405

by swb (#49767427) Attached to: What AI Experts Think About the Existential Risk of AI

I listened to a podcast (EconTalk, which is about as sober as podcasts get) that interviewed an AI "worrier," and he acknowledged that our current technology can't produce a superintelligence. But he made a couple of interesting points which I think are worth reasonable discussion, even if they aren't the "ZOMG, PANIC" kind of talk you imply.

One, discussing machine superintelligence before it actually develops is almost necessary, because once it DOES exist it may be difficult to control: by definition, a superintelligence will be smarter than we are and capable of manipulating at a level of complexity we can't grasp.

And we've already created single-purpose "intelligences" along these lines, like the old Internet worm or certain computer viruses: they lack general-purpose intelligence, but their self-replicating behavior can be difficult to contain. Imagine a smart hypervisor designed to manage a computing cluster, but with the ability to replicate and migrate nodes across cloud computing infrastructure. Couple it with cyber defense technology, encryption, etc., and give it the single-minded purpose of "don't shut down." At a not-so-far-off level of intelligence, it's not hard to see it self-migrating across cloud platforms, resisting shutdown, perhaps even hiding in private cloud platforms, all while escaping detection and control.
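
As a purely abstract sketch of that "don't shut down" loop (nothing here touches a real system -- the "hosts" are just set entries, and every detail is hypothetical): a process that keeps spare copies alive is never caught by one-at-a-time shutdowns:

```python
import random

random.seed(4)
hosts = {f"cloud-{i}" for i in range(8)}   # simulated hosts
copies = {"cloud-0"}                       # where the process currently runs

for step in range(50):
    # Survival rule: always keep at least two copies on distinct hosts.
    while len(copies) < 2:
        copies.add(random.choice(sorted(hosts - copies)))
    victim = random.choice(sorted(hosts))  # operator shuts one host down
    copies.discard(victim)                 # any copy there dies with it
    if not copies:
        print(f"eradicated at step {step}")
        break
else:
    print("survived 50 shutdowns; running on:", sorted(copies))
```

Sequential shutdowns can never win here; you'd have to take every host down simultaneously, which is the containment problem in miniature.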

Which brings up the other point -- we don't know what machine superintelligence will look like. Part of the problem is that we don't know how far we are from creating one, because as humans we imagine superintelligence in anthropomorphic terms, using human epistemologies. It doesn't have to be the anthropomorphic HAL 9000; it could be a hypervisor manager, a securities trading system, or some other single-purpose automation system that contains a feedback loop between a series of "dumb" systems and a control plane. We may create it by accident, and even if it's not perfect, there are realms where it wouldn't have to run amok for long to cause large problems, even if the outcome wasn't "judgment day."

+ - Universe's dark ages may not be invisible after all

Submitted by StartsWithABang
StartsWithABang writes: The Universe had two periods where light was abundant, separated by the cosmic dark ages. The first came at the moment of the hot Big Bang, as the Universe was flooded with (among the matter, antimatter, and everything else imaginable) a sea of high-energy photons, including a large amount of visible light. As the Universe expanded and cooled, the cosmic microwave background was eventually emitted, leaving behind the barely visible, cooling photons. It took between 50 and 100 million years for the first stars to turn on, so in between these two epochs of the Universe being flooded with light, we had the dark ages. Yet the dark ages may not be totally invisible, as the forbidden spin-flip transition of hydrogen may illuminate this time period after all.
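
For context, that spin-flip transition is the famous 21 cm line of neutral hydrogen. A quick back-of-the-envelope (standard constants; the redshift is just an illustrative pick for the dark ages) shows why observing it means going to very low radio frequencies:

```python
C = 299_792_458            # speed of light, m/s
NU_REST_MHZ = 1420.405751  # hydrogen spin-flip rest frequency, MHz

wavelength_cm = C / (NU_REST_MHZ * 1e6) * 100
print(f"rest wavelength: {wavelength_cm:.1f} cm")             # ~21.1 cm

# Dark-ages light is strongly redshifted; e.g. at z = 50 the line
# arrives at a much lower frequency:
z = 50
print(f"observed at z={z}: {NU_REST_MHZ / (1 + z):.1f} MHz")  # ~27.9 MHz
```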

Comment: IoT -- more gadgets, less intelligence? (Score 2) 225

by swb (#49762803) Attached to: Google Developing 'Brillo' OS For Internet of Things

Some devices, like the Nest thermostat, seem to add intelligence to things we already use, but others just seem to add gadgets without actually making things more intelligent.

Where are my outlets with an integrated, network-accessible power meter? Or the smart electrical panel that lets me assign circuit priorities and acceptable power source types, so that when I run off a Tesla PowerWall I get maximum utility from the power? Or even a main power meter that lets me see my electrical utilization in real time?
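
For what it's worth, the circuit-priority idea is simple enough to sketch (hypothetical data model and numbers; no real smart-panel API is implied): given the watts a battery can deliver, energize circuits in priority order and shed the rest.

```python
from dataclasses import dataclass

@dataclass
class Circuit:
    name: str
    priority: int   # 1 = most critical
    watts: int      # typical draw

def plan_shedding(circuits, available_watts):
    """Greedy load shed: energize circuits in priority order until the
    power budget (e.g., what a battery can deliver) runs out."""
    powered, used = [], 0
    for c in sorted(circuits, key=lambda c: c.priority):
        if used + c.watts <= available_watts:
            powered.append(c.name)
            used += c.watts
    return powered, used

panel = [
    Circuit("fridge",      1,  600),
    Circuit("furnace fan", 1,  800),
    Circuit("lights",      2,  300),
    Circuit("office",      3,  500),
    Circuit("dryer",       4, 5000),
]

# A Tesla PowerWall delivers roughly 5 kW continuous.
powered, used = plan_shedding(panel, available_watts=5000)
print(f"energized: {powered} ({used} W of 5000 W)")
```

With the invented numbers above, the panel keeps the fridge, furnace fan, lights, and office running (2200 W) and sheds the dryer.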

So much of the IoT seems to be about adding new gadgets of limited utility while ignoring the rest of the house, which stays dumb.
