
Comment: Re:Warrants are supposed to be narrow (Score 4, Interesting) 150

As in this case, they'd have to get as narrow a warrant as possible, specifying that they're searching for the weapon and not, say, evidence of tax fraud. Of course, if they found readily-visible evidence of such fraud during the course of the authorized search, they are not required to ignore it.

This is the one spot where I trip up a little, and have to wonder whether we need slightly different protocols. How does the plain-sight doctrine work for digital media? In effect, every single email is equally 'visible'. The analogy to a house in the real world breaks down utterly here, and it's not clear how to fix that. What happens if they find emails with evidence of tax fraud? Were those emails in plain sight? I agree that the scope is entirely reasonable for the necessary search – but I have to wonder how they would handle evidence of something they had no justification to search for. Law enforcement's penchant for parallel construction (aka perjury) suggests they are very likely to use this information. Perhaps if it were understood that any future evidence brought for a different crime – evidence the defendant could demonstrate was present within the email trawl – was assumed to be poisoned fruit, then we might have some better assurances. But there are some pretty obvious flaws with that kind of approach too...

Comment: Effect Size (Score 1) 219

Anyone concerned by this needs to read the PNAS paper and brush up on their statistics. We're looking at a maximum Cohen's d of 0.02, with a beta on the order of 0.1%. And this is with an astronomical sample size. In other words, using an enormous trove of data, they were just able to detect that an effect probably exists – but that effect is absolutely tiny. Get a grip, people.
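To put a d of 0.02 in perspective, here's a back-of-the-envelope power calculation (a Python sketch; the group means and the normal-approximation formula are illustrative, not taken from the paper) showing roughly how many subjects per group a two-sample test needs just to detect an effect that small:

```python
def cohens_d(mean1, mean2, sd_pooled):
    """Standardized mean difference between two groups."""
    return (mean1 - mean2) / sd_pooled

# Illustrative values: a shift of 0.02 pooled standard deviations.
d = cohens_d(100.02, 100.00, 1.0)

# Normal-approximation sample size for a two-sample t-test:
# n per group ~ 2 * ((z_alpha + z_beta) / d)^2, at alpha = .05, power = .80
z_alpha, z_beta = 1.96, 0.84
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(round(n_per_group))  # ~39,200 subjects per group for an 80% chance of detection
```

That's the sense in which "astronomical sample size" is doing all the work here: the effect is only detectable because the N dwarfs anything a normal study could muster.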

Comment: Re:Remarkable achievement? (Score 1) 44

by perceptual.cyclotron (#47284975) Attached to: First Movie of an Entire Brain's Neuronal Activity
Well, not quite. But almost 70% of the head ganglia! ... So probably closer to 180 or so neurons total. Colour me impressed insofar as the methodology is concerned – but if anyone thinks these cute little movies are going to help them more than the existing full wiring diagrams and some high-quality patch-clamp data, they're delusional. Get me a Drosophila or a mouse and we can talk.

Comment: Re:This is how America ceases to be great (Score 1) 133

Sorry. Our founding fathers were mostly a bunch of bad asses with really good ideas that they were willing to fight for, and the country that came out of that fight was great.

It's the money grubbing assholes who are fucking it up now by claiming that money = free speech and corporations are people. That means that the ultra rich have at least 10,000 times as much speech as most of us, and that there are a lot of people that have no voice at all. Money should not be equivalent to free speech. Never. It's a fucking travesty that it is, and the people who made it so are destroying our country. I'm not being hyperbolic here.

Except that most of the founding fathers were wealthy, mercantile types. You may recall that at the outset, most states had minimum property requirements to be eligible to vote. It took quite some time to achieve even universal white male suffrage. And for every increase in suffrage (removing property restrictions, racial restrictions, gender restrictions), and for every increase in the rights of the common class over the monied class (e.g., collective bargaining, the minimum wage, abolishing child labour, the 14-, 12-, 10-, and 8-hour working days, etc.), the legislature (at all levels) sided against democracy and against economic equality at every turn. Indeed, even the cherished Bill of Rights was seen by the majority of the founding fathers as superfluous pandering. It was only when they realized they could never ratify otherwise that they wrote it (hence its provisions being amendments).

The view of freedom-loving founding fathers and a great dawning of liberty and freedom in the Americas is, quite simply, a myth. A lovely idea – but historically baseless...

Comment: Re:I think this is bullshit (Score 1) 1746

by perceptual.cyclotron (#46654157) Attached to: Brendan Eich Steps Down As Mozilla CEO
His continued employment was a liability to the company's continued profits. This isn't about rights – it's about market forces. The consumer base reacted poorly to Mozilla's decision to place him in leadership. Their market share was threatened, so they responded. He didn't lose his right to keep his job – he became unqualified for it. A CEO's job is to make money for the company. It turns out that scaring off your customers isn't a good way to do that. Go figure.

Comment: Re:Sure (Score 1) 208

Obama is the Chief Executive.

If he really wanted to stop this shit, he could issue an executive order stopping this shit. Congress never passed a law requiring the NSA to collect this data; Obama could stop this shit RIGHT NOW if he wanted to.

Don't get me wrong – Obama's a worthless, malevolent scumbag like every other US president since the founding of the nation (and just about every world leader since the founding of 'nation', for that matter) – but going against the national security state didn't work out too well for Kennedy. POTUS knows damn well where his authority isn't welcome.

Comment: Re:Or, perhaps the test is not 100% selective (Score 1) 241

by perceptual.cyclotron (#45510341) Attached to: The Neuroscientist Who Discovered He Was a Psychopath

Of course, maybe he's only faking being disturbed by it to promote his career.

This, without a doubt. "Man develops his own abstruse criteria to classify psychopathy. By these criteria, he finds that he is a psychopath, despite not seeming like most psychopaths. Instead of questioning his tests and conclusions, he writes a book."
I'll allow that he's a narcissistic asshole – but there isn't much reason to think he's a psychopath.
Nothing to see here.

Comment: Re:Only partly joking... (Score 5, Insightful) 519

Mainly because the US is imperialist, and its material wealth is directly tied to its coercive abilities inside and outside of its borders. If the West's wealth weren't built on enduring theft and slavery, you might see a different configuration. China is only recently moving in that direction with its economic posturing in Africa and South America – and it's pretty evident that this is mainly reactionary. Given its age and its long history of contact with other nations, China has mostly sought empire only within its own borders, whereas the West has always taken a colonial, usurpatory approach.

And the idea of ROI is a mistaken understanding of US power. You paid for it – but the return was never meant for you. The bloated war-mongering US military machine returns day in and day out by threatening untold violence against any economic dissent and any obstruction to continued US exploitation of the world's people and resources. The people footing the bills aren't the people reaping the rewards, and they were never meant to be. But the interests that are being protected are being served very well indeed.

Comment: Re:terrorism! ha! (Score 1) 453

by perceptual.cyclotron (#45495229) Attached to: Imagining the Post-Antibiotic Future

Fact is, there is no scientific evidence that any antibiotic resistance is coming from giving antibiotics to cows.

There's plenty of evidence. Here's just a quick grab: a recent Nature news feature that reviews some of the literature. The data are accumulating, and they say exactly what you would expect: bacteria are notoriously indiscriminate in their hosts over enough generations, and they're more than happy to pass on the tricks they've learned, not only to their progeny but also via horizontal transfer to whoever or whatever else is nearby. No one's saying indiscriminately dousing farm animals is the only, or even the worst, vector for resistance. But it is one of them, and considering that the overwhelming majority of antibiotics used in the States are used on farms, it presents a significant risk – with very weak, and typically inexpert, motivation behind it – that does need to be curbed.

Comment: Re:Vegans need it (Score 1) 520

by perceptual.cyclotron (#45385745) Attached to: US FDA Moves To Ban Trans Fat

While I too am not a fan of government bans, I have to say that relying on consumer choice to get something as basic as nutrition right is beyond naive.

Particularly when there's enormous incentive for companies to do anything at all that will make people inclined to shove their "food" into themselves, and very little incentive to worry about nutrition. People don't buy food because it's nutritious – even specialists barely understand nutrition, so there's little chance in hell of the rest of us balancing all of these factors appropriately now that our food sources have been decoupled from a co-evolved ecosystem. People buy food because they like how it tastes and because the packaging is sexy. Food is easily one of the most critical domains for extensive regulation, and the regulation we have is entirely inadequate right now. Frankly, I'd be in full support of enormous fines and punishments for selling anything as "food" that doesn't meet dietary needs. You can still sell all the other crap, but it should be relabelled "non-toxic taste carriers" and shelved separately...

Comment: Re:Ha ha ha (Score 1) 465

A free enterprise system, as a complex adaptive system in itself, will always tend to converge toward the most 'efficient' or 'minimal error' surface in the ecosystem.

This is all well and good in pre-specified, full-information systems, where both the space of potentials and the utility surface are known. The real world isn't much like that. Given your CAS and Santa Fe leanings, you've surely read Kauffman. Adaptive systems, under real-world constraints, are generally poised-criticality systems – i.e., they balance centralized and parallelized processes. This is, perhaps, what you mean by needing both positive and negative feedbacks, but it's quite a stretch to call such a system 'capitalism'.

Modellers have the best of intentions, and often produce valuable insights on limit-case invariants, but there's a routine overreach when these conclusions are exported back to the real world from which the original, enormous simplifications were derived. Setting out to model 'capitalism' or 'socialism' is a noble effort – but it's a mistake to then say that your model is capitalism or socialism. It's merely an example of how a system might behave under the constraints you've chosen as emblematic of those systems. In the framework of policy discussions, this is a dangerous thing to do – because even if the limitations and technical details are known and clear to you, the modeller, the consumers of your work will falsely assume that your conclusions relate to 'capitalism' or 'socialism' in the world, which cannot be disentangled from politics, religion, failures (or adaptive heuristics, if you prefer) in human reasoning, chance resource inhomogeneities, noisy and incomplete information, inability to predict future innovations and uses (and consequently an increasing error in estimating the utility surface over longer time horizons), etc., etc.

That said, communism in most of its theoretical forms is definitely not a better idea – and indeed, numerous socialist theorists, even prior to the rise of the USSR, warned as much. Communism was correctly predicted by socialist thinkers to lead to a red 'bureaucracy' that would utterly fail to bring about any of the advantages of socialist and collectivist societies. However, I'd quibble on the language of 'instability'. If anything, the biggest flaw in centralized schemes is too much stability – i.e., a grinding stagnation even in the face of changing circumstances, and a piling on of compounding errors and inefficiencies. To be sure, this is a recipe for disaster, but it's not really instability so much as hyper-stability, or a 'crystallization' (in Kauffman's language) – a shrinking volume of the accessible state-space with a growing energy barrier, so that the system is doomed to simply break or short-circuit when the external forcings inevitably change too much...

Comment: Re:appearing to have free will (Score 1) 401

by perceptual.cyclotron (#45207867) Attached to: Physicist Unveils a 'Turing Test' For Free Will

The difference is that the AI can be exactly modeled, simply by making another copy of it. Given all the same inputs and the same data and initial conditions, a digital processing system comes to the same result every time. Humans are not digital processing systems, an identical copy of a human (or indeed any animal) cannot be made and the exact same combination of data and initial conditions cannot be produced.

I take some issue with this. You seem (in other replies) to only accept measuring/modelling techniques available to current humans when determining that an animal cannot be exactly modelled (i.e., if you can't show it to me, it's not relevant to the conversation), yet you are willing to speculate on an AI which does not itself yet exist.

To make the trivial case, why presume an AI will run on a digital computer?

The more nuanced case has been partially addressed in other comments, with reference to externalities impinging on the nominally 'digital' process (which is really a noisy thresholding on an analogue voltage). In one sense it is reasonable to say that these random events constitute 'input' and so must also be measured and controlled for – but this gets us into a very tricky situation regarding which system is really being analyzed. If the AI requires these additional sensors in order to replicate its behaviour, then in what sense are these sensors not part of the AI? The problem, clearly, regresses quickly.

Finally, the technology to perfectly measure, for instance, ambient thermal fluctuations across some boundary doesn't depend much on the contents of the volume whose surface is being measured (I believe a physicist would confirm that 'not much' is actually 'not at all'), so the inherent challenge of 'random' inputs in human brains is no less a challenge for similarly sensitive technological systems...
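For the trivial end of this, the quoted claim about digital determinism is easy to demonstrate: any function of a fixed seed and fixed inputs reproduces its output exactly on every run. A minimal Python sketch (the 'system' here is obviously a toy stand-in, not a model of an AI):

```python
import random

def run_system(seed, n_steps=5):
    """A toy 'digital process': identical seed and inputs yield identical outputs."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n_steps)]

copy_a = run_system(42)
copy_b = run_system(42)  # an exact 'copy' with the same initial conditions
assert copy_a == copy_b  # bit-for-bit identical, every run
```

The catch, per the above, is that the analogous 'seed' for a physical system includes thermal noise and other externalities that can't be captured and replayed – which is exactly where the regress starts.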
