Comment Re:Administrators (Score 1) 538
Notice that anon used the phrase "shouldn't be", not "isn't".
Mozilla wants an 'open Web'. Making an open source browser is a big part of that.
Protecting users from mass surveillance is another. Crippling third-party systems by default is a big part of that.
Unfortunately that kills some existing services, like unified commenting systems, which users want. Someone *could* come along with a unified commenting system which doesn't conduct mass surveillance, but that's an unlikely business model at the moment. Hence Mozilla's solving the chicken-and-egg problem themselves, by making a unified commenting system which (presumably) doesn't do mass surveillance.
If this works, it will go a long way towards making the third-party-crippling an effective default. Hence the Web becomes more 'open'.
Serves her right.
Two wrongs don't make a right.
These people are doing the same things that were the very basis of oppression of any and all freedoms on German soil in these two regimes. It is like these cretins _want_ that state of affairs back.
They want that level of power, but since it's *them* this time, they'll only use it for "good" (ie. what *they* want).
Of course, they neglect to realise that's exactly what the Nazis thought.
The reality is that, since the beginning of times, governing a people has required spying on that same people.
The government needs spies as it needs assassins and torturers and all kinds of evil agents. If the people keep pushing to reveal the truth, the result won't be the disappearance of evil agents but the removal of the pink veil.
At some point, if the kid insists enough, the parent's patience ends and he replies "because I say so, now shut up."
At "the beginning of times" governments used targeted spying. They couldn't tap intercontinental fibre-optic communication cables, run the output through face-recognition algorithms and automatically build huge databases of everyone's correspondence.
As an analogy, I accept that police and handcuffs are necessary evils. What I don't accept is that we may as well have everyone wear electromagnetic bracelets, which police can remotely switch into a pair of handcuffs.
But if I had a headset strapped on, I'd rather be in an immersive world like OpenCroquet/Cobalt/Qwak[1] (which support VNC for accessing "legacy" applications) than a white space surrounded by floating rectangles.
[1] https://code.google.com/p/open... https://virtual.wf/ http://en.wikipedia.org/wiki/C...
Of course it would be pretty awesome to be able to colonize Mars, but we're not there yet, and putting a human being there without a real reason to do so is wasteful and a safety risk.
You're right that there needs to be a 'real reason', but we can say the same thing about, say, Australia. Why do we make so many wasteful and potentially dangerous trips there every day? Because there is a thriving colony of humans there.
It's a bootstrapping problem. Visiting/emigrating to a Martian colony would be a 'real reason' to go to Mars; so that's what we need to build.
I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".
Nobody's assuming; scientists are asking a question.
I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.
I've not come across the phrase "jumping to the possibility" before. If I 'jump' to giving this a possibility of 2%, is that a 'stretch'?
If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks. Put more bluntly: 'Does the human brain have similar built-in errors?'
And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?
Maybe the problem isn't with the biology, but the technology?
Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)
You're just repeating the question asked in the summary.
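For what it's worth, the 'built-in errors' being asked about can be demonstrated without a deep network at all. Below is a minimal, self-contained sketch of the idea (a fast-gradient-sign-style perturbation) using a single-neuron logistic-regression classifier on made-up Gaussian toy data; the model, data, and epsilon are all illustrative stand-ins, not anything from the article:

```python
import numpy as np

# Toy data: two 400-dimensional Gaussian blobs with slightly offset means.
rng = np.random.default_rng(0)
n, d = 200, 400
X0 = rng.normal(-0.1, 1.0, size=(n, d))   # class 0
X1 = rng.normal(+0.1, 1.0, size=(n, d))   # class 1
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression (a one-neuron "network") by gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Adversarial step: nudge every coordinate of a correctly classified input
# by a small epsilon in the direction that increases the loss (sign of w).
# Each per-coordinate change is tiny next to the feature noise (std = 1),
# but across 400 coordinates the changes add up and flip the prediction.
eps = 0.2
p0 = sigmoid(X0 @ w + b)
correct = X0[p0 < 0.5]                    # class-0 points the model gets right
X_adv = correct + eps * np.sign(w)
flip_rate = np.mean(sigmoid(X_adv @ w + b) > 0.5)
print(f"fooled the model on {flip_rate:.0%} of correctly classified inputs")
```

The point of the sketch: the 'error' isn't a coding bug, it's a geometric property of high-dimensional linear-ish decision boundaries, which is exactly why the question of whether biological networks share it is worth asking rather than dismissing.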
"Monitoring" is an awfully loose term. Could this, for instance, apply to such things as the persistent port scanning (e.g. "monitoring" which ports a user has open on a given IP) and thus have implications for operations like Shodan HQ, or even the periodic scans of the entire Internet done by the likes of H.D. Moore and other companies or universities conducting research?
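For anyone unsure what "monitoring which ports a user has open" means mechanically, it reduces to a TCP connect() probe. A hedged sketch (the function name is made up; the demo only probes a socket we open ourselves, since scanning hosts you don't own may be unwelcome or illegal):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connect() to host:port succeeds -- the basic
    probe behind the kind of scanning described above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against ourselves: listen on an ephemeral port, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free one
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))   # the port we just opened -> True
listener.close()
print(port_open("127.0.0.1", port))   # after closing -> False
```

Services like Shodan just run this probe (plus banner grabs) against the whole IPv4 space on a schedule, which is why the legal status of "monitoring" matters so much here.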
Research is conducted based on the data available. If stronger protocols reduce the amount of available data, research will continue with that reduced amount of data.
If some research specifically requires more data, that's OK. That's called 'performing an experiment', and there are numerous procedures which can be followed to do this. One thing they all have in common is that if they involve people, like Internet monitoring does, then it must pass an ethics board and gain consent from all of the subjects involved.
If that were the case today, there wouldn't be all of this mess playing out.
In other words, a God-like observer with perfect knowledge of the brain would not consider it non-computable. But for humans, with their imperfect knowledge of the universe, it is effectively non-computable.
What they're saying is that there are limits, beyond undecidability, when a human mind tries to study itself. It's an algorithmic analogy to the classic data-storage problem of trying to imagine, using your mind, the whole contents of your mind. Via recursion, that can't be done. Likewise, TFA is saying that we can't use our minds to compute some things about our minds, even though an outside observer with perfect knowledge of our mind could do so.
The reference to PCs is hence entirely wrong. What they're saying is that if a PC worked like our brain, it would be limited in its introspection ability compared to, for example, a hypervisor on which it's running.
Because guns don't kill people. People with guns kill people.
http://en.wikipedia.org/wiki/L...
The groupings that emerge when ordered by homicides per 100,000 are interesting. The most dangerous seem to be quasi-dictatorial republics in the Americas. Unsurprisingly this includes the USA.
Some people prefer the high-quality version and are willing to pay extra, others are unwilling to pay extra, or have poor vision and think the low-quality version is good enough.
Others think that the quality of a movie cannot be measured in pixels.
I remember one of my Computer Architecture lecturers lamenting the end of the punchcard era.
Gone are the days of being able to see how hard a PhD student is working by counting the boxes of punchcards in their office.
Gone are the days when sending code to be compiled meant everyone could go to the pub.
I don't want colleagues or (future) employers to know
Then why put it right there in your username?
"I've seen it. It's rubbish." -- Marvin the Paranoid Android